Browsing by Author "Erdman, Paolo A."
Now showing 1 - 2 of 2
Item: Identifying optimal cycles in quantum thermal machines with reinforcement-learning (Springer Nature, 2022)
Authors: Erdman, Paolo A.; Noé, Frank

Abstract: The optimal control of open quantum systems is a challenging task but plays a key role in improving existing quantum information processing technologies. We introduce a general framework based on reinforcement learning to discover optimal thermodynamic cycles that maximize the power of out-of-equilibrium quantum heat engines and refrigerators. We apply our method, based on the soft actor-critic algorithm, to three systems: a benchmark two-level-system heat engine, where we recover the optimal known cycle; an experimentally realistic refrigerator based on a superconducting qubit that generates coherence, where we find a non-intuitive control sequence that outperforms previous cycles proposed in the literature; and a heat engine based on a quantum harmonic oscillator, where we find a cycle with an elaborate structure that outperforms the optimized Otto cycle. We then evaluate the corresponding efficiency at maximum power.

Item: Pareto-optimal cycles for power, efficiency and fluctuations of quantum heat engines using reinforcement learning (American Physical Society, 2023)
Authors: Erdman, Paolo A.; Rolandi, Alberto; Abiuso, Paolo; Perarnau-Llobet, Martí; Noé, Frank

Abstract: The full optimization of a quantum heat engine requires operating at high power, high efficiency, and high stability (i.e., low power fluctuations). However, these three objectives cannot be simultaneously optimized, as indicated by the so-called thermodynamic uncertainty relations, and a systematic approach to finding optimal trade-offs between them that includes power fluctuations has so far been elusive. Here we propose such a general framework to identify Pareto-optimal cycles for driven quantum heat engines that trade off power, efficiency, and fluctuations. We then employ reinforcement learning to identify the Pareto front of a quantum-dot-based engine and find abrupt changes in the form of optimal cycles when switching between optimizing two and three objectives. We further derive analytical results in the fast- and slow-driving regimes that accurately describe different regions of the Pareto front.
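To illustrate the kind of setup the first item describes (soft actor-critic applied to a two-level-system heat engine), the following is a minimal sketch, not the paper's actual model. The TwoLevelEngineEnv class, its relaxation dynamics, reward definition, and all parameter values are illustrative assumptions; it assumes the gymnasium and stable-baselines3 packages, whose SAC implementation stands in for the paper's agent.

    import numpy as np
    import gymnasium as gym
    from gymnasium import spaces

    class TwoLevelEngineEnv(gym.Env):
        """Toy two-level-system heat engine: the agent drives the energy gap
        and picks which bath to couple to; reward is extracted work per step."""

        def __init__(self, dt=0.05, gamma=1.0, t_hot=2.0, t_cold=0.5, horizon=200):
            super().__init__()
            self.dt, self.gamma = dt, gamma
            self.t_hot, self.t_cold = t_hot, t_cold
            self.horizon = horizon
            # action: [change of energy gap, bath selector (sign picks hot/cold)]
            self.action_space = spaces.Box(low=-1.0, high=1.0, shape=(2,), dtype=np.float32)
            # observation: [excited-state population p, current energy gap u]
            self.observation_space = spaces.Box(low=np.array([0.0, 0.1]),
                                                high=np.array([1.0, 5.0]),
                                                dtype=np.float32)

        def reset(self, seed=None, options=None):
            super().reset(seed=seed)
            self.p, self.u, self.steps = 0.1, 1.0, 0
            return np.array([self.p, self.u], dtype=np.float32), {}

        def step(self, action):
            du = 0.2 * float(action[0])                          # gap change this step
            temp = self.t_hot if action[1] > 0 else self.t_cold  # chosen bath
            p_eq = 1.0 / (np.exp(self.u / temp) + 1.0)           # thermal population
            self.p += self.gamma * (p_eq - self.p) * self.dt     # relax toward bath
            self.u = float(np.clip(self.u + du, 0.1, 5.0))
            reward = -self.p * du                                # work extracted = -p du
            self.steps += 1
            obs = np.array([self.p, self.u], dtype=np.float32)
            return obs, reward, False, self.steps >= self.horizon, {}

    from stable_baselines3 import SAC

    env = TwoLevelEngineEnv()
    model = SAC("MlpPolicy", env, verbose=0)
    model.learn(total_timesteps=20_000)  # the learned policy approximates a cycle

The point of the sketch is structural: the thermodynamic cycle is never parametrized directly; it emerges as the closed-loop behavior of the trained policy, which is what lets the method discover non-intuitive cycle shapes.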
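For the second item, the key multi-objective idea can be shown without any physics. The sketch below, under the assumption that each candidate cycle has already been evaluated to a (power, efficiency, -fluctuations) triple with larger-is-better convention, filters out the non-dominated set; the scalarized_reward helper is a hypothetical stand-in for how a weight sweep steers an RL agent toward different points on the front.

    import numpy as np

    def pareto_front(points):
        """Return the subset of points not dominated by any other point.
        points: (n, 3) array of (power, efficiency, -fluctuations),
        every coordinate treated as larger-is-better."""
        pts = np.asarray(points, dtype=float)
        keep = []
        for i, p in enumerate(pts):
            # p is dominated if some q is >= p everywhere and > p somewhere
            dominated = np.any(np.all(pts >= p, axis=1) & np.any(pts > p, axis=1))
            if not dominated:
                keep.append(i)
        return pts[keep]

    def scalarized_reward(power, efficiency, fluctuations, weights):
        """Weighted trade-off used to target one point on the front;
        sweeping `weights` over the simplex traces out the full front."""
        a, b, c = weights
        return a * power + b * efficiency - c * fluctuations

    rng = np.random.default_rng(0)
    candidates = rng.uniform(0, 1, size=(500, 3))  # stand-in evaluated cycles
    front = pareto_front(candidates)
    print(f"{len(front)} of {len(candidates)} candidates are Pareto-optimal")

Dropping the fluctuation weight to zero reduces this to a two-objective trade-off, which mirrors the abstract's observation that optimal cycles can change abruptly when switching between optimizing two and three objectives.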