Deep Reinforcement Learning-Based Optimization Framework with Continuous Action Space for LNG Liquefaction Processes
Citations
Web of Science: 1 | Scopus: 1

Abstract

The application of reinforcement learning in process systems engineering has recently attracted significant attention. However, the optimization of chemical processes using this approach faces various challenges related to performance and stability. This paper presents a process optimization framework using a continuous advantage actor-critic, modified from the existing advantage actor-critic algorithm by incorporating a normal distribution for action sampling in a continuous space. The proposed reinforcement learning-based optimization framework outperformed the conventional method in optimizing a single mixed refrigerant process with 10 variables, achieving a specific energy consumption of 0.294 kWh/kg compared to 0.307 kWh/kg obtained using a genetic algorithm. Parametric studies on the hyperparameters of the continuous advantage actor-critic algorithm, including the maximum number of episodes, the learning rate, the maximum action value, and the neural network structures, are presented to investigate their impact on optimization performance. The best specific energy consumption, 0.287 kWh/kg, was achieved by reducing the learning rate from the base-case value to 0.00005. These results demonstrate that reinforcement learning can be effectively applied to the optimization of chemical processes.
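The paper's implementation details are not reproduced in this record. As a rough illustration of the idea the abstract describes, replacing a discrete softmax policy with a normal distribution over a bounded continuous action, the sampling and log-probability steps of such a Gaussian policy might look like the sketch below (function names, the clipping scheme, and the parameter values are assumptions for illustration, not the authors' code):

```python
import math
import random

def sample_continuous_action(mean, std, max_action):
    """Sample an action from N(mean, std^2) and clip it to the
    allowed range [-max_action, max_action], as in a continuous
    advantage actor-critic with a bounded action space."""
    action = random.gauss(mean, std)
    return max(-max_action, min(max_action, action))

def gaussian_log_prob(action, mean, std):
    """Log-density of the action under N(mean, std^2); in actor-critic
    methods this term is weighted by the advantage estimate to form
    the policy-gradient loss."""
    var = std ** 2
    return (-((action - mean) ** 2) / (2.0 * var)
            - math.log(std)
            - 0.5 * math.log(2.0 * math.pi))
```

In a full framework, `mean` and `std` would be outputs of the actor network for the current process state, and the clipped action would set the decision variables (e.g., refrigerant flow rates and pressures) before evaluating the specific energy consumption as the reward.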

Keywords

Continuous advantage actor-critic; Reinforcement learning; Process optimization; Single mixed refrigerant process; Continuous action space
Title
Deep Reinforcement Learning-Based Optimization Framework with Continuous Action Space for LNG Liquefaction Processes
Authors
Lee, Jieun; Park, Kyungtae
DOI
10.1007/s11814-025-00428-x
Publication Date
2025-07
Type
Article
Journal
Korean Journal of Chemical Engineering
Volume 42, Issue 8
Pages
1613–1628