Keywords
new energy power system; stability control policy; reinforcement learning; deep deterministic policy gradient algorithm; Markov model
Abstract
The rapid development of the power system has been changing its structure, making the system stability mechanism more complex. To ensure power angle stability in the new energy power system, a policy generation method for power system stability control during emergent tripping of units based on deep reinforcement learning is proposed. First, the policies for emergent tripping of units in the power system are summarized, together with the security constraints involved. The power system stability control model is then transformed into a Markov decision process. Next, the most typical feature data are selected through feature evaluation combined with the Spearman rank correlation coefficient method. To improve the training efficiency of the intelligent agent for the stability control policy, a training framework based on the deep deterministic policy gradient (DDPG) is put forward. Finally, tests are performed on the IEEE 39-bus system and a real-life power grid for validation. The results show that the proposed method can automatically adjust and generate a stability control policy for tripping of units according to the system's running states and fault responses, demonstrating improved decision-making performance and efficiency.
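As a brief illustration of the feature-screening step described in the abstract, the sketch below shows how the Spearman rank correlation coefficient can flag strongly correlated (and hence redundant) candidate features. The function names and the synthetic feature vectors are illustrative assumptions, not the paper's implementation, which additionally uses a feature-evaluation stage.

```python
def _ranks(xs):
    """Return average ranks (1-based) of xs, resolving ties by averaging."""
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    ranks = [0.0] * len(xs)
    i = 0
    while i < len(xs):
        j = i
        # extend j over the run of tied values
        while j + 1 < len(xs) and xs[order[j + 1]] == xs[order[i]]:
            j += 1
        avg_rank = (i + j) / 2.0 + 1.0
        for k in range(i, j + 1):
            ranks[order[k]] = avg_rank
        i = j + 1
    return ranks

def spearman(x, y):
    """Spearman rank correlation: Pearson correlation of the rank vectors."""
    rx, ry = _ranks(x), _ranks(y)
    n = len(x)
    mx, my = sum(rx) / n, sum(ry) / n
    num = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    den = (sum((a - mx) ** 2 for a in rx) * sum((b - my) ** 2 for b in ry)) ** 0.5
    return num / den if den else 0.0

def screen_features(features, threshold=0.9):
    """Greedily keep features whose |rho| with every kept feature < threshold."""
    kept = []
    for name, values in features.items():
        if all(abs(spearman(values, features[k])) < threshold for k in kept):
            kept.append(name)
    return kept
```

For example, two monotonically related signals (such as a bus voltage magnitude sampled twice) give `spearman` close to ±1 and one of them would be dropped, while weakly related signals both survive the screen.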
DOI
10.19781/j.issn.1673-9140.2025.01.004
First Page
39
Last Page
46
Recommended Citation
GAO, Qin; XU, Guanghu; XIA, Shangxue; YANG, Huanhuan; ZHAO, Qingchun; and HUANG, He (2025) "Policy generation method for power system stability control during emergent tripping of unit based on deep reinforcement learning," Journal of Electric Power Science and Technology: Vol. 40: Iss. 1, Article 4.
DOI: 10.19781/j.issn.1673-9140.2025.01.004
Available at:
https://jepst.researchcommons.org/journal/vol40/iss1/4