Citation: PAN Xiaojie, HU Ze, YAO Wei, et al. Emergency load shedding decision-making using a branching dueling Q-network integrating grid topology information[J]. Power System Protection and Control, 2025, 53(8): 71-80.
DOI:10.19783/j.cnki.pspc.240501 |
Received: 2024-04-25; revised: 2024-05-28
Funding: This work was supported by the National Natural Science Foundation of China (U22B20111) and the Science and Technology Project of State Grid Corporation of China, "Research on Key Technologies for Power Grid Security and Stability Analysis Based on Hybrid Data-Knowledge Driving" (52140023000S).
|
Emergency load shedding decision-making using a branching dueling Q-network integrating grid topology information |
PAN Xiaojie1, HU Ze2, YAO Wei2, LAN Yutian2, XU Youping1, WANG Yukun1, ZHANG Mujie1, WEN Jinyu2
(1. Central China Branch of State Grid Corporation of China, Wuhan 430070, China; 2. State Key Laboratory of Advanced Electromagnetic Engineering and Technology (School of Electrical and Electronic Engineering, Huazhong University of Science and Technology), Wuhan 430074, China)
Abstract: |
The formulation of emergency control measures for transient voltage instability events is a crucial aspect of power system simulation analysis. Traditionally, emergency load shedding decisions are pre-determined offline and matched for execution in real time. However, this process relies heavily on expert analysis of massive amounts of simulation data, which is both time-consuming and labor-intensive. To improve the efficiency of offline emergency load shedding decision-making, this paper presents a method for power system emergency load shedding decisions that integrates power grid topology information into a branching dueling Q-network (BDN) agent. First, an event-driven Markov decision process (MDP) is established to effectively guide the training of deep reinforcement learning agents. Second, a BDN agent is designed, which exhibits superior training efficiency and decision-making capability compared to traditional non-branching networks. Then, to further enhance the agent's training efficiency and decision-making performance, power grid topology information is integrated into the agent's training process through a graph convolutional network (GCN). Finally, the proposed method is validated on the 8-machine 36-node system of the China Electric Power Research Institute. Compared to non-branching networks and deep reinforcement learning agents without integrated topology information, the proposed method demonstrates higher training efficiency and better decision-making performance.
Key words: simulation analysis; transient voltage instability; emergency load shedding decision-making; deep reinforcement learning; branching dueling Q-network; power grid topology information; graph convolution enhancement
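Although the article provides no code, the two building blocks named in the abstract can be sketched in an illustrative, non-authoritative way. The function names, branch sizes, and the tiny two-node graph below are hypothetical, not taken from the paper: a branching dueling Q-network combines a shared state value V(s) with one advantage head per action dimension (e.g. one branch per sheddable load, each with several discrete shedding levels), and a graph convolution layer propagates node features over the symmetrically normalized grid adjacency.

```python
import numpy as np

def branching_dueling_q(value, advantages):
    """Branching dueling aggregation: for each action branch d,
    Q_d(s, a) = V(s) + A_d(s, a) - mean_a' A_d(s, a').
    `value` is the scalar V(s); `advantages` is a list of 1-D
    arrays, one per branch."""
    return [value + adv - adv.mean() for adv in advantages]

def gcn_layer(adj, features, weights):
    """One graph-convolution step, H' = ReLU(A_hat @ H @ W), where
    A_hat is the adjacency with self-loops, symmetrically normalized
    by node degree (the standard GCN propagation rule)."""
    a_hat = adj + np.eye(adj.shape[0])
    d_inv_sqrt = np.diag(1.0 / np.sqrt(a_hat.sum(axis=1)))
    return np.maximum(d_inv_sqrt @ a_hat @ d_inv_sqrt @ features @ weights, 0.0)

# Hypothetical example: two load-shedding branches, 3 shedding levels each.
v = 1.0
advs = [np.array([0.0, 1.0, 2.0]), np.array([-1.0, 0.0, 1.0])]
qs = branching_dueling_q(v, advs)
# The greedy joint action is taken independently per branch.
joint_action = [int(q.argmax()) for q in qs]

# Hypothetical two-node grid graph for one GCN propagation step.
adj = np.array([[0.0, 1.0], [1.0, 0.0]])
h_next = gcn_layer(adj, np.eye(2), np.eye(2))
```

The per-branch argmax is what makes the branching architecture scale: the joint action space factorizes across loads, so the output size grows linearly with the number of branches rather than exponentially with their Cartesian product.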