Hybrid DRL-based task offloading and multi-resource coordinated scheduling optimization method in distribution networks

DOI: 10.19783/j.cnki.pspc.250406

Key words: edge computing; task offloading; resource allocation; distribution network; deep reinforcement learning
Authors: ZHOU Ya 1,2; WANG Qian 2; FANG Ruju 1
1. Xuchang University, Xuchang 461000, China
2. North China University of Water Resources and Electric Power, Zhengzhou 450045, China
Hits: 388
Download times: 41
Abstract: To address the joint latency-energy optimization problem arising from task offloading and coordinated scheduling of "computation-communication-energy" multiple resources during the digitalization, decentralization, and intelligent evolution of distribution networks, a data-driven three-layer collaborative computing model encompassing local terminals, edge servers, and the cloud is developed. With a weighted delay-energy-fairness objective function, the model comprehensively characterizes key factors such as wireless channel conditions, transmission rates, and CPU frequencies, thereby quantifying the impact of multi-resource coordination on system performance. To tackle the challenge of a hybrid action space composed of discrete offloading decisions and continuous bandwidth, computation, and energy allocation, a hybrid deep reinforcement learning (HDRL) framework is proposed. In this framework, a double deep Q-network (DDQN) is employed at the upper layer to select offloading actions, while a deep deterministic policy gradient (DDPG) algorithm is used at the lower layer for continuous resource scheduling. An improved prioritized experience replay (IPER) mechanism is further designed to enhance sample utilization efficiency and convergence speed. Simulation results demonstrate that, compared with pure local computing, pure edge computing, random offloading, genetic algorithms (GA), and the DDQN+DDPG method without IPER, the proposed HDRL approach significantly reduces average system delay and total energy consumption across multiple scenarios. Moreover, it maintains high fairness as the number of users increases, exhibiting superior scalability and robustness, improved task completion rates, and enhanced algorithm stability. The proposed method thus provides a feasible and efficient solution for multi-resource coordinated optimization in distribution networks.
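The two-layer decision scheme summarized in the abstract — a DDQN choosing among discrete offloading targets while a DDPG-style actor emits continuous bandwidth, computation, and energy allocations — can be sketched as follows. This is a minimal illustration under stated assumptions, not the authors' implementation: the toy state dimension, the three offloading targets (local/edge/cloud), the linear stand-ins for the two networks, and the softmax squashing of resource shares are all assumptions introduced for clarity.

```python
import numpy as np

rng = np.random.default_rng(0)

STATE_DIM = 6   # e.g. channel gain, queue length, CPU load (assumed toy size)
N_TARGETS = 3   # discrete offloading choices: local, edge, cloud
RES_DIM = 3     # continuous shares: bandwidth, computation, energy

# Toy linear function approximators standing in for the trained networks.
W_q = rng.normal(size=(N_TARGETS, STATE_DIM))             # upper layer: Q-values
W_mu = rng.normal(size=(RES_DIM, STATE_DIM + N_TARGETS))  # lower layer: actor

def select_action(state, epsilon=0.1):
    """Hybrid action: discrete offloading target + continuous resource shares."""
    # Upper layer (DDQN-style): epsilon-greedy over discrete offloading targets.
    q = W_q @ state
    if rng.random() < epsilon:
        target = int(rng.integers(N_TARGETS))
    else:
        target = int(np.argmax(q))
    # Lower layer (DDPG-style): deterministic continuous allocation conditioned
    # on the state and the one-hot discrete choice, squashed so the three
    # resource shares are positive and sum to one.
    onehot = np.eye(N_TARGETS)[target]
    raw = W_mu @ np.concatenate([state, onehot])
    shares = np.exp(raw - raw.max())
    shares /= shares.sum()
    return target, shares

state = rng.normal(size=STATE_DIM)
target, shares = select_action(state)
```

In practice the two layers would be trained jointly from replayed transitions, with the IPER mechanism biasing sampling toward high-TD-error experiences; the sketch only shows how a single hybrid action is composed at decision time.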