DRL-Based Naïve Emergency Control for Complicated Power Systems
Keywords:
Emergency Control, Deep Reinforcement Learning, Transient Stability, Dynamic Braking, Load Shedding
Abstract
Emergency control is the last line of defense for power grid security and resilience. Most existing emergency control schemes are designed off-line through lengthy simulation studies, based on a "worst-case" scenario or a handful of representative operating conditions. As uncertainty and variability in modern electrical grids grow, the system's operating point becomes less predictable, and such off-line schemes face significant challenges in adaptiveness and robustness; new, more sophisticated wide-area emergency control algorithms are therefore needed. This thesis develops novel adaptive emergency control schemes for complex power systems using deep reinforcement learning (DRL), exploiting DRL's capability for non-linear generalization and high-dimensional feature extraction. A new open-source platform, Reinforcement Learning for Grid Control (RLGC), was created to support the development and benchmarking of DRL algorithms for power system control. The platform and two DRL-based emergency control schemes, dynamic generator braking and under-voltage load shedding, are described. The developed DRL approach is evaluated for its robustness across a wide range of simulation scenarios, under uncertainty in model parameters, and with noise in the measurement data.
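For readers unfamiliar with how a DRL agent couples to a grid simulator, the sketch below shows a minimal deep-Q-network (DQN) training loop in the Gym-style pattern that RLGC follows. It is illustrative only: the stub environment, observation/action sizes, reward shaping, and shedding levels are placeholders, not the thesis's actual implementation or RLGC's real API, and a target network is omitted for brevity.

```python
# Minimal, illustrative DQN loop for under-voltage load shedding.
# The environment is a stub stand-in for a Gym-style grid simulator:
# states are bus-voltage observations, actions are discrete shed steps.
import random
from collections import deque

import numpy as np
import torch
import torch.nn as nn


class StubGridEnv:
    """Placeholder for a Gym-style grid simulator (e.g., RLGC's env)."""
    n_obs, n_actions = 8, 3  # e.g., voltages at 8 buses; shed 0/5/10% load

    def reset(self):
        self.t = 0
        return np.random.uniform(0.7, 1.0, self.n_obs).astype(np.float32)

    def step(self, action):
        self.t += 1
        obs = np.random.uniform(0.7, 1.05, self.n_obs).astype(np.float32)
        # Penalize voltages below 0.95 p.u. and the amount of load shed.
        reward = -np.sum(np.maximum(0.95 - obs, 0.0)) - 0.1 * action
        return obs, float(reward), self.t >= 50, {}


env = StubGridEnv()
q_net = nn.Sequential(nn.Linear(env.n_obs, 64), nn.ReLU(),
                      nn.Linear(64, env.n_actions))
opt = torch.optim.Adam(q_net.parameters(), lr=1e-3)
buffer, gamma, eps = deque(maxlen=10000), 0.99, 0.1

for episode in range(20):
    obs, done = env.reset(), False
    while not done:
        # Epsilon-greedy selection over discrete load-shedding levels.
        if random.random() < eps:
            action = random.randrange(env.n_actions)
        else:
            with torch.no_grad():
                action = int(q_net(torch.from_numpy(obs)).argmax())
        next_obs, reward, done, _ = env.step(action)
        buffer.append((obs, action, reward, next_obs, done))
        obs = next_obs

        if len(buffer) >= 64:
            # Sample a replay mini-batch and take one TD-learning step.
            batch = random.sample(buffer, 64)
            s, a, r, s2, d = map(np.array, zip(*batch))
            q = q_net(torch.from_numpy(s.astype(np.float32)))
            q = q.gather(1, torch.tensor(a).unsqueeze(1)).squeeze(1)
            with torch.no_grad():
                nxt = q_net(torch.from_numpy(s2.astype(np.float32)))
                target = (torch.tensor(r, dtype=torch.float32) + gamma *
                          nxt.max(1).values *
                          torch.tensor(1.0 - d, dtype=torch.float32))
            loss = nn.functional.mse_loss(q, target)
            opt.zero_grad()
            loss.backward()
            opt.step()
```

A practical agent would add a separate target network and experience-prioritization, tie the reward to standard voltage-recovery criteria rather than the toy penalty above, and replace the stub with the platform's actual simulator interface.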
License
Copyright (c) IJSRST

This work is licensed under a Creative Commons Attribution 4.0 International License.