Study of All Available Strategies Integrated in Developing an Emergency Control Plan

Authors

  • Alok Kumar Sriwastawa, SHEAT College of Engineering, Varanasi, Uttar Pradesh, India
  • Virendra Pratap Yadav, SHEAT College of Engineering, Varanasi, Uttar Pradesh, India

Keywords

Voltage Stability, Emergency Control, Deep Reinforcement Learning, Transient Stability, Dynamic Braking, Load Shedding

Abstract

To maintain power balance, adjustments must be made on both the demand and supply sides, and these adjustments make the power system's operating point harder to predict. Designing emergency controls offline through lengthy simulations has long been the norm, but new, sophisticated wide-area emergency control algorithms are needed because the power system of the future is likely to vary more. Emergency control of the power system is the final line of defense for grid security and resilience. This research presents and examines several strategies for under-voltage emergency protection: load tap changer (LTC) actions (locking, reversing, and blocking), reduction of the distribution-side voltage set point, and, as a last resort, load shedding. The study also discusses how some of these strategies can be integrated into an emergency control plan. The ideas are demonstrated on a small power system with three loads, with encouraging results.
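To make the staging idea concrete, the following is a minimal Python sketch, not the authors' implementation, of how the strategies named in the abstract might be ordered in a per-bus under-voltage emergency plan. All names, thresholds, and delays (BusState, V_EMERGENCY, V_CRITICAL, STAGE_DELAY) are illustrative assumptions; only the staging order (LTC actions first, set-point reduction next, load shedding last) comes from the abstract.

```python
# Illustrative sketch of a staged under-voltage emergency plan.
# Thresholds (per-unit voltage) and delays (seconds) are assumed values;
# a real plan would tune them from offline simulation studies.
from dataclasses import dataclass

V_EMERGENCY = 0.92   # below this, escalate through the control stages
V_CRITICAL = 0.88    # below this, shed load immediately
STAGE_DELAY = 5.0    # dwell time before escalating to the next stage

@dataclass
class BusState:
    voltage_pu: float             # measured transmission-side voltage, per unit
    seconds_in_emergency: float   # time spent below V_EMERGENCY so far

def emergency_action(bus: BusState) -> str:
    """Return the next control action for one load bus.

    Stages mirror the abstract: LTC tap blocking/locking first, then
    distribution-side voltage set-point reduction, then load shedding.
    """
    if bus.voltage_pu <= V_CRITICAL:
        return "shed_load"                # last line of defense
    if bus.voltage_pu >= V_EMERGENCY:
        return "none"                     # normal operation
    # Voltage is low but not critical: escalate as the emergency persists.
    if bus.seconds_in_emergency < STAGE_DELAY:
        return "block_ltc_taps"           # keep LTC from restoring load voltage
    if bus.seconds_in_emergency < 2 * STAGE_DELAY:
        return "reduce_voltage_setpoint"  # lower distribution-side set point
    return "shed_load"                    # milder remedies exhausted

if __name__ == "__main__":
    # Example: a 0.90 p.u. sag persisting for 7 s reaches the set-point stage.
    print(emergency_action(BusState(voltage_pu=0.90, seconds_in_emergency=7.0)))
```

Ordering the milder, reversible remedies before load shedding reflects the abstract's framing of shedding as the final step taken only once the other strategies are exhausted.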


Published

2022-02-28

Section

Research Articles

How to Cite

[1]
Alok Kumar Sriwastawa and Virendra Pratap Yadav, "Study of All Available Strategies Integrated in Developing an Emergency Control Plan," International Journal of Scientific Research in Science and Technology (IJSRST), Online ISSN: 2395-602X, Print ISSN: 2395-6011, Volume 9, Issue 1, pp. 300-308, January-February 2022.