Reinforcement Learning for Decentralized Energy Systems
Description
With the rise of electric vehicles (EVs) and smart grid technology, sophisticated energy management systems are needed to ensure sustainability and efficiency. This thesis presents an in-depth examination of reinforcement learning (RL) algorithms in a simulated smart grid featuring prosumer-generated renewable energy and embedded EV charging stations. Using a Markov decision process framework, the study assesses how the Deep Deterministic Policy Gradient (DDPG) and Proximal Policy Optimization (PPO) algorithms, compared against a Rule-Based Control (RBC) baseline, manage the energy dynamics of 50 prosumer nodes and 40 EVs over a 24-hour period. The RL agents interact with the environment to learn sequential decision-making policies that maximize cumulative reward, with a particular focus on balancing energy production, consumption, and vehicle charging demands. The simulation results reveal DDPG's strength in cost-efficient grid energy purchasing and effective state-of-charge (SOC) management, PPO's potential through exploratory learning, and RBC's advantage in minimizing energy wastage. The findings underscore the need for intelligent energy management strategies that not only minimize costs and maximize the use of renewable energy but also enhance the operational efficiency and sustainability of smart grid and EV ecosystems.
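The abstract frames the environment as a Markov decision process over a 24-hour horizon, with 50 prosumer nodes, 40 EVs, and a reward that trades off grid purchase cost, renewable curtailment, and charging targets. The Python sketch below illustrates, under simplifying assumptions, what such an MDP interface could look like; the class name, load and solar profiles, battery sizes, prices, and reward weights are illustrative placeholders, not the simulator actually used in the thesis.

```python
# Minimal sketch of a smart-grid MDP of the kind described above, assuming
# hourly time steps and a gym-like step() interface. All dynamics, prices,
# and reward weights are illustrative assumptions, not the thesis's model.
import numpy as np

N_PROSUMERS = 50   # prosumer nodes with local renewable generation
N_EVS = 40         # EVs with controllable charging power
HORIZON = 24       # one episode = 24 hourly steps


class SmartGridEnv:
    """Toy environment: state = (hour, renewable output, demand, EV SOCs)."""

    def __init__(self, seed=0):
        self.rng = np.random.default_rng(seed)

    def reset(self):
        self.hour = 0
        self.soc = self.rng.uniform(0.2, 0.6, size=N_EVS)  # initial EV state of charge
        return self._observe()

    def _observe(self):
        # Simple diurnal profiles for solar generation and household demand (kW)
        solar = np.clip(np.sin(np.pi * (self.hour - 6) / 12), 0, None) * 3.0
        demand = 1.0 + 0.8 * np.sin(np.pi * (self.hour - 17) / 12) ** 2
        self.generation = np.full(N_PROSUMERS, solar)
        self.demand = np.full(N_PROSUMERS, demand)
        return np.concatenate(([self.hour / HORIZON], self.generation,
                               self.demand, self.soc))

    def step(self, action):
        # action: charging power for each EV, clipped to [0, 7] kW
        charge = np.clip(action, 0.0, 7.0)
        self.soc = np.clip(self.soc + charge / 60.0, 0.0, 1.0)  # assume 60 kWh packs

        net_load = self.demand.sum() + charge.sum() - self.generation.sum()
        grid_import = max(net_load, 0.0)   # energy bought from the grid
        curtailed = max(-net_load, 0.0)    # renewable surplus that is wasted
        price = 0.30 if 17 <= self.hour <= 21 else 0.15  # EUR/kWh, peak vs. off-peak

        # Reward balances purchase cost, wasted renewables, and SOC targets
        reward = -(price * grid_import) - 0.05 * curtailed \
                 - 0.5 * np.mean(np.maximum(0.8 - self.soc, 0.0))

        self.hour += 1
        done = self.hour >= HORIZON
        obs = self._observe() if not done else None
        return obs, reward, done


if __name__ == "__main__":
    env = SmartGridEnv()
    obs, total = env.reset(), 0.0
    while True:
        action = np.random.default_rng().uniform(0, 7, size=N_EVS)  # random policy
        obs, reward, done = env.step(action)
        total += reward
        if done:
            break
    print(f"episode return under a random policy: {total:.2f}")
```

In a setup like this, DDPG and PPO would be trained on the continuous charging actions to maximize the cumulative reward, while an RBC baseline would map the same observations to actions through fixed hand-written rules (e.g., charge only during off-peak hours or when local generation exceeds demand).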
