Diana Ibraheem

Feasibility Study of the Remote Operation Centre (ROC) for Autonomous Power Plants

Vaasa 2025
School of Technology and Innovations
Master's thesis
Sustainability and Autonomous Systems

UNIVERSITY OF VAASA
School of Technology and Innovations
Author: Diana Ibraheem
Title of the thesis: Feasibility Study of the Remote Operation Centre (ROC) for Autonomous Power Plants
Degree: Master of Computing Science
Degree Programme: Sustainability and Autonomous Systems
Supervisor: Petri Välisuo, Amir-Mohammad Shamekhi, Mahmoud Elsanhoury
Year: 2025
Pages: 112

ABSTRACT:
Autonomous operation of distributed power assets requires a remote operation centre (ROC) that integrates sensing, control and assurance with auditable execution. This thesis defines and implements a ROC reference architecture for autonomous power plants, emphasising deterministic supervision, explainable autonomy and operator-in-the-loop authority. The design draws on established standards and technologies: measurement and calibration workflows, time synchronisation (PTP/NTP), interoperable interfaces, and the data plane that supports multi-rate fusion, digital-twin synchronisation and embedded-grade cybersecurity. The control stack combines model-predictive control and reinforcement learning, bounded by safety envelopes, action-admissibility checks and traceable logging to preserve transparency and rollback capability. The architecture is evaluated in the EPS/VEBIC laboratory against timing and data-integrity constraints, demonstrating end-to-end timing fidelity, fault-tolerant acquisition and the feasibility of deterministic supervisory control as a base for higher autonomy. The main contribution is an implementable ROC blueprint that links measurement integrity, secure communications and hierarchical control to an auditable route from remote supervision to safe autonomy.
The work is limited to architectural design and laboratory-based validation; a formal safety case and comprehensive verification and validation of AI components remain future work, alongside scaling to multi-asset supervision.

KEYWORDS: power plants; autonomous systems; artificial intelligence; control systems; reinforcement learning; control rooms; synchronising; cyber security.

Foreword

This Master's thesis was carried out at the University of Vaasa as part of the research on autonomous systems. The work contributes to the study and design of a Remote Operation Centre (ROC) for autonomous power plants, conducted within the framework of the Efficient Powertrain Solutions (EPS) laboratory and the Vaasa Energy Business Innovation Centre (VEBIC).

Firstly, I would like to thank my instructor, Dr. Amir-Mohammad Shamekhi, for his guidance and committed support throughout this work. I am also grateful to my supervisor, Prof. Petri Välisuo, for his academic supervision and constructive comments. I extend my thanks and appreciation to my second supervisor, Dr. Mahmoud Elsanhoury. I would also like to thank the EPS and laboratory staff for their assistance during the research phase. Finally, I wish to thank my family for their continuous support throughout my studies.
Contents

Foreword
1 Introduction
  1.1 Background
  1.2 Remote Operation Centres (ROC)
  1.3 Research Problem and Questions
  1.4 Thesis Structure
2 Literature Review and Technology Landscape
  2.1 Evolution and State of the Art in Autonomy for Maritime and Power Sectors
    2.1.1 Maritime Autonomy
    2.1.2 Power Sector and Autonomous Power Plants
    2.1.3 Synthesis and Implications for ROC
  2.2 Data Foundations: Sensors, Acquisition, and Computational Topologies
    2.2.1 Multimodal and Redundant Sensor Architectures
    2.2.2 Self-Powered and Passive Sensors
    2.2.3 Calibration, Synchronisation (PTP/NTP), and Reliability
    2.2.4 Data Acquisition and Time-Series Handling
    2.2.5 Computational Topologies: Edge, Fog, and Cloud
    2.2.6 Sensor-Centric Cybersecurity in the Data Plane
    2.2.7 Industrial Deployments Demonstrating ROC Readiness
  2.3 Control and Decision-Making Frameworks
    2.3.1 Model Predictive Control (MPC)
    2.3.2 AI-Based and Reinforcement Learning Controllers
    2.3.3 Fault-Tolerant and Decentralized Control
  2.4 Communication and Integration Protocols
    2.4.1 Data Plane and Time Base
    2.4.2 Data Lifecycle and Latency
    2.4.3 Cybersecurity Layers and Secure Interfaces
  2.5 Digital Twins and Predictive Diagnostics
    2.5.1 Digital Twin-as-Observer and Predictor for Model Predictive Control (MPC)
    2.5.2 Twin-in-the-Loop Model Predictive Control
    2.5.3 Hybrid Health Modeling Twin for MPC Constraint Management
    2.5.4 Reduced-Order Real-Time Twin for MPC Feasibility
3 Laboratory Infrastructure and Suitability for ROC Development
  3.1 Engine and Auxiliary Systems: EPS Laboratory Platform
  3.2 Current Control and Automation Topology
    3.2.1 Control Hierarchy and System Components
    3.2.2 Data Acquisition and Synchronization
    3.2.3 Automation and Communication Architecture and SCADA Integration
    3.2.4 Integration with Digital Twin and Simulation Frameworks
  3.3 Data Lifecycle and Workflow in the EPS Laboratory
    3.3.1 Data Acquisition and Sampling Strategy
    3.3.2 Calibration and Preprocessing Pipeline
    3.3.3 Data Storage and Retention Architecture
    3.3.4 Post-Processing and Feature Extraction
    3.3.5 Remote Access and Workflow Automation
4 ROC Vision
  4.1 Functional Requirements of the Proposed ROC
    4.1.1 Supervisory Intelligence (MPC, RL, DT)
    4.1.2 Reliable and Secure Communication (PTP/NTP, OPC UA, MQTT)
    4.1.3 Operator Support (Explainability, Alarms, Trust in Automation)
  4.2 Control Hierarchy and Execution Path
    4.2.1 Local Control Layer
    4.2.2 Supervisory Layer: Hierarchical MPC and RL Integration
    4.2.3 Protection and Safety Layer
    4.2.4 Execution Path and Timing
    4.2.5 Computational Topology Alignment: Edge-Fog-Cloud Mapping
  4.3 AI Modules in the ROC Supervisory Layer
    4.3.1 Model Predictive Control (MPC)
    4.3.2 Reinforcement Learning (RL)
    4.3.3 Digital Twin Synchronization
    4.3.4 Runtime Assurance and Command Validation
    4.3.5 AI-based Anomaly Detection and Prognostics
  4.4 Data and Communication Architecture
    4.4.1 Time Synchronization (PTP/NTP)
    4.4.2 Telemetry and Messaging Protocols
    4.4.3 Data Acquisition and Integration
    4.4.4 Edge-Fog-Cloud Data Flows
  4.5 Security and Safety Envelope
5 Feasibility Analysis and Integration Challenges
  5.1 Technical Feasibility and Gap Mapping
  5.2 Operational Integration and Interoperability
6 Conclusions
Acknowledgement
References

Figures

Figure 1. Thesis structure: LR and EPS baseline → derived requirements & constraints → ROC design (Ch. 4) → feasibility & gaps (Ch. 5).
Figure 2. Architecture of a digital twin and Asset Administration Shell (AAS).
Figure 3. System overview of Twin-in-the-Loop MPC framework using a Time Series Dense Encoder (TiDE) as a digital twin surrogate for Directed Energy Deposition (Chen et al., 2025; Das et al., 2023).
Figure 4. Simplified technical architecture of the D-Hydroflex turbine digital twin at Wały Śląskie Hydroelectric Power Plant (Machalski et al., 2025).
Figure 5. Control Hierarchy, Execution Path and Data Flow.
Figure 6. Hybrid integration of MPC and RL modules across Edge, Fog, and Cloud layers in the ROC supervisory architecture.
Figure 7. Digital-twin workflow: reduced engine model, real-time/HIL coupling, residuals and RUL features (Söderäng et al., 2022).
Figure 8. Supervisory ROC proposal design.

Tables

Table 1. Sensor Performance Characteristics (Fraden, 2016)
Table 2. Main challenges in time-series data management
Table 3. Comparative characteristics of edge, fog, and cloud computing layers in autonomous control environments, adapted from Shi et al. (2016) and Satyanarayanan (2017).
Table 4. Protocol Feature Comparison for ROC Applications
Table 5. Wärtsilä 4L20 engine specifications (Valkjärvi, 2022).
Table 6. Control and Computational Layer Alignment
Table 7. Legend table of Figure 8.
Table 8. Comparison of EPS Laboratory capabilities and ROC requirements.
Table 9. EPS Laboratory automation readiness matrix

Abbreviations

AAS    Asset Administration Shell
ABB    ASEA Brown Boveri
ADC    Analog-to-digital converter
BESS   Battery energy storage system
CAD    Crank-angle degree
CA50   Crank angle at 50% heat release
CAN    Controller Area Network
CLD    Chemiluminescence detector
CO     Carbon monoxide
DAQ    Data acquisition
DAS    Distributed acoustic sensing
DED    Directed energy deposition
DER    Distributed energy resources
DMPC   Distributed model predictive control
DRL    Deep reinforcement learning
DT     Digital twin
DTS    Distributed temperature sensing
EKF    Extended Kalman filter
EMS    Energy management system
EPS    Efficient Powertrain Solutions
FFT    Fast Fourier transform
FTIR   Fourier transform infrared
FTC    Fault-tolerant control
GOOSE  Generic Object Oriented Substation Event
HIL    Hardware-in-the-loop
HMI    Human–machine interface
IEC    International Electrotechnical Commission
IEEE   Institute of Electrical and Electronics Engineers
IMEP   Indicated mean effective pressure
IRKA   Iterative Rational Krylov Algorithm
LSTM   Long short-term memory
MHE    Moving horizon estimation
MIL    Model-in-the-loop
MOR    Model order reduction
MPC    Model predictive control
MQTT   Message Queuing Telemetry Transport
NIST   National Institute of Standards and Technology
NOX    Nitrogen oxides
NTP    Network Time Protocol
OOTL   Out-of-the-loop
OPC UA Open Platform Communications Unified Architecture
PHM    Prognostics and health management
PMU    Phasor measurement unit
POD    Proper orthogonal decomposition
PTP    Precision Time Protocol
RIAPS  Resilient Information Architecture Platform for Smart Grid
RL     Reinforcement learning
ROC    Remote Operation Centre
ROM    Reduced-order model
ROHR   Rate of heat release
RUL    Remaining useful life
SCADA  Supervisory Control and Data Acquisition
SV     Sampled Values
THC    Total hydrocarbons
TIDE   Time-series Dense Encoder
UAV    Unmanned aerial vehicle
UKF    Unscented Kalman filter
VEBIC  Vaasa Energy Business Innovation Center
VPP    Virtual power plant
XDT    Executable digital twin

1 Introduction

1.1
Background

The global energy sector is undergoing a major transformation, driven by three interrelated developments: decarbonisation, digitalisation, and increasing autonomy. Decarbonisation introduces requirements for efficiency, operational flexibility, and integration of variable renewable energy sources. Digitalisation provides new capabilities in data acquisition, machine learning, and predictive analytics. Together, these trends are directing the energy sector towards more autonomous operation, where supervisory intelligence gradually replaces traditional operator-driven coordination. In this transition, the Remote Operation Centre (ROC) has become a key enabler, providing the infrastructure needed for centralised monitoring, coordination and optimisation of power plants that are often scattered across wide areas.

Industrial examples already show how the ROC concept is applied in practice. Wärtsilä's WISE programme applies these principles to power plant management, focusing on predictive optimisation, lower emissions and better operator support, especially in remote or hard-to-access environments. ABB's Ability™ platform moves in the same direction by bringing automation systems together into central control hubs designed for both microgrids and larger industrial setups.

In the maritime sector, remote operation centres are also becoming common, used for supervising and controlling autonomous vessels. As reported by DNV (2022), shore-based control centres are now used for navigation, mission planning, and emergency response of vessels operating with reduced or no onboard crew.

These examples indicate that the ROC should not be viewed solely as a monitoring layer. Instead, it acts as an essential component of the overall control architecture, with responsibilities related to coordination, safety supervision, and high-level decision support.
Despite the rapid industrial deployment, academic research has not yet produced a unified theoretical framework for ROCs in autonomous power plant applications. Existing supervisory systems are still limited by deterministic yet rigid optimisation methods, low adaptability to uncertainty, and fragmented data exchange. Advances in fields such as artificial intelligence, modelling, and control provide new opportunities. These include reinforcement learning for adaptability, digital twins for predictive modelling, and advanced model predictive control for constraint-aware optimisation. However, these methods are often examined separately and are rarely integrated into a comprehensive supervisory structure.

Meanwhile, the growing autonomy of industrial systems introduces challenges in human-automation collaboration, such as maintaining operator trust, preserving situational awareness, and avoiding the typical out-of-the-loop performance problem (Mooneyham & Schooler, 2013). The motivation for this thesis lies at the intersection of these industrial and academic developments. While industry already shows that ROCs can work as real enablers of autonomy, the absence of a systematic, research-based design framework still limits their broader use in autonomous power plants. This work addresses that gap by developing an AI-enhanced ROC architecture, which combines deterministic optimisation with adaptive learning, predictive digital modelling and a structured data management concept into one coherent supervisory framework.

1.2 Remote Operation Centres (ROC)

Remote Operation Centres (ROCs) are fast becoming central elements in the digitalisation of critical infrastructures. In the energy field, a ROC works as a supervisory hub that gathers real-time data from distributed assets, runs optimisation and predictive algorithms, and provides decision-support interfaces for operators.
Unlike local controllers, which operate on millisecond timescales to maintain deterministic feasibility, the ROC takes care of long-term goals such as efficiency, emission reduction and asset health management across several plants at once.

Industrial examples already show why this idea is important. Wärtsilä's WISE platform brings ROCs into autonomous power plant operation by offering predictive maintenance, fleet-level optimisation and better operator support (Wärtsilä, 2024). ABB's Ability™ system follows a similar path in microgrids and industrial automation, providing remote monitoring, event detection and centralised optimisation (ABB, 2024). In the maritime sector, DNV (2022) describes how shore-based ROCs are used to supervise remote and autonomous vessels, coordinating navigation, mission planning and emergency handling when needed. Similar trends can be seen in the oil and gas industry, where remote operation is already regarded as a step toward fully autonomous sites. Devold and Fjellheim (2019) point out that these setups are enabled by Internet of Things technologies used for sensing and control, connectivity for information exchange, and AI-based analytics. Jointly, these cases make it clear that ROCs have moved past the trial phase and are now essential in safety-critical work.

Nonetheless, despite the steady industrial progress, today's ROC systems remain somewhat fragmented. Synchronising distributed telemetry is difficult because of differing standards and protocols (such as IEEE 1588 PTP, IEC 61850, OPC UA, MQTT), which often makes integration harder than it should be. In many current setups, supervisory decision-making still relies on fixed optimisation logic. These methods are dependable but miss the flexibility of reinforcement learning and the predictive value that comes with digital twins. More automation inside supervisory systems also brings its own set of problems with human-automation interaction.
Issues like reduced situational awareness, uneven operator workload and the well-known out-of-the-loop (OOTL) effect (Mooneyham & Schooler, 2013) tend to appear. The OOTL effect refers to the drop in operator engagement and awareness when most control authority is handed over to automation: when the human operator mainly monitors instead of directly controlling, the ability to notice anomalies or react quickly to abnormal events decreases, which can weaken the reliability and safety of remote supervision in critical systems (Merat et al., 2019).

In this thesis, the ROC is regarded as a centralised supervisory control hub for autonomous power plants. The concept includes four main functions: (i) real-time data collection and fusion, (ii) supervisory intelligence through hierarchical MPC, reinforcement learning and digital twin modelling, (iii) runtime assurance and validation of commands for safety, and (iv) operator decision support for trust and transparency. With this, the ROC is not just a monitoring tool but an AI-supported supervisory architecture that follows industrial practice while also tackling the open academic challenges still ahead.

1.3 Research Problem and Questions

The global energy sector is advancing towards autonomous and data-driven operational concepts, where supervisory intelligence and predictive analytics are increasingly integrated into plant-level decision-making (ABB, 2024; Wärtsilä, 2024). The industrial initiatives discussed earlier demonstrate this development trend. However, despite significant industrial progress, academic research has not yet formulated a unified framework for ROC design in the context of autonomous power plants.
The supervisory role of a ROC demands the integration of several advanced methodologies within a single coherent architecture. Model Predictive Control (MPC) enables constraint-aware optimisation but remains restricted by its computational demands and modelling accuracy requirements (Rawlings et al., 2020). Reinforcement Learning (RL) provides adaptability under uncertain conditions, though it lacks deterministic safety assurance (Yu et al., 2024). Digital Twins (DTs) support predictive and condition-aware modelling; however, their performance depends strongly on accurate synchronisation and model fidelity, especially during transient operation (Hautala et al., 2022). Simultaneously, the interoperability of communication standards, for example IEEE 1588 PTP, OPC UA, and MQTT, continues to be a challenge in heterogeneous industrial environments (Gupta et al., 2023). Furthermore, the growing autonomy of supervisory systems introduces human-automation interaction issues, including operator trust, situational awareness, and the out-of-the-loop (OOTL) problem, which have been observed in both the energy and maritime sectors (Mooneyham & Schooler, 2013; DNV, 2022).

The objective of this thesis is to design and evaluate an AI-enhanced ROC framework for autonomous power plants. The framework integrates deterministic optimisation, adaptive learning, predictive modelling and robust data-processing pipelines into a coherent supervisory architecture. Consideration is given to computational feasibility and operator interaction to ensure both operational safety and trust.
The proposed design is validated conceptually through the literature and practically against the available infrastructure in the EPS laboratory at the University of Vaasa.

The research aims to answer the following questions:

1. What are the functional and architectural requirements for a Remote Operation Centre to manage an autonomous power plant?
2. How can hierarchical MPC, reinforcement learning and digital twins be integrated into a supervisory control layer that remains both adaptive and safe?
3. What data collection, fusion and communication pipelines are required to enable ROC operation across heterogeneous assets?
4. How can runtime assurance and human-AI collaboration be embedded into the ROC to ensure reliability, transparency, and operator trust?

The scope of this thesis is focused on the supervisory design of a Remote Operation Centre (ROC) intended for autonomous power plant applications. The main emphasis is placed on AI-based decision-making, predictive modelling, and data synchronisation at the supervisory level. Mechanical design aspects of individual plants and the implementation of local controllers are excluded from consideration. The main contribution of this work is the development of an integrated ROC architecture that combines hierarchical model predictive control, reinforcement learning, and digital twins into a coherent supervisory framework, supported by a structured and reliable data pipeline.

1.4 Thesis Structure

The thesis is organised into six chapters. After the introduction and problem definition in Chapter 1, Chapter 2 presents the literature review that establishes the theoretical foundation for the ROC design. It examines supervisory intelligence methods, including model predictive control, reinforcement learning, and digital twins, as well as data communication and human-automation interaction. Rather than serving only as a survey, this chapter identifies the functional requirements that a ROC must fulfil.
Chapter 3, EPS Laboratory, introduces the experimental context of the Efficient Powertrain Solutions facility at the University of Vaasa. The laboratory infrastructure, consisting of real-time controllers, data acquisition equipment, and communication systems, defines the practical boundary conditions against which the ROC design must be developed. Chapters 2 and 3 together establish both the conceptual requirements and the practical limitations that define the basis for the ROC development.

Chapter 4, ROC Design and Architecture Proposal, forms the central part of this thesis. It synthesises findings from the literature review and laboratory investigations into a conceptual framework for an AI-enhanced ROC. The proposed design integrates hierarchical Model Predictive Control, Reinforcement Learning and Digital Twin technologies within a supervisory control layer supported by a real-time data pipeline architecture. This chapter demonstrates how theoretical progress and experimental conditions can be combined into a unified ROC framework for autonomous power plant operation.

Chapter 5, Feasibility and Challenges, assesses the proposed ROC in terms of computational scalability, communication reliability and human-operator interaction. It highlights the main discrepancies between the conceptual design and practical industrial implementation, thereby defining the direction for continued development.

Chapter 6, Conclusions and Future Work, summarises the key contributions of the thesis and reflects on its limitations. Furthermore, it suggests possible extensions, such as multi-plant coordination and enhanced human-AI interfaces, in order to improve supervisory autonomy and decision support.
The methodological structure of the thesis is organised so that the literature review (Chapter 2) and the laboratory framework (Chapter 3) establish the foundation for the ROC concept presented in Chapter 4, which is subsequently evaluated in Chapter 5 and concluded in Chapter 6.

Figure 1. Thesis structure: LR and EPS baseline → derived requirements & constraints → ROC design (Ch. 4) → feasibility & gaps (Ch. 5).

2 Literature Review and Technology Landscape

2.1 Evolution and State of the Art in Autonomy for Maritime and Power Sectors

The shift towards autonomous operation has followed different trajectories in the maritime and power sectors. In the maritime domain, progress has been systematic and regulation-driven, guided by international frameworks and phased implementation roadmaps. In contrast, the power sector has advanced more gradually through ongoing digitalisation and the progressive centralisation of supervisory functions. This section reviews the development trends within both sectors and examines their significance for the design and implementation of Remote Operation Centres (ROCs) intended for autonomous power plant applications.
2.1.1 Maritime Autonomy

The maritime sector has advanced towards autonomy through staged roadmaps that combine technological innovation with regulatory guidance. Early developments included the adoption of autopilot systems and radar- and GPS-based navigation, which provided the foundation for more advanced capabilities. Since 2010, the integration of sensor fusion, real-time control and remote operation platforms has accelerated the development of Maritime Autonomous Surface Ships (MASS). Demonstrator projects such as the Yara Birkeland, the world's first fully electric, zero-emission autonomous container vessel, illustrate the gradual transition from manual operation to remote control and eventually to full autonomy (Munim, 2022).

Finland has contributed significantly to this trajectory. The Azipod propulsion system, developed collaboratively by Wärtsilä Marine, Stromberg and the Finnish National Board of Navigation, was first installed on the icebreaker Seili in 1990. The system introduced a high degree of manoeuvrability and later supported the development of autonomous navigation platforms (ABB Marine, 2020). More recently, the integration of Vessel Traffic Services (VTS) with coastal surveillance systems has enhanced situational awareness through the fusion of radar and AIS data, also improving target detection and tracking performance by leveraging varied information fusion methods, such as Dempster-Shafer evidence theory (Wu, Wu, Ma, & Wang, 2023). A central enabler of maritime autonomy is the International Maritime Organization (IMO), which has introduced a regulatory framework for Maritime Autonomous Surface Ships (MASS). The framework defines four levels of autonomy, ranging from Level 0 (manual operation) to Level 4 (fully autonomous operation without human intervention) (Goerlandt, 2020; Klein et al., 2020).
The existence of a regulatory framework ensures interoperability and phased adoption, distinguishing maritime development from the more fragmented progress seen in other sectors.

2.1.2 Power Sector and Autonomous Power Plants

The transition to autonomy in the power sector has progressed more gradually. While most modern generation units, including wind, solar and hydropower plants, already employ advanced automation for local process control and optimisation, this automation represents only a partial form of autonomy, as decision-making remains rule-based and limited to individual plant boundaries. Full autonomy, characterised by adaptive coordination and predictive control across multiple assets, is still at an early stage of development.

Traditional Supervisory Control and Data Acquisition (SCADA) systems and Energy Management Systems (EMS) have improved monitoring and stability at the transmission level, but they have provided only limited autonomy within distribution networks. As stated by Di Silvestre et al. (2020, p. 3), "Smart Grid infrastructures are still fragmented across different voltage levels and ownerships, limiting the real-time coordination of distributed energy resources and the achievement of full autonomy in system operation." This fragmentation continues to restrict the scalability of decentralised control and highlights the need for supervisory integration through Remote Operation Centres.

Industrial practice during the last decade demonstrates clear progress toward remote and predictive operation.
Wärtsilä's Expert Insight platform enables predictive maintenance and lifecycle optimisation, with more than ninety-six percent of reported issues resolved remotely without on-site intervention (Wärtsilä, 2023a). Through support centres in Houston and Trieste, Wärtsilä supervises over two hundred and fifty power plants, providing continuous diagnostics and remote lifecycle management (Wärtsilä, 2023b). Valmet has achieved similar benefits by consolidating six biomass power plants into a single control room, which has improved operational efficiency and coordination (Valmet, 2021). Automation in renewable energy systems has also evolved rapidly. Chen et al. (2021) presented a deep-learning-aided model predictive control framework for wind farms that enhances automatic generation control through dynamic wake interaction modelling. This approach illustrates how intelligent supervisory algorithms can extend automation towards more adaptive and cooperative control.

At the grid level, the Siemens Finland Vibeco project integrates smart buildings, energy storage, and flexible loads into a digital platform that supports grid balancing and enables participation in energy markets (Siemens, 2022).

Despite these advancements, full autonomy at the multi-plant and system coordination level remains uncommon. Most industrial plants still operate within predefined supervisory limits that correspond to intermediate stages of autonomy. Fragmented governance and the lack of unified supervisory frameworks continue to slow progress towards large-scale autonomous operation (Liu et al., 2020; Zürn & Faude, 2013).

2.1.3 Synthesis and Implications for ROC

The difference between the two sectors is evident.
In the maritime domain, autonomy has evolved under structured roadmaps, strongly supported by regulatory clarity and coordinated international efforts, which together have allowed a phased and rather predictable integration of new technologies. The power sector, on the other hand, has advanced in a slower and more organic manner, driven mostly by ongoing digitalisation and the steady centralisation of supervisory functions; its progress has often been limited by heterogeneous infrastructures and fragmented forms of governance that still vary across operators and regions.

Even with these differences, both sectors show a convergence towards Remote Operation Centres (ROCs) as supervisory hubs. In maritime applications, ROCs function as transitional nodes between manned, remote, and autonomous operation. In the power sector, they provide multi-plant supervision, predictive optimisation, and reduced dependence on on-site intervention. For autonomous power plants, this convergence means that ROCs must integrate constraint-aware supervisory control using model predictive control, adaptive intelligence through reinforcement learning, predictive modelling through digital twins, and reliable communication infrastructure. These requirements, identified from the state of the art, form the basis for the ROC design proposal presented in Chapter 4.

2.2 Data Foundations: Sensors, Acquisition, and Computational Topologies

Autonomous power plants rely on sensing as the main source of state information for both local deterministic control and supervisory decisions in the Remote Operation Centre (ROC). Compared with more traditional SCADA-based systems, ROC-oriented installations usually require a higher sensor density, tighter temporal alignment, stronger observability, and more reliable data quality. These requirements are needed to support real-time optimisation, predictive maintenance, and different kinds of fault management.
In this way, sensors are not only reporting devices: they actively shape state awareness, which both restricts and enables safe control actions.

This section discusses sensing and observability in the supervisory framework of autonomous power plants. First, basic concepts of observability are introduced together with performance aspects of different sensing modalities. Redundancy, calibration, sensor fusion, and time synchronisation are then considered, since they are essential for dependable state estimation. The discussion then moves to data acquisition and time-series handling, where computational topologies such as edge, fog, and cloud are taken into account. Cybersecurity aspects directly related to sensors are also noted, followed by short remarks on self-powered and passive sensing technologies. Finally, some industrial cases are presented and the key requirements for ROC design are summarised.

2.2.1 Multimodal and Redundant Sensor Architectures

Following LaValle (2011), a sensor can be defined as a mapping h: X → Y, where X denotes the physical state space, Y the observation space, and y = h(x) the observation. Since h is typically many-to-one, different states in X may correspond to the same observation y. The preimage h⁻¹(y) therefore represents the set of states that are consistent with a given observation. The usefulness of a sensing configuration depends on how effectively it reduces the information space (I-space), defined as the set of states consistent with all past and current observations. A smaller I-space improves state discriminability and enhances the quality of subsequent decision-making. The core relations are (LaValle, 2011):

y = h(x) (1)

the sensor maps state x ∈ X to an observation y ∈ Y;

h⁻¹(y) = { x ∈ X | h(x) = y } (2)

the preimage h⁻¹(y) collects all states consistent with the observation y;

I_k = ⋂_{i=0}^{k} h⁻¹(y_i) (3)

the information space I_k intersects the preimages of all observations up to step k; a smaller I_k means better state discriminability.
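To make Eqs. (1)–(3) concrete, the following toy sketch shows how intersecting preimages shrinks the I-space. The discrete state space and the two sensor mappings are invented for illustration and are not taken from LaValle (2011); the second sensor stands in for a redundant modality.

```python
# Toy illustration of Eqs. (1)-(3): many-to-one sensors, preimages, and the
# information space as the intersection of preimages. State space and
# sensor mappings below are hypothetical.

X = set(range(12))                  # discrete toy state space X
h1 = lambda x: x % 3                # sensor 1: many-to-one mapping h1: X -> Y
h2 = lambda x: x // 6               # sensor 2: a second, redundant modality

def preimage(h, y):
    # Eq. (2): h^-1(y) = { x in X | h(x) = y }
    return {x for x in X if h(x) == y}

def information_space(readings):
    # Eq. (3): I_k is the intersection of the preimages of all observations
    I = set(X)
    for h, y in readings:
        I &= preimage(h, y)
    return I

x_true = 7                          # hidden true state
I1 = information_space([(h1, h1(x_true))])
I2 = information_space([(h1, h1(x_true)), (h2, h2(x_true))])
# Adding the second modality can only shrink the I-space toward x_true.
print(sorted(I1), sorted(I2))
```

The second observation narrows the candidate set, which is exactly the sense in which redundant modalities improve state discriminability.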
Definitions in Eqs. (1)-(3) follow LaValle (2011). In practice, the informativeness of a sensor is constrained by device physics and signal processing. As described by Fraden (2016), the most relevant parameters include resolution (the minimum detectable change), sensitivity (output change per unit input), accuracy (deviation from the true value), drift (long-term deviation), hysteresis (direction-dependent output), repeatability (variation under identical conditions), and the transfer function y = f(x). These parameters collectively determine how reliably measurements reduce uncertainty in the I-space, especially under conditions of noise, saturation, and environmental variability.

Table 1. Sensor Performance Characteristics (Fraden, 2016)

Resolution: minimum detectable change in input signal
Sensitivity: output change per unit input
Accuracy: deviation between measured and true value
Drift: long-term deviation due to ageing or environment
Hysteresis: output variation depending on input direction
Repeatability: variability in repeated measurements under identical conditions
Transfer function y = f(x): functional sensor response

These parameters reflect how much the sensor helps the system understand its internal state, especially when facing noise, signal overload, or changing conditions.

Application-oriented sensing modalities in power plants cover thermal, mechanical, and flow processes. Temperature monitoring is typically performed using resistance temperature detectors (RTDs) and thermocouples, which provide input for thermal management and combustion optimisation. Pressure sensing in fuel, steam, and hydraulic circuits relies on piezoelectric or capacitive devices, while flow measurement is achieved using ultrasonic and differential-pressure flowmeters to support mass balance and cooling system control.
Mechanical condition monitoring is enabled by triaxial MEMS accelerometers, which detect imbalance and misalignment in rotating machinery (Dhanraj et al., 2020). These fixed modalities are complemented by non-contact and spatially distributed approaches. Electro-optical and thermal imaging systems support remote inspection and hotspot detection, LiDAR provides three-dimensional geometry for UAV-based inspections, millimetre-wave radar maintains robustness under low-visibility conditions, and fibre-optic sensing (DTS/DAS) enables distributed monitoring along cables, pipelines, and offshore assets with immunity to electromagnetic interference (Oliveira et al., 2024; Zhu et al., 2022).

Redundancy is an essential mechanism for improving observability and reliability in supervisory systems. Modal redundancy measures the same variable using different physical principles, for example ultrasonic and differential-pressure flowmeters. Spatial redundancy employs identical sensors installed at different locations to capture gradients and identify local faults. Temporal redundancy is based on repeated sampling to reduce noise and track drift under fast-varying thermomechanical conditions (LaValle, 2011; Fraden, 2016). These mechanisms enable cross-validation, plausibility checking, and graceful degradation, all of which are important when ROC decisions must be made under delayed or incomplete data.

Sensor fusion transforms redundancy into actionable state information. In linear Gaussian settings, Kalman filtering provides minimum-variance estimates by recursively combining predictions with new observations (Kalman, 1994). For nonlinear dynamics or non-Gaussian noise, Bayesian approaches such as extended or unscented Kalman filters and particle filters propagate posterior distributions (LaValle, 2011).
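As a minimal illustration of the recursive predict-update structure of Kalman filtering, the following sketch estimates a constant process variable from noisy readings. The noise variances, the temperature value, and the constant-state model are illustrative assumptions, not parameters from any cited deployment.

```python
import random

# Minimal scalar Kalman filter sketch: fuse noisy readings of a (here
# constant) process variable. All numeric values are illustrative.

def kalman_step(x_est, p_est, z, q=1e-4, r=0.25):
    # Predict: constant-state model; process noise q inflates uncertainty.
    x_pred, p_pred = x_est, p_est + q
    # Update: blend prediction with measurement z (measurement variance r).
    k = p_pred / (p_pred + r)            # Kalman gain
    x_new = x_pred + k * (z - x_pred)
    p_new = (1 - k) * p_pred
    return x_new, p_new

random.seed(0)
true_value = 75.0                        # e.g. a bearing temperature in deg C
x, p = 0.0, 1.0                          # initial estimate and variance
for _ in range(200):
    z = true_value + random.gauss(0, 0.5)
    x, p = kalman_step(x, p, z)
print(round(x, 2))                       # estimate converges near 75.0
```

The same predict-update skeleton generalises to the vector case, where the extended and unscented variants mentioned above handle nonlinear dynamics.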
When noise statistics are uncertain but physical constraints are known, set-based filtering eliminates inconsistent states without requiring probabilistic assumptions (LaValle, 2011). In ROC-oriented deployments, fusion typically operates at the edge or fog level to enable rapid closure of local control loops, while higher-level fusion at ROC or cloud scale supports fleet-wide estimation, fault detection, and digital-twin synchronisation.

2.2.2 Self-Powered and Passive Sensors

In autonomous and remotely supervised energy systems, such as those coordinated through Remote Operation Centres (ROCs), sensors are often deployed in locations where access is limited or where a stable power supply cannot be guaranteed. In these cases, self-powered and passive technologies provide a practical solution. Examples include piezoelectric, inductive, and RFID-based devices that operate with minimal or no local energy storage. They either harvest energy from the surrounding environment or draw interrogator power through backscatter. Such technologies are particularly relevant in hazardous, enclosed, or remote areas where battery replacement or cabling is not feasible. By converting vibration, strain, thermal gradients, airflow, or electromagnetic fields into electrical power, these sensors reduce maintenance demands, improve safety, and extend the monitoring reach of autonomous systems (Bai, Jantunen, & Juuti, 2018).

Several modalities have been explored in power and industrial settings. Piezoelectric harvesters (PZT/ZnO) capture mechanical energy from rotating machinery and vibrating structures (Raja, Umapathy, Uma, & Usharani, 2023). Triboelectric nanogenerators (TENGs) use contact electrification to power gas sensors and low-power telemetry under variable operating conditions (Yu et al., 2024). Thermoelectric generators (TEGs) convert heat gradients near engines, turbines, or exhausts into usable electricity.
Wind and fluid harvesters make use of airflow or pressure variations in ducts and pipelines. Field trials have shown that wind-powered carbon monoxide monitoring can reach transmission distances of about 1.5 km without the need for batteries (Liu et al., 2020). In maritime applications, TENG-powered ammonia sensors placed in engine rooms have demonstrated detection limits close to 0.2 ppm, with vibration-powered wireless telemetry allowing continuous leak detection without wired connections (Yu et al., 2024). Cross-sector demonstrations also indicate transferability. For example, wood-based TENG harvesters combined with CNT gas sensors for ammonia monitoring in cold-chain logistics suggest a pathway for distributed leak detection at plant scale, where ultra-low-power nodes and extended service autonomy are required (Zhang et al., 2023).

At the ROC level, self-powered sensors contribute to early fault detection, emissions and leak surveillance, and predictive maintenance while reducing the need for frequent human intervention. Their small size, local energy autonomy, and event-driven operation lower energy consumption at the sensing edge and decrease transmission frequency, which is important in bandwidth-constrained environments (Di Silvestre et al., 2018). Integration into industrial cyber-physical and IIoT networks is possible, although protocol stacks such as OPC UA and MQTT can be too demanding for ultra-low-power endpoints. This has motivated the use of lighter protocol adaptations and hybrid gateways at the fog layer. Energy-aware sampling policies, supported by machine learning, allow the adjustment of sampling and reporting rates according to harvested energy reserves and the relevance of detected events. This approach extends operational lifetime while maintaining diagnostic fidelity (Bai, Jantunen, & Juuti, 2018; Liu et al., 2020).
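The energy-aware sampling idea can be sketched as a simple rule-based policy: stretch the reporting interval when the harvested-energy reserve is low, and shorten it when a reading looks anomalous and is therefore worth reporting sooner. The thresholds and intervals below are invented for illustration and are not taken from the cited studies.

```python
# Sketch of an energy-aware reporting policy for a self-powered node.
# Thresholds, intervals, and the 10 % anomaly margin are illustrative.

def next_report_interval_s(reserve_frac, reading, baseline, base_interval_s=60):
    """reserve_frac: harvested-energy reserve in [0, 1]."""
    interval = base_interval_s
    if reserve_frac < 0.2:                 # nearly depleted: conserve energy
        interval *= 4
    elif reserve_frac < 0.5:
        interval *= 2
    if abs(reading - baseline) > 0.1 * abs(baseline):
        interval = max(5, interval // 4)   # event of interest: report sooner
    return interval

print(next_report_interval_s(0.9, 100.0, 100.0))   # healthy, quiet
print(next_report_interval_s(0.1, 100.0, 100.0))   # depleted, quiet
print(next_report_interval_s(0.1, 130.0, 100.0))   # depleted, anomalous
```

A learned policy would replace the fixed thresholds with a model of energy intake and event relevance, but the trade-off it balances is the same.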
Security remains a challenge for these systems, since conventional cryptography requires high computational effort. Lightweight security methods (see 2.2.6) are therefore needed to maintain confidentiality and authenticity within the limited energy budgets of self-powered nodes.

2.2.3 Calibration, Synchronisation (PTP/NTP), and Reliability

Traceable calibration, precise time synchronisation, and device reliability are prerequisites for trustworthy autonomous operation in ROC-managed systems. Calibration must account for temperature coefficients, ageing, and installation-related deviations, and should follow recognised standards such as IEC 62828-1 (2017) and the NIST Handbook 44 (2021). Emerging practice includes peer-to-peer sensor comparisons, scheduled recalibration during low-demand periods, and the use of online health indicators as part of predictive maintenance strategies.

Time synchronisation requirements vary according to function. For supervisory logging, the Network Time Protocol (NTP) provides millisecond-level accuracy, which is typically sufficient. By contrast, control-critical and analytics streams often demand sub-microsecond precision, achieved through the Precision Time Protocol (PTP) specified in IEEE 1588, IEEE C37.238, and the IEC 61850-9-3 time profile (Eidson, 2006; IEEE, 2008). Architectures commonly employ GPS-disciplined grandmasters, boundary clocks integrated in time-aware switches, and continuous monitoring of quality-of-time indicators. Such synchronisation becomes a critical enabler for digital-twin updates, PMU-like analytics, event correlation, and the coordinated operation of hierarchical MPC structures, including the validation of twin-in-the-loop configurations (Zhang & Liu, 2019).
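The offset estimation at the heart of PTP can be illustrated with the standard IEEE 1588 four-timestamp delay request-response exchange, which assumes a symmetric network path. The numeric values below are illustrative.

```python
# IEEE 1588 (PTP) delay request-response: estimate slave clock offset from
# four timestamps, assuming a symmetric path.
#   t1: master sends Sync        t2: slave receives Sync
#   t3: slave sends Delay_Req    t4: master receives Delay_Req

def ptp_offset_and_delay(t1, t2, t3, t4):
    offset = ((t2 - t1) - (t4 - t3)) / 2   # slave clock minus master clock
    delay = ((t2 - t1) + (t4 - t3)) / 2    # one-way mean path delay
    return offset, delay

# Illustrative numbers (seconds): slave runs 3 us ahead, path delay 2 us.
t1 = 0.0
t2 = t1 + 2e-6 + 3e-6       # master -> slave: path delay + slave offset
t3 = t2 + 10e-6             # slave responds after 10 us (by its own clock)
t4 = (t3 - 3e-6) + 2e-6     # master receives: remove offset, add path delay
offset, delay = ptp_offset_and_delay(t1, t2, t3, t4)
print(offset, delay)        # ~3e-6 and ~2e-6
```

The symmetric-path assumption is why asymmetric network delays translate directly into residual offset error, motivating the boundary clocks and time-aware switches mentioned above.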
Ensuring reliability under demanding environmental conditions, for instance high temperature, continuous vibration, and electromagnetic interference, depends largely on compliance with established safety standards such as IEC 61508. Alongside these, additional mechanisms such as redundant sensing, built-in self-testing, plausibility verification, and metadata quality flags are used to guarantee trustworthy data for estimation and control. Modular sensor packaging further helps to reduce the mean time to repair and allows fast replacement of units even in remote or otherwise resource-limited sites.

Industrial deployments highlight the role of synchronisation in practice. ABB’s Longmeadow microgrid in Johannesburg coordinates photovoltaic generation, battery energy storage, and diesel generators using sub-microsecond PTP alignment within the Microgrid Plus system. This synchronisation stabilises inverter-generator interactions during rapid transitions between grid-connected and islanded operation, while maintaining load balancing and generator coordination through consistent, time-aligned communication with distributed equipment (ABB, 2015). Similarly, the Secure Scalable Microgrid Test Bed at Sandia National Laboratories applies IEEE 1588-compliant PTP to synchronise sensors, controllers, and hardware-in-the-loop simulators across photovoltaic, storage, and diesel subsystems (Eidson, 2006; IEEE, 2008; Glover, Neely, Lentine, & Finn, 2012; Sandia National Laboratories, 2022). The resulting common time base enables real-time testing of autonomous control strategies, fault-tolerance mechanisms, and digital-twin integration; high-fidelity timing is therefore a functional prerequisite for ROC-orchestrated autonomy (Zhang & Liu, 2019).
In summary, traceable calibration, high-precision time synchronisation with quality-of-time monitoring, and reliability engineering at both device and data-quality levels are essential prerequisites for autonomous power plant operation. These conditions ensure that measurements remain coherent, trustworthy, and time-aligned across supervisory functions. They also provide the requirements foundation for the ROC design developed in Chapter 4, where digital-twin-assisted MPC and reinforcement learning depend directly on the fidelity of sensed data.

2.2.4 Data Acquisition and Time-Series Handling

Autonomous power plants depend on continuous, high-frequency measurement streams, typically in the millisecond or microsecond range, to detect disturbances, stabilise control loops, and enable predictive diagnostics. Within the ROC, the performance of autonomous decision-making is determined by how efficiently measurement data are acquired, stored, processed, and visualised. A wide sensing base covering electrical, thermal, mechanical, and environmental variables forms the foundation of this process. When high-resolution signals are correctly managed, they enable rapid detection of changes, support advanced control strategies such as Model Predictive Control (MPC), and provide input for digital-twin simulations. Previous studies have shown that well-structured time-series management improves maintenance prediction, decreases unplanned downtime, and enhances situational awareness across the system (Kejser et al., 2017; Grzesik & Mrozek, 2020).

Three frequent challenges can be identified in practice. First, heterogeneous measuring devices introduce irregular sampling and temporal misalignment, which must be corrected before signals can be compared or fused (Zhang & Liu, 2019).
Second, the volume of persistent measurements can reach several million points per second, requiring data platforms to scale without compromising reliability (Jensen et al., 2017). Third, anomaly detection must operate within sub-second latency in order to react to events such as generator malfunctions or fuel-system leaks, since a delayed response may endanger both safety and availability (Grzesik & Mrozek, 2020; Dhanraj et al., 2020).

Table 2. Main challenges in time-series data management

Irregular sampling and time alignment: heterogeneous sensors may acquire at slightly different times, complicating comparison and fusion; alignment techniques are required to correct temporal mismatches (Zhang & Liu, 2019).
High volume and scalability: modern deployments can generate millions of data points per second; platforms must scale while maintaining reliability (Jensen et al., 2017).
Fast anomaly detection: detection and response should operate at sub-second latencies for events such as generator anomalies or fuel leaks (Grzesik & Mrozek, 2020; Dhanraj et al., 2020).

Different approaches have been developed to address these requirements at the edge and storage layers. At the edge, embedded devices such as Raspberry Pi or Speedgoat controllers are commonly used to reduce network load by correcting timestamps, validating data formats, and applying compression. These devices also enable local fault detection at an early stage (Grzesik & Mrozek, 2020). In the storage layer, time-series databases such as InfluxDB, TimescaleDB, and Alibaba Lindorm have become common, as these systems are designed specifically for handling ordered, high-frequency data streams. They offer efficient data persistence together with fast retrieval and configurable retention management, features that are essential for tracking both historical trends and real-time operating conditions within the ROC (Jensen et al., 2017).
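The first challenge in Table 2, aligning irregularly sampled streams onto a common time grid, can be sketched with simple linear interpolation. The sample data and grid step below are invented for illustration; production pipelines would typically use a time-series library rather than hand-rolled code.

```python
from bisect import bisect_left

# Align an irregularly sampled series onto a common time grid by linear
# interpolation, as needed before fusing heterogeneous streams.

def align(samples, grid):
    """samples: sorted (t, value) pairs; grid: target timestamps."""
    ts = [t for t, _ in samples]
    out = []
    for g in grid:
        i = bisect_left(ts, g)
        if i == 0:
            out.append(samples[0][1])       # before first sample: hold value
        elif i == len(samples):
            out.append(samples[-1][1])      # after last sample: hold value
        else:
            (t0, v0), (t1, v1) = samples[i - 1], samples[i]
            out.append(v0 + (v1 - v0) * (g - t0) / (t1 - t0))
    return out

readings = [(0.00, 10.0), (0.13, 11.0), (0.27, 13.0), (0.41, 12.0)]
grid = [0.0, 0.1, 0.2, 0.3, 0.4]
aligned = align(readings, grid)
print(aligned)
```

Once every stream is expressed on the same grid, cross-sensor comparison and fusion reduce to element-wise operations.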
Benchmarking studies such as TSM-Bench show that InfluxDB and TimescaleDB generally achieve high performance in both ingestion and querying, while Lindorm, although not the quickest in raw throughput, brings built-in machine learning tools that can directly assist ROC diagnostics (Jensen et al., 2017).

Real-time analysis is most often carried out through Complex Event Processing, either built into the database itself or via separate stream-processing platforms such as Apache Flink. This approach allows continuous pattern detection, sliding-window trend evaluation, and near-instant alerting. There is growing evidence that stream-based analysis can reveal operational problems faster and with better accuracy than conventional SCADA-based alarm systems, especially in smart-grid and off-grid cases (Grzesik & Mrozek, 2020; Dhanraj et al., 2020).

Visualisation is the final layer that ties these functions together, refreshing operator screens with minimal delay, ideally within about a second. Proper visualisation focuses attention on key parameters, short-term behaviour, and selected historical comparisons so that both human operators and automated agents can read the situation and react to changes without delay (Grzesik & Mrozek, 2020; Dhanraj et al., 2020).

The processes of measurement acquisition and time-series handling are closely connected to control and modelling tasks. Live data feeds become direct input for control algorithms and digital-twin models, which allow online adjustment of parameters, real-time simulation of possible actions, and constant monitoring of system performance against expected trajectories. In ROC-supervised autonomy this tight integration helps to recognise and mitigate emerging faults early, often before any human intervention is required.
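A minimal sliding-window detector in the spirit of the stream-based analysis above might look as follows. The window size, the sigma threshold, and the data stream are illustrative assumptions, not values from a specific ROC deployment.

```python
from collections import deque
from statistics import mean, stdev

# Sliding-window anomaly flagging: each new sample is compared against the
# recent window's mean and spread before being added to the window.

class SlidingAnomalyDetector:
    def __init__(self, window=20, n_sigma=4.0):
        self.buf = deque(maxlen=window)
        self.n_sigma = n_sigma

    def observe(self, value):
        anomalous = False
        if len(self.buf) >= 5:              # wait for a minimal history
            m, s = mean(self.buf), stdev(self.buf)
            if s > 0 and abs(value - m) > self.n_sigma * s:
                anomalous = True
        self.buf.append(value)
        return anomalous

det = SlidingAnomalyDetector()
stream = [50.0, 50.2, 49.9, 50.1, 50.0, 50.1, 49.8, 50.2, 70.0]  # final spike
flags = [det.observe(v) for v in stream]
print(flags)
```

A production CEP engine adds windowed joins, pattern queries, and back-pressure handling, but the core comparison against a rolling statistic is the same.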
2.2.5 Computational Topologies: Edge, Fog, and Cloud

Intelligent control for autonomous power systems imposes stringent requirements on real-time execution, low latency, and high processing capacity (Vidal et al., 2022). A three-tier computing structure consisting of edge, fog, and cloud layers has proved to be a practical basis for meeting the performance constraints of ROC-supervised distributed energy systems. In such setups, sensor data are gathered through SCADA connections and local edge devices, processed as near to the source as possible, and then forwarded in an optimised form to the upper layers for broader coordination (Vidal et al., 2022). Within this arrangement the edge level works almost like a local extension of the cloud, just one hop away from the sensors or actuators, which helps to cut latency and enables fast actuation whenever time-critical reactions are required (Shi et al., 2016). Experience from deployments in microgrids, autonomous vehicles, and industrial automation has shown that edge units are capable of validating control commands, detecting anomalies, and carrying out first-level sensor fusion on-site, without relying on remote servers (Satyanarayanan, 2017). The fog layer was introduced by Vidal et al. (2022) as an intermediate step between edge and cloud. Fog nodes are typically installed in substations or site-level gateways and provide more computational capacity than edge devices while maintaining lower latency than cloud services. According to Shi and Dustdar (2016), fog computing enables coordination across local networks, buffering of critical signals during connectivity disruptions, and pre-processing of data before cloud transmission. The cloud layer is located in remote data centres and provides the largest computational and storage resources.
Satyanarayanan (2017) states that cloud computing is mainly applied to resource-intensive tasks such as large-scale data analytics and the training of artificial intelligence models. Due to its higher latency, cloud computing is unsuitable for real-time control functions, but it supports predictive maintenance, historical analysis, and system-wide optimisation. A comparative overview of the three layers is presented in Table 3.

Table 3. Comparative characteristics of edge, fog, and cloud computing layers in autonomous control environments, adapted from Shi et al. (2016) and Satyanarayanan (2017).

Edge: latency < 10 ms; embedded, real-time compute; combustion control, actuator response.
Fog: latency 10–100 ms; site-level gateways; load balancing, anomaly detection.
Cloud: latency > 100 ms; virtualised clusters; fleet analytics, AI training, long-term planning.

It was noted by Eidson (2006) and IEEE (2008) that accurate time synchronisation is essential for coordinated performance across edge and fog layers. Network Time Protocol (NTP) provides millisecond-level accuracy but is insufficient for substation-level or real-time control. Precision Time Protocol (PTP), defined in IEEE 1588 and profiled in IEC 61850-9-3, achieves sub-microsecond precision and is therefore adopted in systems where reliable communication and accurate event correlation are required. Without this synchronisation, phasor tracking, fault detection, and sensor-data fusion may be compromised.

Each layer introduces advantages and limitations. Control executed at the edge minimises latency but restricts the use of complex models. Cloud-based execution allows large-scale optimisation but adds delay and increases dependency on wide-area connectivity. Edge and fog computing improve local autonomy and resilience, but their distributed structure raises exposure to physical and cyber security risks.
Shi and Dustdar (2016) highlight the role of adaptive middleware that dynamically reallocates tasks across layers according to operating context and network conditions.

In ROC-managed deployments, the mapping of layers follows a clear division of roles. The edge layer typically consists of programmable logic controllers, embedded controllers, and real-time prototyping platforms such as Speedgoat for critical control functions. The fog layer is commonly realised through substation or hub-level equipment, which coordinates local decision-making, executes backup logic, and performs diagnostics. The cloud layer connects multiple plants, enabling long-term planning, artificial intelligence training, and performance benchmarking (Yıldırım et al., 2025).

Within the EPS Laboratory, elements of this layered model are already present. Real-time control at the edge is provided by Speedgoat and embedded devices, while fog-level tasks are implemented within the SCADA system for test sequencing and procedure management. Cloud-level functions are supported through remote monitoring and collaborative analysis enabled by VPN connections and shared storage. Strengthening the interfaces between layers through semantic middleware, mirrored decision logic, and secure communication tools would align the laboratory with the computational substrate described in Chapter 4 and move it closer to full ROC-based autonomy.
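The division of roles in Table 3 can be sketched as a latency-driven placement rule: a task runs on the deepest (most resource-rich) tier whose typical latency still meets its deadline, and the tightest loops stay at the edge. The latency bounds mirror the table; the task list and deadlines are invented for illustration.

```python
# Latency-driven task placement across the three tiers of Table 3.
# Typical tier latencies (seconds) follow the table; tasks are hypothetical.

TIERS = [("edge", 0.010), ("fog", 0.100), ("cloud", 10.0)]

def place(deadline_s):
    # Deepest tier whose typical latency still meets the deadline; hard
    # real-time tasks that no tier can guarantee default to the edge.
    feasible = [name for name, latency in TIERS if latency <= deadline_s]
    return feasible[-1] if feasible else "edge"

tasks = {
    "combustion control": 0.005,    # hard real-time loop
    "anomaly detection": 0.5,       # site-level analytics
    "fleet AI training": 3600.0,    # batch workload
}
placement = {task: place(deadline) for task, deadline in tasks.items()}
print(placement)
```

The adaptive middleware discussed by Shi and Dustdar (2016) generalises such a rule by also weighing network conditions and current load when reallocating tasks.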
2.2.6 Sensor-Centric Cybersecurity in the Data Plane

In recent years, the extensive deployment of sensor networks within ROCs has markedly expanded the exposure of such systems to cyber threats, most notably at the sensing layer. Vulnerabilities including weak device authentication, false-data injection, and the exploitation of low-power wireless links can significantly compromise operational stability, data integrity, and ultimately the quality of supervisory decision-making. These issues become particularly critical in constrained devices with limited computational and energy resources, where conventional security stacks prove impractical (Al-Quayed et al., 2024; Gupta et al., 2023; Qiu et al., 2020). Research has therefore increasingly emphasised the need for a layered defence strategy, one that combines hardware-rooted identity mechanisms with physical-layer anti-spoofing and lightweight cryptographic methods suitable for embedded environments (Beaulieu et al., 2015; Maes, 2013).

One promising approach for strengthening authentication and data integrity is the application of Physically Unclonable Functions (PUFs). PUFs provide a hardware-rooted basis of trust by exploiting unavoidable manufacturing variations to create unique challenge-response pairs, allowing each device to be identified without relying on stored long-term keys. This is particularly beneficial in wireless sensor networks and industrial IoT nodes where secure memory is limited (Maes, 2013). Recent studies have combined PUFs with lightweight authentication protocols built on XOR operations, hashing functions, and session keys derived from real-time responses. Such designs enable mutual authentication at very low energy cost.
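The message flow of such a scheme, enrolment of a challenge-response pair followed by hash-based session-key derivation, can be sketched as follows. A real PUF response comes from device physics; here it is simulated with a per-device secret, so this is an illustration of the protocol shape at hash-level cost, not a vetted protocol.

```python
import hashlib
import hmac
import os

# Toy sketch of PUF-style challenge-response authentication. The PUF is
# *simulated* with a per-device secret; no long-term key crosses the wire.

DEVICE_SECRET = os.urandom(16)          # stands in for the physical PUF

def puf_response(challenge: bytes) -> bytes:
    return hashlib.sha256(DEVICE_SECRET + challenge).digest()

# Enrolment: the ROC stores one challenge-response pair per device.
challenge = os.urandom(16)
stored_response = puf_response(challenge)

# Authentication: the device recomputes the response on demand; both sides
# then derive a session key from the response and a fresh nonce.
nonce = os.urandom(16)
device_resp = puf_response(challenge)
session_key_device = hashlib.sha256(device_resp + nonce).digest()
session_key_roc = hashlib.sha256(stored_response + nonce).digest()
print(hmac.compare_digest(session_key_device, session_key_roc))
```

The fresh nonce is what gives replay resistance: an attacker who captures one session's traffic cannot reuse it, since the next session key depends on a new nonce.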
Moreover, public-key cryptographic schemes have been developed in which elliptic-curve keys are produced directly from PUF outputs, removing the need for pre-provisioned secrets and simplifying the onboarding of unattended ROC devices (Akhundov et al., 2020; Gupta & Varshney, 2023).

At the same time, sensor networks must be defended against spoofing and replay attacks that can disturb time-critical tasks such as event correlation, fault detection, and control. Physical-layer authentication methods, including RF fingerprinting, carrier-frequency-offset checking, and channel-state analysis, have been shown to distinguish legitimate transmissions from forged ones even under constrained wireless conditions (Liu et al., 2022). These techniques perform best when supported by secure time synchronisation. Implementations of the IEEE 1588 Precision Time Protocol (PTP) that use timestamped packets and cryptographic watermarks provide guarantees of both freshness and origin, and when paired with machine-learning classifiers they enhance the detection of replay attempts and reduce false-alarm rates (Qiu et al., 2020).

Lightweight cryptography forms the third key element of this defence strategy. Conventional RSA- or TLS-based mechanisms impose heavy memory, computation, and energy demands on embedded sensors, limiting their practicality. Alternatives such as the SIMON and SPECK block ciphers achieve confidentiality with much lower resource use, making them suitable for harsh or remote deployments (Beaulieu et al., 2015). Likewise, compact elliptic-curve settings allow efficient key exchange and message authentication. Hybrid schemes that combine ECC with PUF-derived keys further strengthen security by enabling devices to create secure sessions without storing long-term secrets (Akhundov et al., 2020; Gupta & Varshney, 2023). Deploying these mechanisms in industrial settings still presents challenges.
Standard stacks such as OPC UA and IEC 61850 are typically bound to PKI and TLS infrastructures, which exceed the capacities of ultra-low-power devices (Gupta et al., 2023). A practical solution is to delegate heavy cryptographic and identity-management tasks to fog nodes or middleware, allowing constrained devices to participate securely with minimal overhead. Behaviour-based adaptive trust models that draw on traffic patterns and signal features can add complementary protection by reducing dependence on static credentials (Qiu et al., 2020). Before implementation, candidate protocols should be validated through formal analysis methods such as AVISPA or BAN logic to confirm resilience against replay, desynchronisation, and man-in-the-middle attacks (Gupta & Varshney, 2023).

To summarise, hardware-rooted identity, physical-layer anti-spoofing supported by trusted time, and lightweight cryptographic schemes together form a coherent foundation for protecting sensor data in ROC-managed autonomous power systems. These same components define the security assumptions that underpin the models discussed in Chapter 4.

2.2.7 Industrial deployments demonstrating ROC readiness

This subsection presents six industrial deployments that illustrate how integrated sensing and remote operations support ROC readiness in different contexts.

Wärtsilä has developed Remote Support and Data Management services that utilise real-time information from sensors embedded in engines, fuel systems, turbochargers, emissions units, and thermal circuits. Typical variables include vibration, cylinder temperature, turbocharger speed, oil pressure, and emission levels. These are streamed to global Expertise Centres for live monitoring and analysis. A practical example was reported in which repeated night-time shutdowns were resolved when remote specialists identified a faulty speed-sensor cable using live feeds and historical logs.
Guided replacement on site restored full operation in less than two hours, avoiding prolonged downtime and travel expenses (Wärtsilä, 2023a).

In the turbomachinery domain, MAN Energy Solutions reports the use of more than 40,000 sensors that provide input to an AI-based Model Predictive Controller. The system continuously tracks flow behaviour, pressure, temperature, and vibration. Company figures indicate that this approach has reduced unplanned downtime by approximately 30 percent and achieved annual cost savings of up to €2 million through lower maintenance requirements, extended equipment life, and improved efficiency (MAN Energy Solutions, 2023).

Shell has introduced autonomous operations in a chemical-plant setting. In this deployment, optical sensors and robotic systems perform monitoring tasks that would normally require on-site personnel. The plant operates continuously with only two remote operators. Robotic units equipped with thermal and optical cameras conduct routine inspections and diagnostics, reducing the need for human presence in hazardous environments while maintaining operational continuity (Chemical Processing, 2023).

Autonomous aerial inspection has been applied by Sensorem in remote Australian mining regions. The company operates UAVs equipped with electro-optical and LiDAR sensors, which are used for inspections in areas with difficult terrain. More than 11,000 beyond-visual-line-of-sight missions have been reported. Real-time sensor feedback supports obstacle avoidance, infrastructure assessment, and transmission of inspection data to central control hubs. This demonstrates how mobile sensing platforms can extend the reach of ROC supervision (FlytBase, 2023).

The Calistoga Resiliency Centre represents an example of hybrid microgrid implementation. The facility integrates hydrogen fuel cells with battery storage and applies fibre-optic distributed temperature sensing together with AI-assisted controllers.
These elements provide load balancing, thermal monitoring, and equipment protection. During external outages, the microgrid can supply power independently for up to 48 hours, with sensor-driven automation playing a central role in ensuring resilience (Calistoga City Council, 2022).

Predictive analytics is exemplified by ROTEC’s PRANA platform, which aggregates extensive vibration and temperature measurements collected from thousands of sensors distributed across multiple thermal power plants. The system continuously evaluates the operational condition of critical components such as turbines, pumps, and generators, enabling early detection of performance degradation and potential mechanical faults. Since its deployment, the platform has reportedly prevented more than 300 major equipment failures, demonstrating the tangible benefits of predictive analytics supported by high-fidelity sensor data (ROTEC, 2022).

These industrial implementations illustrate that dense, well-integrated sensing architectures, when complemented by remote diagnostics, autonomous inspection, and predictive analytics, can yield measurable enhancements in plant availability, operational safety, and overall efficiency. Furthermore, they provide concrete design patterns and functional insights that directly inform the ROC development considerations discussed in Chapter 4.

2.3 Control and Decision-Making Frameworks

Autonomous energy systems call for advanced control and decision-making methodologies capable of addressing complex system dynamics, stringent real-time constraints, inherent safety requirements, and long-term operational objectives. Recent research has evolved beyond traditional reactive feedback mechanisms by incorporating elements of prediction, optimisation, learning, and resilience.
Together, these components provide a comprehensive basis for maintaining reliable performance under varying conditions of uncertainty and environmental change. Accordingly, this section outlines the principal strategies employed in remotely operated and autonomous hybrid energy systems and establishes the conceptual foundation for the Remote Operation Centre (ROC) design that is further developed in Chapter 4.

Proportional-integral-derivative (PID) control has long been the most widely used method in industrial automation. It is valued for its simple structure and its ability to work across a broad range of processes. However, tuning its parameters becomes difficult in systems with nonlinear behaviour, time-varying operating conditions, or strong coupling between control variables. PID continues to perform well for single-input, single-output loops close to steady-state operation, but its effectiveness decreases in multivariable systems with interactions or in cases where strict constraints must be respected (Qin & Badgwell, 2003). To address these challenges, Model Predictive Control (MPC) has been widely adopted. MPC computes constrained optimal actions across a receding horizon using a system model. It is capable of explicitly handling multivariable couplings and state/input constraints and has been applied in areas such as microgrids, engine systems, and building energy control (Rawlings et al., 2020).

With the growth of data availability and the increasing difficulty of first-principles modelling, AI-based controllers, particularly reinforcement learning (RL), have become more prominent. These methods derive policies from data, either offline or during interaction with the system. They are particularly suitable when the plant is only partially observable or when non-stationary disturbances affect performance (Yu et al., 2024).
In energy applications, RL and learning-enabled controllers have been studied for adaptive energy management, fault handling, and incremental performance improvement (Zahra & Singh, 2022).

Fault-tolerant control (FTC) contributes to safe operation when components fail or when abrupt changes occur. Passive FTC provides robustness by design without changing the control structure, while active FTC detects faults and reconfigures the controller in response (Ghosh et al., 2020). In addition, decentralised and distributed control schemes have become important for modular or geographically dispersed assets such as islanded microgrids and hybrid maritime power systems. These methods allow local autonomy while still maintaining coordination at higher levels (Luo et al., 2023).

In practice, these control approaches are combined in hierarchical structures. Local MPC enforces fast constraints and ensures feasibility, while an upper-level learner, for example RL, adapts supervisory policies to evolving conditions. Meanwhile, an FTC layer monitors the system and activates reconfiguration when necessary. Such layered control reconciles the demands of real-time responsiveness with long-term optimisation in ROC-supervised deployments. The subsections that follow examine each method's principles, application contexts, and main challenges. These insights inform the architectural design, module interfaces, and validation strategy applied in the ROC proposal of Chapter 4.

2.3.1 Model Predictive Control (MPC)

Model Predictive Control (MPC) has become one of the most extensively applied methods for supervisory control in systems that demand predictive, multi-objective, and constraint-aware decision-making. Its significance to autonomous energy platforms arises from its capability to anticipate system behaviour and optimise control actions while adhering to underlying physical limitations.
This makes MPC particularly well-suited for hybrid energy systems that exhibit nonlinear load profiles, discrete switching behaviours, and tightly coupled thermal, electrical, and mechanical subsystems.

The fundamental principle of MPC is established upon a discrete-time representation of the controlled system, which is most commonly formulated in a linear state-space structure (Rawlings et al., 2020):

x_{k+1} = A x_k + B u_k (4)

Here, x_k represents the system state at time step k, and u_k is the control input. This model serves as the basis for forecasting the future behaviour of the system and for solving an optimisation problem that balances the defined performance objectives against operational and physical constraints. In the context of energy systems, such objectives typically encompass minimising fuel consumption, maintaining battery state-of-charge within desired limits, reducing NOx emissions, and constraining mechanical stress or component wear to acceptable levels.

Several industrial implementations have demonstrated the practicality of MPC in real-time operational environments. Time-varying MPC has been applied to wind turbines for dynamically adjusting blade pitch and generator torque in response to continuously changing wind conditions (Dickler et al., 2021). In islanded microgrids, distributed MPC (DMPC) has been employed to coordinate local generation and storage assets while ensuring that voltage and frequency remain within prescribed tolerances (Liu et al., 2022). At the embedded level, explicit MPC (eMPC) achieves sub-millisecond control by relying on precomputed control laws, thereby enabling real-time operation on resource-constrained hardware platforms, such as those used in remote or mobile environments (Rawlings et al., 2020; Dickler et al., 2021).

Despite these demonstrated benefits, conventional MPC frameworks can exhibit sensitivity to model uncertainties and unmeasured system dynamics.
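To make the receding-horizon idea behind Equation (4) concrete, the sketch below uses small illustrative matrices: at each sample a quadratic cost over a ten-step horizon is minimised in batch form, only the first input is applied, and the optimisation is repeated at the next sample. The matrices, weights, and input bound are invented for illustration; production MPC uses an identified plant model and a constrained QP solver.

```python
import numpy as np

# Illustrative 2-state plant x_{k+1} = A x_k + B u_k (values invented).
A = np.array([[1.0, 0.1],
              [0.0, 0.9]])
B = np.array([[0.0],
              [0.1]])
N = 10  # prediction horizon

def mpc_step(x0, u_max=1.0):
    """One receding-horizon step: minimise sum(||x_k||^2 + 0.01 u_k^2)
    over the horizon in batch form, then apply only the first input."""
    n, m = A.shape[0], B.shape[1]
    # Batch prediction: x_stack = F x0 + G u_stack
    F = np.vstack([np.linalg.matrix_power(A, k + 1) for k in range(N)])
    G = np.zeros((N * n, N * m))
    for i in range(N):
        for j in range(i + 1):
            G[i*n:(i+1)*n, j*m:(j+1)*m] = np.linalg.matrix_power(A, i - j) @ B
    # Unconstrained quadratic optimum, then clip to the input bound
    # (a stand-in for the constrained QP solved by production MPC).
    H = G.T @ G + 0.01 * np.eye(N * m)
    u = np.linalg.solve(H, -G.T @ F @ x0)
    return float(np.clip(u[0], -u_max, u_max))

# Closed loop: replan at every sample (the receding-horizon principle).
x = np.array([1.0, 0.5])
for _ in range(50):
    u = mpc_step(x)
    x = A @ x + B.flatten() * u
```

The controller regulates the state toward the origin while respecting the input bound; its performance degrades if A and B drift away from the true plant, which is exactly the model-sensitivity issue noted above.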
Consequently, hybrid control architectures have emerged, integrating learning-based techniques such as reinforcement learning (RL) alongside MPC. In such configurations, RL agents are often employed to adapt control parameters or formulate long-term strategies, whereas MPC continues to enforce safety and feasibility within the short-term prediction horizon. This combined approach has been successfully implemented in energy systems characterised by high renewable penetration and variable fault conditions (Zhang et al., 2021).

Within ROC-managed infrastructures, MPC operates across several hierarchical layers. At the edge level, it provides rapid and deterministic control for localised subsystems. At the fog layer, it performs supervisory coordination by leveraging updated models, constraint information, and prioritisation schemes. Its intrinsic capability to manage constraints, resolve multi-objective trade-offs, and integrate seamlessly with estimation and diagnostic tools establishes MPC as a central component of supervisory control design. However, the overall effectiveness of MPC remains dependent on the fidelity of the underlying system model, the accuracy of state estimation, and the computational efficiency of the solver implementation. These factors become especially critical in distributed autonomous systems, where plant dynamics are complex, nonlinear, and continuously evolving.

2.3.2 AI-Based and Reinforcement Learning Controllers

The growing complexity and geographical spread of modern power systems, driven by the integration of distributed energy resources (DERs), make it necessary for ROCs to use controllers that can adapt on their own while still keeping safety, reliability, and efficiency under uncertain conditions. In this context, Artificial Intelligence (AI), and especially reinforcement learning (RL), is used to support data-driven decision-making and to build control strategies that operate with only limited direct supervision.
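The idea of deriving a policy from interaction rather than from a plant model can be grounded with a toy tabular Q-learning sketch. It is far simpler than the deep RL controllers surveyed below, and the five-state chain, rewards, and learning parameters are invented for illustration; deep RL replaces the table with a neural network over continuous grid states.

```python
import random

# Toy 5-state chain: start at state 0, reward at state 4 (GOAL).
N_STATES, GOAL = 5, 4
MOVES = (-1, +1)                      # action 0: left, action 1: right
alpha, gamma, eps = 0.5, 0.9, 0.2     # learning rate, discount, exploration
Q = [[0.0, 0.0] for _ in range(N_STATES)]
random.seed(0)

def step(s, a):
    """Environment: move along the chain, small step cost, +1 at the goal."""
    s2 = min(max(s + MOVES[a], 0), N_STATES - 1)
    return s2, (1.0 if s2 == GOAL else -0.01), s2 == GOAL

for _ in range(300):                  # learn purely from interaction
    s, done = 0, False
    while not done:
        greedy = max((0, 1), key=lambda i: Q[s][i])
        a = random.randrange(2) if random.random() < eps else greedy
        s2, r, done = step(s, a)
        # Q-learning update toward the bootstrapped return
        Q[s][a] += alpha * (r + gamma * max(Q[s2]) * (not done) - Q[s][a])
        s = s2

policy = [max((0, 1), key=lambda i: Q[s][i]) for s in range(N_STATES)]
```

After training, the greedy policy moves right from every non-terminal state; no model of the chain was ever supplied, only rewards observed during interaction.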
Deep reinforcement learning (DRL) has been applied in a variety of power-system contexts. By learning policies through interaction with the environment, DRL is well suited to time-varying tasks such as voltage regulation, power balancing, fault handling, and demand-supply management. In one grid-reconfiguration study, a duelling double deep Q-network (D3QN) successfully adjusted the power-flow topology in the IEEE 14-bus system, demonstrating the feasibility of fully automated high-level decision-making (Damjanović et al., 2022). For emergency control purposes, DRL has been applied to dynamic braking and under-voltage load shedding in the IEEE 39-bus test system, where it has demonstrated the capability to respond to faults without relying on predefined threshold rules (Huang et al., 2019). These studies suggest that DRL can strengthen the autonomy of ROCs and improve overall system stability under nonlinear and nonstationary operating conditions.

Because trial-and-error exploration can introduce risks in safety-critical environments, recent research has placed increasing attention on the concept of safe reinforcement learning. This approach embeds safety directly into both the training and deployment stages by incorporating mechanisms such as reward shaping, safety filters, constrained policy updates, and the inclusion of expert demonstrations. Contemporary surveys highlight the use of safe RL in frequency control, voltage regulation and microgrid management, with particular focus on integrating operational limits and safety constraints into the learning process itself (Bui et al., 2024; Yu et al., 2024). These techniques are especially relevant for ROC-managed systems, where ensuring safe and reliable operation remains an absolute requirement.

Another active area of development is multi-agent reinforcement learning.
The physical distribution of DERs and communication limitations motivate decentralised and coordinated architectures. PowerNet is an example of a multi-agent DRL framework that coordinates distributed generators using location-aware rewards and structured communication. This enables scalable team-based control across multiple sites, which is consistent with ROC supervision of microgrid clusters and DER fleets (Chen et al., 2020). To support standardised evaluation, recent surveys outline evaluation practices and stress-testing protocols that expose agents to diverse operating conditions prior to deployment (Bui et al., 2024; Yu et al., 2024).

In summary, DRL contributes to autonomy, safe RL ensures constraint-aware learning in safety-critical contexts, and multi-agent architectures provide scalability for geographically distributed assets. Together, these strands form a coherent basis for AI-based supervisory control in ROC environments. They also inform the design of supervisory AI modules in Chapter 4, including policy adaptation under uncertainty and integration with MPC for runtime-assured decision-making.

2.3.3 Fault-Tolerant and Decentralised Control

Autonomous power systems are increasingly distributed, which makes decentralised control and fault tolerance essential for stability and reliability, particularly under ROC supervision. Heavy reliance on central command structures brings several risks, including communication delays, hardware failures, and cyber disturbances. To reduce such vulnerabilities, recent approaches shift more authority to local controllers. They also build resilience so that operation can continue even when part of the system fails. Decentralised control lets each subsystem act on its own measurements with only limited coordination. This improves both responsiveness and scalability, especially in systems with low inertia and variable energy supply.
In these situations, decentralised designs support real-time regulation and demand coordination without creating a single point of failure (Zahra & Singh, 2022; Ahsan et al., 2023).

Industrial middleware provides practical examples of this concept. The Resilient Information Architecture Platform for Smart Grid (RIAPS) offers resource monitoring, event management, and distributed fault detection across nodes. When a task or device fails, RIAPS performs local recovery actions. As a result, microgrid controllers and data-processing tasks can keep operating even if individual components are lost (Ghosh et al., 2020). Fault-tolerant control (FTC) contributes to performance preservation in the presence of sensor, communication, or actuator faults by detecting anomalies and reconfiguring control actions. For example, in permanent-magnet machines, a detect-and-reconfigure scheme was able to maintain stability under fault conditions. This technique is transferable to applications such as wind farms, offshore platforms, and remote substations, where high availability is critical (Hao, Li, & He, 2011). Complementary approaches based on consensus further enhance recovery. Combining wavelet analysis with local agreement protocols allows groups of generators to remain synchronised despite sensor dropouts or communication failures, thereby reducing dependence on a central coordinator and exploiting device-to-device communication (Han, 2023).

From an architectural perspective, fault handling should be designed into communication middleware and control software from the outset. Beyond node-level resilience, system-level services also need to handle degradation detection, mode switching, and safe reconfiguration. The RIAPS platform again provides an example of this integration: it supports machine-learning modules and fault-diagnosis functions as core system components (Ghosh et al., 2019).
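The detect-and-reconfigure pattern can be reduced to a few lines: a supervisor compares each sensor reading with a one-step model prediction and, when the residual exceeds a threshold, reconfigures the loop to coast on the model instead of the faulty sensor. The decay model, fusion gain, and threshold below are invented for illustration.

```python
def supervise(readings, threshold=0.5):
    """Track a decaying signal; flag and ignore readings whose residual
    against the model prediction exceeds the threshold."""
    est, faults = readings[0], []
    for k in range(1, len(readings)):
        pred = 0.9 * est                    # internal model: x_{k+1} = 0.9 x_k
        residual = abs(readings[k] - pred)
        if residual > threshold:
            faults.append(k)                # fault detected: coast on the model
            est = pred
        else:
            est = pred + 0.5 * (readings[k] - pred)   # fuse the measurement
    return est, faults

true_states = [10 * 0.9 ** k for k in range(10)]
readings = true_states[:5] + [9.0] * 5      # sensor sticks at 9.0 from k = 5
est, faults = supervise(readings)
```

From step 5 onward every reading is rejected and the estimate continues to follow the model, so the control loop retains a plausible signal despite the stuck sensor. In a real deployment this logic would live in middleware of the kind described above, with a properly tuned residual threshold.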
In ROC-managed deployments, these capabilities help local units maintain safe operation and continue exchanging critical state information even when higher layers are degraded or temporarily unreachable.

Decentralised control, combined with explicit fault-tolerant mechanisms, allows scalable autonomy that can sustain safe operation during disturbances and partial failures. Together with predictive and learning-based strategies, these methods form the mathematical foundation for supervisory intelligence in ROC-managed systems. Their practical performance, however, still depends on accurate state and parameter estimation, as well as careful management of model-plant mismatch and integration of predictive diagnostics. These factors create the motivation for adopting digital twin technologies, which are discussed further in Section 2.5.

2.4 Communication and Integration Protocols

Remote Operation Centres (ROCs) depend on communication infrastructures that can support real-time and secure data exchange across heterogeneous field devices and wide geographical areas. Legacy, centrally oriented links are often insufficient, as they do not provide the peer-to-peer interaction, rapid fault recovery, or interoperability required at scale. Modern deployments therefore integrate a range of standards, including OPC UA, IEC 61850, MQTT, Modbus, and CAN. Each is applied according to its role and layer within the system, with growing emphasis on decentralised and resilient communication (Tebekaemi & Wijesekera, 2018).

In addition, recent studies emphasise that communication and control must be closely coupled to maintain stability under conditions of renewable variability and high levels of automation. Effective communication architectures therefore not only provide data exchange but also enable dynamic coordination and robust supervisory control (Sajadi et al., 2019; Wang et al., 2022).
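Before the individual protocols are examined, the layering idea can be sketched in miniature: a gateway lifts raw field-level register reads into a semantically tagged, unit-annotated namespace that higher layers can consume. The register addresses, tag names, and scale factors below are invented; real gateways map into OPC UA or IEC 61850 object models.

```python
# Hypothetical register map: address -> (tag, unit, scale factor).
REGISTER_MAP = {
    40001: ("genset1/active_power", "kW", 0.1),
    40002: ("genset1/coolant_temp", "degC", 0.01),
}

def to_namespace(raw_registers):
    """Translate raw integer register reads into semantically tagged,
    unit-annotated records, as a protocol gateway would."""
    records = {}
    for addr, value in raw_registers.items():
        tag, unit, scale = REGISTER_MAP[addr]
        records[tag] = {"value": value * scale, "unit": unit}
    return records

data = to_namespace({40001: 12500, 40002: 7215})
```

The field side stays simple and robust (plain integers over Modbus-style links), while every consumer above the gateway sees named, scaled engineering values.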
2.4.1 Data plane and time base

Several communication standards are applied in ROC-managed systems, each contributing specific capabilities at different layers of the architecture. OPC UA provides interoperable client-server and publish-subscribe communication patterns together with rich semantic data models. Companion specifications extend its use to smart-grid contexts and allow integration with IEC 61850-based environments (Tebekaemi & Wijesekera, 2018; Wang et al., 2022). IEC 61850 itself defines substation object models in the form of logical nodes and supports time-critical messaging through GOOSE and Sampled Values. These mechanisms enable protection and control actions with response times below four milliseconds and support progressive digital upgrades in utility environments (Rehtanz, 2003; Wang et al., 2022).

MQTT provides a lightweight publish-subscribe transport that is particularly suited to constrained communication links in remote microgrids or isolated facilities. Although it lacks native semantic modelling, it can be integrated through middleware layers that supply the required data context (Sajadi et al., 2019). At the equipment edge, Modbus and CAN continue to be widely used for simple and robust signalling. In ROC deployments, these protocols are typically gatewayed into higher-level namespaces for interoperability across the wider system (Bumiller et al., 2010). According to latency measurements by Mathiesen (2024), CAN and its extensions exhibit transmission delays from tens of microseconds up to several milliseconds, corresponding to a medium-to-high real-time performance classification in Table 4.

A common time base for these heterogeneous communication channels is established elsewhere in the stack through synchronisation protocols such as Precision Time Protocol (PTP) or Network Time Protocol (NTP). This ensures coherent data fusion and reliable event correlation across system layers, as discussed in Section 2.2.3.
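The client-server exchange behind NTP-style synchronisation can be written down directly: from the four timestamps of one request/response round trip, the client recovers its clock offset and the path delay. The calculation assumes a symmetric network path, which is the protocol's standard assumption; the timestamps below are invented for illustration.

```python
def ntp_offset_delay(t1, t2, t3, t4):
    """Standard NTP offset/delay calculation (all times in seconds):
    t1 client send, t2 server receive, t3 server send, t4 client receive.
    Assumes the outbound and return path delays are equal."""
    offset = ((t2 - t1) + (t3 - t4)) / 2.0
    delay = (t4 - t1) - (t3 - t2)
    return offset, delay

# Example: client clock 0.5 s behind the server, 40 ms delay each way.
offset, delay = ntp_offset_delay(100.000, 100.540, 100.541, 100.081)
```

Here the client recovers an offset of +0.5 s and a round-trip delay of 80 ms; asymmetric paths bias the offset estimate, which is one reason PTP adds hardware timestamping and transparent clocks for tighter bounds.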
Table 4. Protocol Feature Comparison for ROC Applications

| Feature                    | OPC UA                 | IEC 61850                   | MQTT              | Modbus           | CAN               |
|----------------------------|------------------------|-----------------------------|-------------------|------------------|-------------------|
| Communication Model        | Client/Server, Pub/Sub | Event-driven, Client/Server | Publish/Subscribe | Request/Response | Broadcast/Message |
| Semantic Data Modelling    | Yes                    | Yes                         | No                | No               | No                |
| Real-Time Performance      | Medium                 | High (<4 ms)                | Medium            | Low              | Medium-High       |
| Interoperability           | High                   | High                        | Medium            | Low-Medium       | Low               |
| Resource Efficiency (Edge) | Medium                 | Low                         | High              | High             | High              |
| Native Security Features   | TLS/PKI                | IEC 62351                   | Basic             | None             | None              |

Note. Real-time performance categories are derived from typical latency ranges observed in industrial deployments. Latency classification for CAN protocols is based on measurements reported by Mathiesen (2024), where transmission delays ranged between approximately 70 µs and 7267 µs depending on frame type and network load.

The information shown in Table 4 comes from a careful review of trusted sources in communication and automation research. Features such as latency, support for data modelling, and performance at the edge level were compared based on findings from Tebekaemi & Wijesekera (2018), Rehtanz (2003), Sajadi et al. (2019), and Bumiller et al. (2010). The comparison shows that each protocol brings different strengths and challenges when used in real-time systems. It also points out that no single protocol works best for every task. Because of this, Remote Operation Centres (ROCs) benefit from using a layered mix of protocols that can work together across different parts of the system.

2.4.2 Data lifecycle and latency

For ROC applications, communication infrastructures must be capable of scaling with plant size while remaining regulation-aware and closely integrated with control functions (Sajadi et al., 2019; Wang et al., 2022). High- and low-frequency measurement streams from data acquisition units are normalised within an archive that applies consistent schema and quality flags.
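The normalisation step can be sketched as a function that maps each raw acquisition sample onto a uniform schema and attaches a quality flag. The field names, flag values, and staleness threshold are invented for illustration; historians such as OPC UA archives use their own status-code vocabularies for the same purpose.

```python
def normalise(sample, now, max_age=5.0):
    """Map a raw sample onto a uniform schema with a quality flag:
    'good', 'stale' (older than max_age seconds) or 'bad' (non-numeric)."""
    value, ts = sample.get("value"), sample.get("timestamp", now)
    if not isinstance(value, (int, float)):
        quality = "bad"
    elif now - ts > max_age:
        quality = "stale"
    else:
        quality = "good"
    return {"tag": sample.get("tag", "unknown"), "value": value,
            "timestamp": ts, "quality": quality}

rec = normalise({"tag": "p1/speed", "value": 1480.0, "timestamp": 99.0},
                now=100.0)
```

Downstream analytics can then filter on the flag instead of re-validating every stream, which keeps fast and slow acquisition channels comparable inside one archive.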
In parallel, streaming channels provide timely data ingestion for analytics and decision-making.

The end-to-end data path typically begins at the plant, passes through local control and acquisition systems, and proceeds to ROC ingestion services. From there, data are processed in AI modules, validated, and subsequently dispatched as control actions. Percentile latency and jitter targets are defined for each stage of this chain to ensure reliable timing performance. To maintain continuity during partial outages, modern designs increasingly employ peer-to-peer communication paths that allow local subsystems to exchange critical information without relying solely on central coordination (Tebekaemi & Wijesekera, 2018).

2.4.3 Cybersecurity Layers and Secure Interfaces

In ROC-based autonomous power systems, communication infrastructures must meet strict cybersecurity requirements in addition to performance objectives. As edge devices, substations, and cloud platforms become interconnected, every interface constitutes a potential attack surface. Without layered protection, such systems remain exposed to unauthorised access, data manipulation, replay attempts, and denial-of-service attacks (Sajadi et al., 2019; Wang et al., 2022).

OPC UA provides a structured security framework that includes Transport Layer Security (TLS) for encryption, certificate-based authentication using X.509 certificates, secure session handling, and role-based access control. When configured correctly, these features support compliance with IEC 62443 (Wang et al., 2022). Field experience, however, shows that weak configurations such as open endpoints or outdated cipher suites can reduce their effectiveness. This underlines the need for secure defaults and strict configuration management in operational environments (Sajadi et al., 2019). In IEC 61850-based systems, protection is specified in the IEC 62351 standard.
The framework defines digital signatures, message authentication codes, and public-key infrastructure for key management. Network-level measures, including VLANs and zoning, are also used to separate critical substation traffic. These mechanisms help protect time-critical communication services such as GOOSE and Sampled Values, while still maintaining the sub-cycle performance needed for protection and control (Rehtanz, 2003). Even with these standards in place, insecure practices are still widespread. MQTT brokers are often deployed without authentication, and Modbus messages are still sent in clear text. OPC UA can also become a weak point if its security settings are left at default or incorrectly applied. In practice, the actual level of security depends more on configuration, maintenance, and update discipline than on the protocol's built-in capabilities (Bumiller et al., 2010; Sajadi et al., 2019; Wang et al., 2022).

Defence-in-depth designs therefore include intrusion detection systems tailored to Modbus, MQTT, and GOOSE traffic; one-way gateways or data diodes to separate critical assets from internet-facing segments; and micro-segmentation aligned with ISA/IEC 62443. Together, these measures reduce both the likelihood and the impact of cyberattacks and strengthen the resilience of communication infrastructures in distributed autonomous power systems.

2.5 Digital Twins and Predictive Diagnostics

Digital twins (DTs) are real-time, virtual counterparts of plant subsystems that are able to anticipate future behaviour and reflect the evolving physical state. They can also ingest time-aligned system measurements to adapt and modify themselves.
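The notion of a twin "modifying itself" from time-aligned measurements can be illustrated with the simplest possible self-updating model: the twin represents the plant as x_{k+1} = a·x_k and keeps the decay parameter a aligned with reality through an exponentially weighted update. All numbers are invented for illustration; real twins adapt many parameters of much richer physics-based models.

```python
def update_twin(a_est, x_prev, x_now, rate=0.2):
    """Nudge the twin's decay parameter toward the value implied by the
    latest measurement pair (simple exponentially weighted update)."""
    if abs(x_prev) < 1e-9:
        return a_est          # no information in a (near-)zero sample
    a_implied = x_now / x_prev
    return a_est + rate * (a_implied - a_est)

# Plant truly decays with a = 0.95; the twin starts with a stale 0.80.
a, x = 0.80, 1.0
for _ in range(60):
    x_next = 0.95 * x          # time-aligned measurement from the plant
    a = update_twin(a, x, x_next)
    x = x_next
```

After a few dozen samples the twin's parameter has converged to the plant's true value, so predictions made with the twin stay consistent with the physical asset as it drifts.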
In ROC-supervised autonomous power plants, DTs underpin three core functions: predictive support, where the DT estimates variables for which no sensors exist; supervisory support for control, where the twin supplies fast predictions to assist set-point selection and constraint handling; and predictive diagnostics, where the twin tracks degradation, detects incipient faults, and informs maintenance and risk-aware operation. Industrial studies consistently argue that future power facilities will require a robust DT architecture to achieve high reliability, availability, and maintainability at lower lifecycle cost (Sleiti, 2022). Within such an architecture, hybrid DT approaches fuse physics-based structure with data-driven learning to predict performance degradation more accurately than either method alone (Hartmann et al., 2021).

From a diagnostics point of view, DTs make it possible to keep a continuous watch over system behaviour and give practical insight for detecting faults and tracing their root causes. This is achieved by maintaining a real-time reflection of the system's dynamic state (Pasupuleti, 2025). Examples from different industrial sectors show how this works in practice. In thermal power plants, for instance, turbine-rotor twins demonstrate abilities in virtual monitoring, physical-state tracking, and predicting abnormal operating conditions for critical assets (Li et al., 2022).

At a larger scale, plant-level decision-support systems that integrate DTs with machine learning have been proposed to forecast operational trends and deviations in engine-based thermal facilities. These systems improve operator awareness and allow timely actions before failures develop (Pasupuleti, 2025). Meanwhile, studies from the wind-energy sector point to a growing maturity in predictive DTs that merge physics-based, data-driven, and hybrid modelling methods to support operation and maintenance.
Altogether, these developments show a convergence of modelling approaches that can serve different types of generation technologies (Sleiti et al., 2022).

Meeting ROC timing requirements demands execution-efficient twins. Model-order reduction (MOR) is widely recognised as a key enabling technology. By projecting high-fidelity models onto reduced spaces, MOR accelerates simulation while preserving predictive accuracy to a degree sufficient for online supervision and control. This makes applications feasible that would otherwise be constrained by computational cost (Hartmann et al., 2021). In practice, reduced-order or surrogate twins provide short-horizon predictions for supervisory control, while higher-fidelity counterparts operate asynchronously for validation and policy improvement, an arrangement consistent with the multi-tier execution used elsewhere in this thesis.

2.5.1 Digital Twin-as-Observer and Predictor for Model Predictive Control (MPC)

In supervisory control architectures, the digital twin can extend beyond its role as a simulation tool to act as a real-time observer for unmeasured variables, serving as a state estimator for Model Predictive Control (MPC). In this configuration, the twin continuously estimates both the system state vector x_k and key parameters of the physical plant, providing the optimiser with plant-consistent information at every control interval. This directly addresses one of the core challenges of MPC: maintaining alignment between the predictive model and the evolving, uncertain dynamics of the plant. By reducing model-plant mismatch, observer-based digital twins improve constraint handling and raise the accuracy of short-horizon predictions. Built on established MPC frameworks (Rawlings et al., 2020), digital twins act as real-time observers that rebuild hidden states and parameters when direct measurements are limited or unreliable.
This improves predictive control under uncertain or changing plant conditions. Traditional estimation methods such as Kalman filtering and moving-horizon estimation offer structured ways to estimate internal states, but they are often limited by model mismatch, unmeasured dynamics, and sensitivity to noise. Digital twins extend these methods by combining physics-based models with data-driven updates. This hybrid grey-box approach allows the model to stay aligned with live operational data.

In this role, the twin works as a virtual sensor and at the same time adjusts its estimation accuracy when operating conditions change. Arulampalam et al. (2002) highlight the use of Bayesian filtering techniques, including EKF, UKF, and particle filters, for reconstructing nonlinear and noisy state variables. Zhang and Liu (2019) show that combining EKF with MHE improves robustness under multi-rate sampling and measurement noise. These studies confirm the role of digital twins as observers that provide MPC with steady and timely state estimates in uncertain environments.

The architectural role of digital twins as observers can also be seen in the wider system where they operate. Figure 2 shows a digital twin framework adapted from ABB (ABB, 2020). In it, simulation, information, and machine-learning models are linked with real-time data streams, enterprise systems, and the Asset Administration Shell (AAS). Within this structure, the twin works as a hub for state and parameter estimation. It uses inputs from devices, installed base records, and master data. The AAS provides standardised links between the twin and supervisory or external systems, keeping consistency across the asset lifecycle.

This setup underlines the observer function. By combining different model views with operational data, the twin gives MPC accurate and timely estimates of unmeasured states and slowly changing parameters.
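The observer role can be grounded in its simplest form: a scalar linear Kalman filter, the baseline that the EKF, UKF, and moving-horizon estimators cited above generalise. The decaying plant dynamics and noise levels below are invented for illustration.

```python
import random

# Scalar Kalman filter as a minimal observer: estimate a decaying state
# x_{k+1} = a x_k + w_k from noisy measurements z_k = x_k + v_k.
random.seed(1)
a, q, r = 0.95, 0.01, 0.5          # decay, process var., measurement var.
x, x_est, p = 1.0, 0.0, 1.0        # true state, estimate, estimate variance
sq_err_raw = sq_err_kf = 0.0

for _ in range(500):
    x = a * x + random.gauss(0.0, q ** 0.5)   # plant evolves
    z = x + random.gauss(0.0, r ** 0.5)       # noisy sensor reading
    x_pred = a * x_est                        # predict with the model
    p_pred = a * a * p + q
    k_gain = p_pred / (p_pred + r)            # update with the measurement
    x_est = x_pred + k_gain * (z - x_pred)
    p = (1.0 - k_gain) * p_pred
    sq_err_raw += (z - x) ** 2                # error of trusting the sensor
    sq_err_kf += (x_est - x) ** 2             # error of the filtered estimate
```

Over the run the filtered estimate accumulates far less squared error than the raw measurements, because the predict-update cycle blends the model with each sample; a twin-as-observer wraps this same cycle around a much richer plant model and feeds the result to the MPC optimiser.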
In ROC-managed systems, this ensures that observer-based control is built into a scalable and interoperable environment rather than developed in isolation. Even with these advances, challenges remain for ROC-focused systems. Twin observers need to merge data streams that arrive at different rates, which calls for reliable multi-rate sensor fusion. Their performance must also be tested under real edge-fog latency, since communication delays can affect how quickly estimation results are delivered. Handling these issues is important for using observer-based twins in autonomous power systems, where supervisory MPC depends on fast, accurate, and consistent state reconstruction.

Figure 2. Digital twin framework integrating multi-model data through the Asset Administration Shell; The digital twin: From hype to reality (ABB, 2020)

2.5.2 Twin-in-the-Loop Model Predictive Control

Digital twins can also be integrated directly into the optimisation loop of Model Predictive Control (MPC), where they act as surrogate models for rapid trajectory prediction. This twin-in-the-loop approach allows the MPC to evaluate candidate control actions across the full prediction horizon without relying on computationally intensive simulation at each step.

A practical implementation of this concept is the Time Series Dense Encoder (TiDE) neural network, trained offline as a multivariate surrogate for plant dynamics. Once embedded in the control loop, the TiDE model produces complete state trajectories in a single forward pass, supporting fast multi-step forecasting. In a Directed Energy Deposition (DED) application, this setup improved temperature regulation, reduced signal oscillations, and lowered defect rates when compared to conventional PID controllers, while still maintaining the required geometric and quality specifications (Chen et al., 2025; Das et al., 2023).

Compa