EnergyIoT Article 2 – Architectural Challenges to the Energy Transformation
image credit: Image by Stuart McCafferty - No copyright
- Apr 25, 2019 3:00 pm GMT
By Stuart McCafferty, David Forfia, and Eamonn McCormick
Disclaimer: The viewpoints in this article and others in the series are the personal views of the authors and in no way are meant to imply or represent those of the companies they work for.
In today’s electric power ecosystem, adding new capabilities is tedious, time-consuming, and expensive. A great example is the move toward Advanced Distribution Management Systems (ADMS). An ADMS deployment takes years and tens of millions of dollars of investment. And, when it is finally integrated, it is a constant challenge and expense to add new IT and grid assets – just to maintain marginal situational awareness of the system.
Consider this example: when a homeowner purchases a photovoltaic (PV) system, he must first have the system installed by a certified electrician following the local building code process, then register it with the local power company and wait in line for an interconnect agreement, which can take anywhere from several days to several weeks or even more.
If the same homeowner purchases a new television set, he does not need to do anything more than plug it in and connect it to his cable or satellite box. The cable/satellite company doesn’t care. And, the system is up and running in minutes with no hassle and no delay.
Why is the grid so different from every other connected service? Is it that much more complex? Is it because it is a monopolistic ecosystem? Or, perhaps, is it because we have not evolved the architecture, policy, and processes to enable a much more dynamic, democratic, and consumer choice driven marketplace?
No one will argue that operating the grid is easy. We must maintain synchronization and appropriate voltage, current, and real and reactive power levels across the entire system for power quality and reliability. But we have also been doing this for over 100 years and have learned a great deal about how to operate the grid. It is time to leverage everything we have learned about moving electrons from generation to load, apply new technologies that dramatically simplify adding and removing grid components, and create a new ecosystem in which PV, demand response, battery, and electric vehicle (EV) assets can be added as easily as plugging in a new television. Imagine instead an ecosystem that allows a consumer to purchase a PV system at Home Depot, either pay someone to install it or do it themselves, and when the consumer plugs it in:
- It ANNOUNCES itself
- It DESCRIBES itself
- It PROVISIONS itself
- It COMMISSIONS itself
And minutes after it is installed, the utility knows what it is, where it is, and what capabilities it has. The power flow model is immediately updated to include the new “node”, and the consumer can immediately leverage its ability to convert solar radiation into electrons and perhaps even bid it into a local distribution market.
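The announce/describe/provision/commission flow above can be sketched as a simple state machine. The message shapes and the DeviceRegistry API below are purely illustrative assumptions; a real deployment would follow a standard such as IEEE 2030.5 or OpenFMB rather than this ad hoc scheme.

```python
import uuid

class DeviceRegistry:
    """Hypothetical utility-side registry that walks a new asset through onboarding."""

    def __init__(self):
        self.devices = {}

    def on_announce(self, msg):
        # Step 1: ANNOUNCE - the asset introduces itself with a unique identifier.
        self.devices[msg["device_id"]] = {"state": "announced"}

    def on_describe(self, msg):
        # Step 2: DESCRIBE - the asset reports its type, rating, and capabilities.
        d = self.devices[msg["device_id"]]
        d.update(type=msg["type"], rated_kw=msg["rated_kw"],
                 capabilities=msg["capabilities"], state="described")

    def provision(self, device_id, circuit):
        # Step 3: PROVISION - the utility assigns the asset a place in the power flow model.
        self.devices[device_id].update(circuit=circuit, state="provisioned")

    def commission(self, device_id):
        # Step 4: COMMISSION - the asset is live and may participate in markets.
        self.devices[device_id]["state"] = "commissioned"


registry = DeviceRegistry()
dev_id = str(uuid.uuid4())
registry.on_announce({"device_id": dev_id})
registry.on_describe({"device_id": dev_id, "type": "pv_inverter",
                      "rated_kw": 7.6, "capabilities": ["export", "volt_var"]})
registry.provision(dev_id, circuit="feeder_12")
registry.commission(dev_id)
print(registry.devices[dev_id]["state"])  # commissioned
```

Minutes after plug-in, the registry holds everything the utility needs to update its model – which is the whole point of the four steps.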
This is not science fiction. In theory, we could do this today. However, the existing architectural constructs and legacy siloed utility systems make it extremely unlikely, perhaps impossible. We have a proposed solution that leverages the DevOps capabilities companies like Amazon, Google, Microsoft, Red Hat, and Alibaba have used for many years to let their systems and their customers’ systems elastically scale to enormous dimensions simply, elegantly, and practically. More on that in Articles 3 through 6.
The roots of our architectural challenges derive from how the grid has evolved over the past 100+ years. The grid represents a multi-generational investment in a “top down” architecture that delivers energy from central generation stations at the “top” to loads “down” the line, from the transmission grid to the distribution grid all the way to the energy consumer. The result is an electrically synchronized “machine” with power flowing downhill from high voltage to low voltage across a vast network of wires, transformers, and switches – a hub-and-spoke architecture in which transmission grids radiate “branch” distribution networks that direct the energy “downhill”, eventually serving the loads.
The problem is that this is not how the transformed grid will work. In fact, many would argue it is just the opposite. The transformed grid will have large numbers of distributed assets that include distributed generation, demand response, energy storage, electric vehicles, and devices not yet developed. These new distributed assets are growing at a rapid pace and will continue that trend as Distributed Energy Resource (DER) prices drop, electricity prices increase, policy changes turn adoption into law, and the societal “call to action” to address climate change becomes more compelling. It is time to think differently.
Here’s a Crazy Idea - Think of Everything as a Microgrid
If we can agree that the future grid is a bottom-up model, then take the next step and consider each building block that makes up the grid as a microgrid – distribution networks, feeders, neighborhoods, buildings, cars, etc. These microgrid building blocks make up the overall grid, can manage themselves, may have markets associated with them, and can run independently of a grid connection as an electrical “island” for some period of time. The grid as a whole becomes a “collective of microgrids” that can operate independently and in cooperation with one another. This type of thinking is already occurring at progressive utilities like San Diego Gas and Electric, where there are plans to pilot 10 MW batteries at some feeder locations to support microgrid capabilities, including islanding. An electric vehicle is a great example of a “mobile microgrid”: when it is plugged in, it is an architectural component and actor in a larger microgrid; when it is not, it is an islanded microgrid capable of operating independently for some period of time. This “distributed intelligence” begins at the grid edge and propagates outwards and upwards to support larger and larger grid structures.
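The “collective of microgrids” idea maps naturally onto a recursive composite structure: a microgrid contains microgrids, and properties like net power aggregate upward. The class below is a minimal sketch under assumed conventions (positive net_kw means a generation surplus); it is not an industry model.

```python
class Microgrid:
    """A grid building block that may itself contain smaller microgrids."""

    def __init__(self, name, net_kw=0.0, can_island=False):
        self.name = name
        self.net_kw = net_kw          # local generation minus local load (kW)
        self.can_island = can_island  # can run disconnected for some period
        self.children = []            # nested microgrids (homes, EVs, ...)

    def add(self, child):
        self.children.append(child)
        return self

    def total_net_kw(self):
        """Aggregate net power recursively: the grid as a collective of microgrids."""
        return self.net_kw + sum(c.total_net_kw() for c in self.children)


feeder = Microgrid("feeder_7", can_island=True)        # feeder with a pilot battery
feeder.add(Microgrid("home_42", net_kw=3.5))           # rooftop PV surplus
feeder.add(Microgrid("ev_1", net_kw=-7.2, can_island=True))  # charging EV, a "mobile microgrid"
print(round(feeder.total_net_kw(), 1))  # -3.7
```

Because the structure is the same at every level, the same logic serves a car, a building, a feeder, or a whole distribution network.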
What’s Wrong with the Architecture We Have Now?
Even without taking the mental leap to “everything is a microgrid”, the change from a one-way power flow model to bi-directional flow is today’s reality. Today’s grid operates differently from the way it was designed. And policy mandates – rooftop solar required on all new construction in California, and renewable/clean energy targets in numerous states and even the US military – are accelerating the change. Despite this, most utilities do not have a clear picture of what assets exist Behind The Meter (BTM) and are experiencing extremely challenging operational issues such as the “Duck Curve”: two-way power flow can result in overgeneration that exceeds thermal limits, and in unachievable generator ramp rates when demand increases just as photovoltaic assets stop producing electricity at sunset. These issues endanger grid safety and reliability, and can result in circuits tripping off to protect grid equipment, leaving customers in the dark.
Current State Issues
Consider how the current grid (and its architecture) meets the needs of the next generation of distributed energy:
Table 1: Current Grid Obstacles
Provide reliable and affordable electricity
On the surface, the current “top down” grid provides relatively cheap power reliably. However, this is only true if we do not assign economic costs to emissions. The Paris Agreement, signed by 175 countries in 2016, has escalated awareness of and agreement on addressing climate change head on through greenhouse gas (GHG) reduction. It may be only a matter of time before policies are adopted that assign a value or “tax” to atmospheric carbon contributors for their GHG impact. The modest “revenue neutral” carbon tax of $40/ton of CO2 suggested by some Republicans led by ex-Secretary of State James Baker in the Four Pillars of a Carbon Dividends Plan (“The Conservative Case for Carbon Dividends”) could nearly double the cost of wholesale energy.
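A back-of-envelope check makes the scale of that claim concrete. The emission intensities and the average wholesale price below are rough illustrative assumptions, not sourced figures; only the $40/ton tax comes from the proposal cited above.

```python
# Rough sanity check on "a $40/ton carbon tax could nearly double
# wholesale energy costs." All inputs except the tax are assumptions.

CARBON_TAX = 40.0          # $/ton CO2, per the Baker-led proposal
WHOLESALE_PRICE = 35.0     # $/MWh, assumed average wholesale price

# Approximate emission intensities (tons CO2 per MWh), assumed values.
intensity = {"coal": 1.0, "natural_gas": 0.45}

for fuel, tons_per_mwh in intensity.items():
    adder = CARBON_TAX * tons_per_mwh          # $/MWh added by the tax
    increase = adder / WHOLESALE_PRICE * 100   # percent increase
    print(f"{fuel}: +${adder:.0f}/MWh ({increase:.0f}% increase)")
```

Under these assumptions, coal-heavy wholesale power roughly doubles in cost, while gas-fired power rises by about half – consistent with the “nearly double” claim for carbon-intensive fleets.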
Fair and equitable for large and small alike
Even for the largest utility players, the current architecture is not working well. Utilities are struggling to integrate more than 20% renewables, meet policy objectives for clean power, and maintain grid reliability. Large utilities are also struggling with the high costs of integrating distribution automation systems, DER, third-party aggregators, microgrids, electric vehicles, and changes yet to come. The situation is even worse for municipal utilities, co-ops, and other smaller utilities, whose smaller customer bases must fund the sophisticated distribution grid automation and situational awareness required for safe and reliable operations.
Small producers and consumers have fewer choices due to policy, investment, and physical constraints. The lack of market participation options and rising electricity costs actually incentivize some organizations to build their own microgrids and opt out of the grid altogether.
Wholesale energy market rules include requirements on generator or load sizes, capitalization, and operational sophistication, making large “utility scale” options the only choice for companies that want to participate in wholesale markets. This creates unachievable barriers to entry for smaller players. In current transmission (ISO) markets, participants must “qualify” by meeting minimum local generation size requirements (e.g., 1 MW+). Owners of DER that do not meet those criteria – the vast majority – must work through an aggregator that does, or not participate at all.
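The aggregation barrier just described reduces to simple arithmetic. The 1 MW threshold matches the example in the text; the fleet sizes below are made up for illustration.

```python
# Sketch of the wholesale market participation barrier: no single DER
# clears the qualification threshold, so only an aggregated pool can bid.

QUALIFY_KW = 1000  # minimum qualifying generation size (1 MW, per the text)

# A hypothetical aggregator's fleet: rooftop PV, batteries, small C&I sites (kW).
fleet_kw = [7.6, 12.0, 50.0, 250.0, 400.0, 500.0]

qualified_alone = [kw for kw in fleet_kw if kw >= QUALIFY_KW]
pooled_kw = sum(fleet_kw)

print(qualified_alone)          # [] - no individual DER qualifies
print(pooled_kw >= QUALIFY_KW)  # True - only the aggregated pool can participate
```

Every asset in this fleet is locked out on its own; pooled under an aggregator, the same assets clear the bar – which is exactly why DER owners today have no path into wholesale markets except through aggregators.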
Democratic, secure, trusted, reliable, resilient and safe
Following the disasters in New Orleans and Puerto Rico (to name a couple), it is clear that the grid is vulnerable to climate-related catastrophes. There is also growing concern that centralized grids are prone to cascading cyber-attacks from criminals and “foreign entities”, both of which can include quite sophisticated actors posing significant threats to electric safety and reliability. A bottom-up hierarchy is still prone to attacks and catastrophic events, but damage can be isolated and kept as local as possible.
Provide solutions for the critical “deep electrification” challenges facing society
Reliable solar and DER integration - The industry is experiencing increasing reliability issues (the Duck Curve), and solar and DER assets behind the meter make complete situational awareness and management difficult.
Transportation electrification - Small numbers of EVs are currently manageable, but higher numbers will require different strategies and technologies to manage loads and protect the grid from overload. EV mobility means that vehicles can enter and exit both circuits and utility territories, complicating circuit loading constraints as well as billing.
Effective energy storage integration - Integration is costly and a “one-off” for every system. Command and control of energy storage requires “flipping registry bits”, which takes time, is prone to error, and makes troubleshooting complex.
Intelligent air conditioning and heating - Thermostat programs can be “gamed” by consumers ramping the temperature up or down prior to an event. When distribution markets arrive, a 2-degree bid into the market will have no meaning, and today’s thermostat programs will evaporate.
Business Model Innovation
One of the biggest challenges for utilities is business model innovation. The structural hierarchical model and regulated business model make it challenging for existing players to “think out of the box” and innovate with new services. Policy and regulatory reform is required that safeguards the interests of stakeholders but also enables utilities to extend services beyond the meter and leverage DERs to provide higher reliability, power quality, and economy for their customers.
Enables pathway to a sustainable energy future
According to the Intergovernmental Panel on Climate Change (IPCC), we have a single decade to transform our grid to a sustainable posture that can take us “net negative” on emissions by 2050 at the latest. The current top-down grid that relies predominantly on bulk generation precludes a realistic pathway to a sustainable future. Within that paradigm, the only way forward would amount to a global shift to nuclear power, which has significant barriers, takes years and billions of dollars to build, does not “ramp” well with fluctuations in load, and simply takes too long. A much more viable growth plan is to unleash exponential innovation via a more distributed, intelligent, and flexible grid.
DOE’s Concerns on “Siloed” Systems
Dr. Jeff Taft of PNNL and Paul De Martini of Newport Consulting address the “system-centric” architecture issue head on in their groundbreaking “Sensing and Measurement Architecture for Grid Modernization.” According to Taft & De Martini, the core problem is that grid systems, sensors, and data are hierarchical and rigidly bound to specific utility systems, often forming disjoint data sets that cannot be leveraged by other applications. The electric power architecture is hierarchical with a “system-centric” rather than “data-centric” design, forcing a siloed approach and leading to difficult and expensive systems integration challenges and orphaned data. The authors illustrate this example below, which not only shows the application silo “stacks” but also implies the complexity around data reuse and integration.
Figure 1: Traditional Grid Sensor System Structure (Source Taft & De Martini - Sensing and Measurement Architecture for Grid Modernization)
Taft and De Martini highlight that the essential structures are vertical, leading naturally to silos. These silos are the source of the fundamental limitations a new architecture must address. Volt-Var Control applications, Circuit Fault Indicator applications, Transformer Oil Analysis, and other such applications are all configured as silos, each with its own dedicated application system, communications systems, and data sets. The results manifest themselves in high costs, complex maintenance requirements, and little or no flexibility.
Taft and De Martini further identify that communications networks for grid sensors are generally hub-and-spoke or, in the case of AMI, local mesh to a hub-and-spoke backhaul (via cellular or substation), which is still effectively hub-and-spoke. Such communication systems are often not scalable, have low bandwidth, and are siloed along with the data collection head ends and application systems. In addition, grid communications systems (SCADA, AMI) are normally configured as “polling systems” in which the central system requests status from each grid asset. This round-trip communication paradigm unnecessarily burdens utilities with expensive high-bandwidth communications systems to support the heavy traffic. A better solution would be an “event-driven” grid whose assets automatically report their status, based on rules, only when something changes or on a timed interval (which is still an event). Event-driven systems could cut communications traffic in half or more, save data storage and analytics costs, and improve interoperability by removing siloed head-end solutions and making data easily available to authorized systems.
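The "report by exception" pattern behind an event-driven grid can be sketched in a few lines: an asset publishes only when a value moves beyond a deadband or a heartbeat interval elapses, rather than answering a constant polling loop. The deadband and heartbeat values below are illustrative assumptions.

```python
class EventDrivenSensor:
    """Grid asset that reports by exception instead of being polled."""

    def __init__(self, deadband=0.5, heartbeat_s=300):
        self.deadband = deadband        # minimum change (e.g. volts) worth reporting
        self.heartbeat_s = heartbeat_s  # max silence before a timed report
        self.last_value = None
        self.last_report_t = None

    def sample(self, t, value):
        """Return a report dict if an event fired, else None (no traffic)."""
        changed = (self.last_value is None
                   or abs(value - self.last_value) >= self.deadband)
        stale = (self.last_report_t is not None
                 and t - self.last_report_t >= self.heartbeat_s)
        if changed or stale:
            self.last_value, self.last_report_t = value, t
            return {"t": t, "value": value}
        return None


# Five voltage samples over ten minutes; only three generate traffic:
# the first reading, one real change, and one heartbeat.
sensor = EventDrivenSensor()
readings = [(0, 120.0), (60, 120.1), (120, 120.2), (180, 121.0), (600, 121.1)]
reports = [r for t, v in readings if (r := sensor.sample(t, v))]
print(len(reports))  # 3
```

Even in this tiny example the sensor suppresses 40% of the messages a polling system would have carried; over millions of endpoints, that is the traffic reduction the paragraph above describes.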
The industry is essentially stuck. The old paradigm of a top-down, hierarchical, system-centric grid and vendor-driven solutions is simply not capable of moving the industry forward. Despite what some key industry vendors would have us believe, most solutions are not scalable, are expensive, and create high-latency vertical silos. The cost of expanding and integrating these silos is extremely high, and the resulting solutions generally underperform, are brittle, and are expensive to maintain, while delivering suboptimal business outcomes. The industry must find a new way forward to break out of this pattern. We must “un-silo” systems for better interoperability, move to event-driven communication paradigms, and operate from a data-centric perspective so that authorized systems can leverage information for greater efficiency and economy.
This was the second in a series of EnergyIoT articles addressing the challenges we are experiencing and proposing a fundamentally different architecture to solve the problems of today and tomorrow. Our third article begins describing an alternative architecture. The article, “EnergyIoT Article 3 – EnergyIoT Domain Building Blocks”, will be published on Monday, April 29 on Energy Central and LinkedIn. It will be followed by deep dives in Articles 4-6 into the individual domains of the ecosystem.
David has been Chair of the GridWise Architecture Council since 2015 and a council member since 2013.
The GridWise Architecture Council (GWAC) is a team of industry leaders who are shaping the guiding principles of a highly intelligent and interactive electric system. The Council is neither a design team, nor a standards making body. Its role is to help identify areas for standardization that allow significant levels of interoperation between system components. More about the Council can be found at www.gridwiseac.org
David is the current chair of the Technical Advisory Committee and a former member of the Board of Directors of the Smart Electric Power Alliance. He was also Chair of the SGIP Board of Directors from 2015 until 2017, and served as a board member beginning in 2011.
In his current role, he is the Director of Technology Architecture and IT Transformation at the Electric Reliability Council of Texas (ERCOT). He began his career as Director of Information Technology Services at Austin Energy and was Deputy Director and Chief Information Officer for an $18B pension fund. He holds a BBA from the University of Texas at Austin and an MBA from St. Edward’s University.
Eamonn McCormick, Chief Technology Officer, Utilicast
Eamonn McCormick is the CTO at Utilicast, a leading energy industry consultancy. Eamonn is a passionate believer in the bright future of the energy industry and in the importance of collaboration as the foundation for solving our current industry challenges. He is a results-driven technology leader with a track record of success, having implemented strategic technology change at several large energy companies over the last twenty years, primarily in the areas of wholesale markets, transmission, and energy distribution. In addition, Eamonn is currently chief architect of the Energy Block Chain consortium.
Stuart McCafferty, IoT Architect, Black & Veatch Management Consulting
Stuart McCafferty is an accomplished Smart Grid technical executive with an innovative history; strong relationships in the utility and vendor communities; experience in business and partner development, platform and solution design, and go-to-market planning and execution; and a record of practical application of existing and emerging/disruptive technologies. Prior to B&V, he was VP of EnergyIoT for Hitachi America, where he led the architectural design of a distribution system platform supporting microgrid and Distributed Energy Resources (DER) related businesses. At B&V, Stuart supports the utility, technology, and vendor communities in strategy and the pragmatic application of DER, combining IoT best practices and technologies with energy standards and protocols.
Thought leader in the Internet of Things (IoT), Big Data, Cloud Computing, Artificial Intelligence (AI), Machine Learning, and connected home with practical application within the Smart Grid ecosystem. Expert in utility IT/OT and the application of DER and microgrids for resilience, economics, and reliability.
Stuart is a US military veteran, Air Force Academy graduate, an Energy Fellow for community resilience at the National Institute of Standards and Technology (NIST), an Energy “Expert” for Energy Central, and Vice Chair of the Open Field Message Bus (OpenFMB) user group.