

Potential Impacts of European AI Regulation on the American Energy Sector

Gloria Li
Master's Candidate in Data Science for Public Policy, McCourt School of Public Policy, Georgetown University

Hi! I'm a Master's Candidate in Data Science for Public Policy at Georgetown. Prior to this, I worked at Advanced Energy Economy for one year as a policy associate and interned at NextEra Energy.

  • Oct 1, 2020

This item is part of the Advances in Utility Digitalization - Fall 2020 Special Issue.

In February 2020, the European Commission published a White Paper foreshadowing the development of a comprehensive regulatory framework for artificial intelligence. This differs from the United States’ approach of minimizing regulatory barriers for artificial intelligence. Nevertheless, Europe’s upcoming regulations may have extraterritorial impacts on American companies, including energy companies, that are investing in artificial intelligence. I utilize a case study of the 2018 General Data Protection Regulation to provide insight into what these impacts may look like.


Artificial Intelligence (AI) is poised to revolutionize our modern lives and the functioning of many industries, including energy. As Dan Walker of BP’s Technology Group states, “AI is enabling the fourth industrial revolution, and it has the potential to help deliver the next level of performance.” Energy companies are actively exploring ways to use AI to optimize utility assets, the transportation of resources, and the utility customer experience.


However, big data capabilities entail big risks. As technology progresses, the global rise of AI will be closely monitored by governments seeking to strike the increasingly difficult balance between encouraging an innovative and competitive marketplace and protecting the rights and welfare of their citizens. AI introduces new dimensions to the familiar questions of privacy, nondiscrimination, human dignity and democratic accountability that accompany digital growth. These questions become particularly sensitive in regard to ratepayer and customer data.

In February 2020, the European Commission (“Commission”) released its “White Paper on Artificial Intelligence – A European Approach to Excellence and Trust” (“White Paper”), inviting commentary from the public as it moves forward with a comprehensive regulatory framework for artificial intelligence in the European Union (EU). The White Paper describes how the impetus for such a framework derives in part from AI progress in high-risk sectors such as energy, transportation, and healthcare.

I will first summarize the White Paper and describe how it differs from the United States’ approach to AI regulation thus far. Then, I will provide a case study of how European data regulation has affected US companies in the past and explore some potential implications that the EU’s new regulations may have on American energy companies (with or without operations and/or customers in the EU). The goal of this exercise is to extract key lessons from past instances of regulation to inform American companies that are investing in AI for the future.

Summary of White Paper

The past few years have seen a significant uptick in government sponsorship of AI Research and Development (R&D) in Europe. The EU plans to invest 1.5 billion Euros from 2018-2020 under its Horizon 2020 program and plans to pledge at least 1 billion Euros per year from 2021-2027 under the Horizon Europe and Digital Europe programs. With this increased investment comes a recognition of the need for additional policy guidance, and the Commission has generated a number of policy-oriented documents to this end.

The recent White Paper builds upon a 2018 Communication by the Commission that outlines a ‘European Initiative on AI’. The Communication directs EU Member States to collaborate to boost the EU’s AI uptake, to prepare for the subsequent socio-economic changes, and to ensure an appropriate ethical and legal framework based on the Union’s values. It also mentions that a regulatory framework for AI requires a balance between promoting innovation and ensuring protection and safety for citizens, noting that the Commission was in the process of assessing any gaps in national and EU safety and liability frameworks.

Other foundational AI policy documents were generated by the AI High-Level Expert Group (HLEG), an independent expert group set up by the Commission. HLEG recommends a precautionary principle-based approach to regulation and outcome-based policies. It delineates the existing legal frameworks that may be relevant and in need of an update, including rules around consumer protection, non-discrimination, and anti-competitive behavior. It also highlights particular domains that may trigger the need for new regulation, including AI systems built using children’s profiles and lethal autonomous weapons.

The White Paper builds upon these existing efforts to foster a single European market for AI and data, aligned under harmonious regulations and ethical guidelines. It explains that AI can bring various benefits to European society and economy, such as improved health care, better machinery, and safer and cleaner transport systems, if grounded in fundamental rights. To that end, the EU seeks to foster an ‘ecosystem of excellence’ and an ‘ecosystem of trust’. The White Paper declares that a “clear European regulatory framework would build trust among consumers and businesses in AI”, and goes a step further than previous documents by concluding that existing regulatory regimes do not adequately cover requirements for AI such as transparency, traceability, and human oversight.

Accordingly, the Commission recommends in the paper that legislation be improved to clarify transparency requirements, expand the scope of existing product safety legislation to include stand-alone software and services, and address uncertainty in allocating responsibility to different economic operators in the supply chain. This introduces the possibility of developing new conformity assessment mechanisms where they do not already exist and specifically mentions that these assessments would be mandatory for “all economic operators addressed by the requirements, regardless of their place of establishment”. The new AI regulatory framework would primarily target high-risk sectors for which state oversight and intervention may be critical; the energy sector is listed as an example.

It is evident that the new AI regulatory framework will have extraterritorial impacts on companies in other countries, including the United States, though the scope and magnitude will not be clear until the framework is released. A past instance of this sort of global impact is explored in the Case Study section.

U.S. Approach to AI Regulation

In recent years, the United States has taken a markedly different approach from the EU by seeking to limit the regulation of AI. In May 2016, the Subcommittee on Networking and Information Technology Research and Development (NITRD) released the initial National Artificial Intelligence Research and Development Plan under the Obama Administration, recommending the development of an R&D implementation framework and investigation into AI workforce development.  

Various federal agencies, including the Department of Defense, Federal Aviation Administration, and the Department of Transportation (USDOT), have already begun to review the AI developments that fall within their purview, but their actual regulatory capacity remains to be determined through the evolving interplay between federal and state authorities. For example, USDOT is responsible for setting motor vehicle safety standards that may impact the deployment of autonomous vehicles, but their licensing, registration, and other regulatory functions fall primarily to the states.

In February 2019, President Trump issued Executive Order 13859 outlining the ‘American AI Initiative’, which set an updated trajectory for the country.  The Order directed the Office of Management and Budget (OMB) to publish a ‘Guidance for Regulation of AI Applications’. The OMB issued a draft Memorandum in January 2020 and is currently finalizing the document after the public comment period closed in March. The three goals of these regulatory principles were to 1) ensure public engagement, 2) limit regulatory overreach, and 3) promote trustworthy AI.

The most notable takeaway from the Memorandum is that it focuses heavily on avoiding policies that hamper the growth of AI or “hold AI systems to such an impossibly high standard that society cannot enjoy their benefits”. The OMB points toward Executive Order 12866 (1993), which designates U.S. policy for regulatory planning and cost-benefit analysis, and urges agencies to consider AI regulation only 1) when it is necessary, and 2) after they have assessed the adequacy of existing local, state, or federal regulation. The Memorandum also emphasizes non-regulatory approaches agencies could pursue, including sector-specific guidance, pilot programs, and voluntary consensus standards being developed by the private sector.

Ultimately, as analysts at the Brookings Institution note, it will be difficult to assess the true impact and reach of the ‘American AI Initiative’ before comprehensive implementation details are released.  Based on the information so far, it is highly unlikely that we will see U.S. AI regulations comparable to the scale and scope of what the EU is planning. 

European Case Study: GDPR

Perhaps the most direct analog useful for predicting the potential effects of the EU’s new AI regulatory framework is the EU’s landmark General Data Protection Regulation (GDPR). The GDPR bore tremendous implications for diverse sectors of the global economy when it became enforceable in May 2018 after being adopted two years prior. It replaced the largely outdated Data Protection Directive enacted in 1995 and addressed issues that the previous Directive did not, including privacy issues related to the storage, collection, and transfer of personal data that had arisen from the technological advancements of the information age. One of the most prominent rights enforced in the GDPR is the “right to be forgotten”, or for individuals to request for their personal data to be erased.

Though the regulation’s protections extend to European data subjects—EU citizens and those living in the European Economic Area—the GDPR exercises extraterritorial jurisdiction over any non-EU establishment processing the personal information of data subjects. As a result, many U.S.-based companies were also required to update their business and data protection protocols to comply with the European rules. The fines for failing to adhere to the GDPR are hefty, reaching the higher of 20 million Euros or 4% of the violating company’s global revenue.

The consequences of GDPR implementation were manifold, but two of the primary ones were: 1) impacts on the market, and 2) global domino effects. First, the implications for the private sector became clear soon after the GDPR came into effect. After the regulation was adopted in 2016, it gave companies operating in the AI space two years to make the necessary investments to ensure compliance. A December 2016 PwC survey of 200 C-suite executives found that 77% of them planned to spend more than $1 million on GDPR readiness, while 26% of respondents planned to exit the EU market altogether.  Another survey of 1,000 companies found almost half of them admitting that they were not ready to be in compliance when GDPR came into effect in 2018.

If this regulatory burden was heavy for the dominant firms that the regulation intended to target (think: the Googles and Facebooks of the world), it was even heavier for small businesses and startup companies, many of which were unable to make the same level of investment to ensure compliance. The GDPR presented obstacles to the development of new technologies and led to an overall decrease in the amount of venture funding deals with tech firms, particularly newer startups. European regulators were not afraid to exercise their newfound power, either: in January 2019, France’s data protection authority fined Google $57 million for violating the GDPR by failing to clearly disclose its data collection methods.

Additionally, global effects were pronounced, even beyond the scope of extraterritorial jurisdiction. Shortly after the GDPR became enforceable, California adopted the California Consumer Privacy Act (CCPA) in June 2018, modeled closely after the European rule. Other states that introduced similar legislation included Hawaii, Maryland, Massachusetts, New Jersey, New York, and Washington.  GDPR-style laws followed in other countries as well, such as Canada, Argentina, and Japan. Given this trend, some businesses began to voluntarily standardize their privacy frameworks for global users rather than adopting different practices for individual market regions.

With the GDPR, we have seen Europe extend its regulatory reach in the global data trade far beyond its borders. This increasingly observed “Brussels Effect” reflects Europe’s growing ability to influence market regulation in other countries from its position of power as the world’s largest trading bloc.

Potential Impacts on the American Energy Industry

Returning to our original discussion of the White Paper, it is reasonable to assume that, despite federal reluctance to enact new regulations at home, the potential impacts of European AI regulation will reach U.S. borders just as European data regulation did in 2018. If this occurs, it may affect many American companies, including companies in the energy sector, ranging from those involved in energy services, smart home appliances, renewables, and oil and gas, to providers of software and hardware solutions. As the Commission has hinted previously, high-risk sectors such as healthcare, transportation, and energy will be the focal point of any binding regulations that are enacted. As with the GDPR, these rules will likely apply to any American company that uses AI and processes the data of EU citizens, has European customers, or has EU employees. The rules may also serve as a template for individual U.S. states looking to regulate AI.

There is no shortage of potential applications of AI in the energy sector; the Commission, when prioritizing energy as an area of AI interest, cited smart thermostats and smart grids as two examples. AI-enabled smart power grids and advanced metering infrastructure (AMI) will allow electric utilities to monitor and manage electric supply and demand in a way that minimizes service disruptions and the need for peaking resources.  Data analytics platforms utilizing AI will allow energy companies to predict and prevent transformer failures, provide ever-evolving demand-side energy solutions for homeowners and businesses, manage smart charging for electric vehicles, increase the peak efficiency of solar and wind farms, and much more. Even the very structure of energy market transactions is poised to change over time, with peer-to-peer energy transactions and the rise of the energy “prosumer”—an entity that generates its own power (e.g. with rooftop solar panels) and is able to supply it back to the grid.

Though AI technologies will undoubtedly take time to mature and reach any level of enterprise integration, companies looking to take advantage of these technologies should monitor regulatory developments that may impact their long-term usage.  With trends like AMI and smart home technologies, the volume of personal data collected by energy companies is set to increase dramatically, and that has already placed some of these companies within the purview of the GDPR. The methods used to collect, analyze, and utilize this data in the future may also place these companies within the purview of oncoming AI regulations, whether they are based in Europe with operations and/or customers in America or vice versa.

The European and American energy markets are becoming more and more intertwined as well. American exports of liquefied natural gas to European countries, including Spain, France, and the Netherlands, have increased significantly in recent years. American corporate giants such as Google and Amazon are purchasing large amounts of European electricity to power their operations, with more than half of all long-term renewables contracts signed in Europe since 2007 underwritten by American companies. Simultaneously, an increasing number of prominent European companies, including Italian renewables giant Enel and Danish wind developer Ørsted, have their sights set on U.S. soil (and waters!) for upcoming development projects. Dutch-British oil major Shell, already a household name in the U.S., is also further diversifying its reach within American energy markets with its 2019 acquisition of L.A.-based electric vehicle charging company Greenlots.

The most notable consequences EU AI regulations may have for American energy companies and energy companies with operations in the U.S. involve potential impacts on R&D, investment, and market growth. Previous studies found that industry sector and company size were important factors in GDPR readiness; the Energy & Utilities sector ranked third highest for collective readiness for the GDPR, after Financial Services and Technology & Software, respectively. However, smaller and very large companies saw themselves as less likely to be in compliance with GDPR by its effective date than mid-size companies, and this was reflected within the Energy & Utilities sector as well.  These same trends may also be reflected in cross-industry readiness for AI regulation.

As with the GDPR, the cost of compliance with EU regulations will be disproportionately large for smaller and/or newer energy firms developing AI solutions. This could become a barrier to entry to rapidly growing energy markets both in the U.S. and overseas, especially for startup companies. R&D of AI solutions may slow due to regulatory requirements that the technologies be explainable, as well as mechanisms such as the conformity assessments or voluntary “trustworthy AI certifications” considered in the White Paper.

These trends could eventually serve to create an artificial regulatory differential between energy technologies originating in Europe, the United States, China, and other countries, with an impact on global trade and investment patterns. For example, investment in companies that are more likely to incur penalties or high compliance costs due to AI regulation may decline.


European policymakers believe that a legal framework for AI will create regulatory certainty that gives European companies added value  in the global marketplace and prevents legal and market fragmentation between Member States.  However, many questions remain in terms of the impact on the rest of the world, including potential consequences of any artificial regulatory differential between AI technologies originating from different countries (United States, China, etc.) and impacts on future transatlantic cooperation and investment. What also remains to be seen are the types of enforcement mechanisms that will be included in the AI regulatory framework. Given the importance of “AI trustworthiness” going forward, negative media attention and consumer distrust may hurt a company’s global market outlook just as much as fines and regulatory penalties.

Thus, companies looking to stay ahead of the curve should be paying attention to when the EU’s regulatory framework is finalized. The current timeline points toward the next step of policy guidance being released in Q4 of 2020, barring delays caused by COVID-19.

In the digital age, data do not stop at national borders, and neither does the regulation thereof. In order for America’s private sector businesses, including energy companies, to protect the interests of their shareholders, customers, and society at large, they must prepare to be versatile and adopt digital best practices as new regulatory frameworks arise.

Matt Chester on Oct 1, 2020

"In recent years, the United States has taken a markedly different approach from the EU by seeking to limit the regulation of AI."

What do you think is behind this difference in approaches? Is it a cultural difference? A function of the EU as an international entity compared with a single nation like the U.S.? Differing regulations already on the books? 

Gloria Li on Oct 2, 2020

Hi Matt, that's a great question. There are certainly cultural differences in regulatory philosophy at play, with the EU being a supranational governing body and the U.S. traditionally favoring more of a states-based approach. Also, Europeans are generally more inclined to admit an active role for government to address social problems, including in industry. As a result, the U.S. and the EU have diverged markedly in recent years in regard to data and privacy regulation, which will have interesting implications for companies that have global operations. 
