

The Promise (And Realities) Of AI / ML

image credit: Photo by Isaac Smith on Unsplash

Artificial Intelligence has been getting a bad rap of late, with numerous opinion pieces and articles describing how it has struggled to live up to the hype. Arguments have centered around computational cost, lack of high-quality data, and the difficulty in getting past the high nineties in percent accuracy, all resulting in the continued need to have humans in the loop.

None of this is new for those of us who have been doing simulation and optimization for some time. When I started my career, I had to contend with naysayers who liked to poke holes in my models and complain about their accuracy. For me (and other believers), it was never about achieving a perfect match between the model’s prediction and the ground truth. Models were simply a means to get new insights that could take us in the general direction of goodness.

All of this brings us to a philosophical question: why do we use models? In my opinion, we use models to explore complex phenomena that are too difficult to wrap our heads around.

Let’s be clear – the human brain is a remarkable evolutionary creation capable of many things that we cannot possibly model (e.g., empathetic and ethical decision making). However, there are certain things we can do with mathematical models that the human brain cannot do. A good example is weather forecasts, which come from large and complex computational models that consider a huge number of atmospheric characteristics. While we often complain about their accuracy, we also appreciate that they are much better than what we would predict without their help.

In the same vein, AI & ML are simply tools for building complex (and sometimes non-linear) models that consider large amounts of information. They are most potent in applications where their pattern-finding power significantly exceeds human capability. If we adjust our attitude and expectations, we can leverage their power to bring about all sorts of tangible outcomes for humanity.

With this type of re-calibration, our mission should be to use AI to help human decision makers, rather than replace them. Machine learning is now being used to build weather and climate impact models that help infrastructure managers respond with accuracy and allocate their resources efficiently. While these models do not perfectly match the ground truth, they are much more accurate and precise than simple heuristics, and can save millions of dollars through more efficient capital allocation.
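To make the assistive-AI idea concrete, here is a minimal, hypothetical sketch (not taken from any specific utility's workflow) of how a simple machine-learning model might score storm-related outage risk so a planner can rank feeders for crew pre-staging. The feature names, synthetic data, and scikit-learn model choice are illustrative assumptions, not a prescribed implementation.

```python
# Hypothetical sketch of assistive AI for storm response: a model estimates
# outage risk per feeder so a planner can decide where to pre-stage crews.
# All features and data below are synthetic placeholders.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)

# Synthetic historical storm records: peak wind gust (mph), rainfall (in),
# and a vegetation-density index near the lines.
n = 2000
X = np.column_stack([
    rng.uniform(10, 90, n),   # peak wind gust
    rng.uniform(0, 6, n),     # rainfall
    rng.uniform(0, 1, n),     # vegetation density
])
# Outage count rises non-linearly with wind and vegetation (plus noise) in this toy data.
y = 0.02 * X[:, 0] ** 1.5 * (0.5 + X[:, 2]) + 2 * X[:, 1] + rng.normal(0, 5, n)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = GradientBoostingRegressor().fit(X_train, y_train)
print(f"Holdout R^2: {model.score(X_test, y_test):.2f}")  # imperfect, but informative

# Score tomorrow's forecast for a few (hypothetical) feeders and rank them.
forecast_per_feeder = np.array([
    [75, 2.0, 0.8],   # feeder A: high wind, heavy vegetation
    [40, 0.5, 0.2],   # feeder B: mild conditions
    [60, 3.5, 0.5],   # feeder C: moderate wind, heavy rain
])
risk = model.predict(forecast_per_feeder)
for name, r in zip("ABC", risk):
    print(f"Feeder {name}: predicted outage score {r:.1f}")
```

Even with an imperfect holdout score, the ranking of feeders by predicted risk can be more useful to a human planner than a simple heuristic, which is the spirit of the argument for assistive AI: the model informs the decision, the person still makes it.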

Vijay Jayachandran

Discussions

Matt Chester on Jan 22, 2021

"With this type of re-calibration, our mission should be to use AI to help human decision makers, rather than replace them"

This is important to say: AI isn't necessarily here to put certain jobs out of commission, but rather to make the jobs held by expert humans easier, faster, and more accurate. Do you think that messaging is finally getting across to steer people away from the fears of being completely replaced by AI?

Vijay Jayachandran on Jan 22, 2021

I think it is a mix. There are certainly applications like RPA where AI attempts to replace humans completely so that people can focus on higher-order tasks. However, where we will see a significant "upgradation" in outcomes (not just an efficiency improvement) is assistive AI.

Paul Korzeniowski on Feb 3, 2021

Good points. The disconnect seems to come from expectations. If a computer tells us something, individuals expect it to be right 100% of the time. AI and ML will never reach that number. But they will provide companies with probabilities north of the 50% mark, and in a growing number of cases above 80%. So when companies use them, they need to be open to the possibility that results may veer from the norm, which can be challenging when important business decisions rest on computer-based data interpretations. We expect far less than 100% from humans when they make projections, and we need to recognize that computers will not reach that number either.
