
Innovations: A Personal View

When the boss pronounces "we need to be innovative", it is easy to read that diktat as a spur to invent a better mousetrap, think outside the box, and, by the way, earn your keep. Yes, it is an overused word. Innovation is a team effort: it is about problem solving, and it invariably takes more than one person. Innovation in a company always requires a champion at a high enough executive level. Her job is not necessarily to be a subject matter expert, but to have a good line of sight to what the innovation could lead to: a better mousetrap, no doubt, but also a gut feeling that her staff is up to accomplishing the feat.

I have been in the power industry for over 40 years in various capacities: an early researcher in nondestructive evaluation (NDE) of materials and components, then in instrumentation and control, broadening into operations and maintenance technologies, and finally as an executive at a utility company with "innovation" somewhere in my title. While our group was innovative in several areas of the grid (smart "wires" to manage bulk power flow, transportation electrification), those successes were due to team contributions. Early in my career I was a strong proponent of digitizing data in the field, using then-novel PCs and a digital recorder (the Biomation 2000) as an active assistant. Today digital data are in vogue, yet utilities have had, and will continue to have, a wary view of data. A regulated utility has many obligations in how it manages, processes, and communicates data, and it can be penalized if lapses are not disclosed. Hence the general skittishness about data.

Digital signal processing was my favorite subject. In the 1970s I was intrigued by fast means of computing the frequency content of signals, which made spectral analysis practical. I would guard my box of punched cards that executed the fast Fourier transform efficiently on mainframe computers. As computing power rapidly grew, signal processing became more sophisticated, allowing large amounts of data to be analyzed on PCs. In the early 1990s I wrote a primer on its applications to NDE. During those early years I also worked on various defense applications, most significantly in passive undersea warfare. Few people know how much the Cold War, that tussle between the US and the Soviet Union which ended with a clear winner, owed to the US's superior signal processing techniques. The US could track the opponent's undersea assets anywhere, anytime, with a precision the other side could not match, exposing the weakness of its defenses. The technique is called "correlation processing": a time-tested procedure for measuring the similarity between signals.
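To make the idea concrete, here is a minimal sketch of correlation processing, nothing like a real sonar pipeline: cross-correlate two signals, normalize, and read off the lag of the peak. The delayed-sine demo and every parameter value are my own illustrative assumptions.

```python
import numpy as np
from scipy import signal

def estimate_delay(x, y):
    """Cross-correlate y against x; return (peak_similarity, lag_in_samples).

    A normalized peak near +/-1 means the signals are nearly identical
    up to a time shift; a positive lag means y lags behind x.
    """
    x = np.asarray(x, float) - np.mean(x)
    y = np.asarray(y, float) - np.mean(y)
    # FFT-based correlation: the fast Fourier transform is what makes
    # correlating long records computationally cheap.
    corr = signal.correlate(y, x, mode="full", method="fft")
    corr /= np.linalg.norm(x) * np.linalg.norm(y)  # bound the peak by 1
    lags = signal.correlation_lags(len(y), len(x), mode="full")
    k = np.argmax(np.abs(corr))
    return corr[k], lags[k]

# Demo: y is a noisy copy of x delayed by 37 samples.
rng = np.random.default_rng(0)
t = np.linspace(0, 1, 1000)
x = np.sin(2 * np.pi * 5 * t)
y = np.roll(x, 37) + 0.2 * rng.standard_normal(t.size)
peak, lag = estimate_delay(x, y)
print(f"peak similarity {peak:.2f} at lag {lag} samples")  # ~0.9 at lag 37
```

Despite the noise, the correlation peak recovers the 37-sample delay: that robustness to noise is what made the method so effective for passive tracking.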

I was intrigued by how some of the same methods could be used to track plant process parameters (temperature, pressure, flow, vibration): measure their similarity under normal conditions and observe how that similarity is perturbed when the process changes or specific equipment is underperforming and heading toward failure. The proposition was this: some plant process parameters are very closely correlated in their behavior. Feedwater flow is correlated with turbine pressures; vibration signals in critical pumps are correlated with power at the busbar; and so on. When things are not normal, that correlative relationship is disturbed in specific ways. Mapping how critical components and processes relate via the correlation measure gives plant operators a powerful diagnostic tool for determining the likely cause when anomalies are detected.
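As a toy illustration of the idea, not the deployed system, the sketch below tracks the rolling correlation between two synthetic signals standing in for feedwater flow and turbine pressure, and raises a flag when the normal relationship breaks down. The window length, alarm threshold, and injected fault are all assumptions for the demo.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(1)
n = 2000

# Synthetic plant data: turbine pressure normally tracks feedwater flow.
feedwater_flow = 100 + 5 * np.sin(np.linspace(0, 20, n)) + rng.normal(0, 0.5, n)
turbine_pressure = 0.6 * feedwater_flow + rng.normal(0, 0.3, n)

# Inject an anomaly: over the last 300 samples the pressure decouples from
# the flow (think of a fouled sensor or a degrading valve).
turbine_pressure[-300:] = turbine_pressure[-300] + rng.normal(0, 0.3, 300)

df = pd.DataFrame({"flow": feedwater_flow, "pressure": turbine_pressure})

# Rolling correlation over a 120-sample window: near +1 while healthy,
# collapsing toward 0 once the relationship is disturbed.
rolling_corr = df["flow"].rolling(window=120).corr(df["pressure"])

# Flag windows where the normal correlative relationship breaks down.
alarm = rolling_corr < 0.8
print(f"first alarm at sample {alarm.idxmax()}" if alarm.any() else "no alarms")
```

The design choice is the point: the alarm keys on a *change in the relationship* between signals, not on either signal crossing a fixed limit, so it can catch a developing problem while both readings still look individually plausible.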

Over a three-decade period I was fortunate to work with several global utilities, assisting in implementing the technology. The value proposition was to use automation to detect problems and to use the burgeoning internet to bring centralized expertise to decentralized assets. In that period, more than 200 GW of generation assets, including many types of thermal plants, have benefited greatly [1].

The electric power industry is unique in its business structure. Most utilities are monopolies overseen by a public utility commission, which rules on the rate of return the utility owner receives for providing reliable electricity at a reasonable price. Under this compact, the areas where utilities are most interested in reducing costs are unplanned outages and maintaining high availability. Detecting anomalies and acting before they become serious is precisely where utilities can benefit in managing those costs.

My passion today is to impart my knowledge to the younger generation entering the energy industry, which is heading toward being fully digital, decentralized, and deeply decarbonized. The skill sets needed are different from when I first entered: the fuels used for electricity generation will be required to be low- or zero-carbon; students are expected to be adept in power flow equations as well as machine learning and business processes; and, very importantly, they will have to be adept at communicating with the public.

Climate change is the seminal challenge facing the electric grid this millennium. Should we expect the reliability we have been used to in the previous century? Will we have to modify our behaviors to accommodate the realities of a zero-carbon grid? Innovation no doubt will provide the answers!



[1] "Correlation Processing: Big Data at Work," Public Utilities Fortnightly, February 2014.
