The Nuclear Safety Paradox

Meredith Angwin, Carnot Communications

Former project manager at Electric Power Research Institute. Chemist, writer, grandmother, and proponent of nuclear energy.

Jun 29, 2011


This is a guest post by Jeff Schmidt on an important subject: The Nuclear Safety Paradox. Schmidt’s previous guest post was Flawed Analogies, which examined common but misleading ways of describing nuclear plants.
—————
Over the past several months, a thought has been at the back of my mind about nuclear safety. I feel it is important to bring this issue into the ongoing discussion about nuclear power in our country.
There are many people who are opposed to new nuclear. They look at the events that have unfolded in Japan and worry that the same could happen here unless we quickly move away from nuclear power. To that end, they actively seek to slow or block the certification of new designs and the construction of new power plants.
Rod Adams of the Atomic Insights Blog has recently posted an example: “Friends of the Earth” seeking to stall certification of the AP1000 design. The AP1000, for folks who are not familiar with it, is a new design by Westinghouse Electric (a subsidiary of Toshiba) that adds an emergency passive cooling system to the light water reactor. This cooling would operate in the case of a complete loss of electric power for active cooling, as happened at Fukushima.
A passive cooling system uses basic physics to work. Passive cooling systems do not require any outside intervention, such as electric power, fuel, or other inputs. They work automatically, and they always work, because those principles of physics never change. Examples of passive cooling techniques include convection in the cooling fluid, air cooling, and gravity-fed water cooling. Such passive cooling will keep the reactor from melting down for an extended period of time when no outside power is available for pumping cooling water.
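To get a feel for the scale involved, here is a very rough back-of-envelope sketch. The reactor power, the constant ~1% decay-heat fraction, and the 72-hour window are my own illustrative assumptions, not any vendor's design figures; real decay heat starts higher and falls off over time.

```python
# Very rough back-of-envelope: water needed to carry away decay heat by boiling alone.
# All inputs are illustrative assumptions, not design figures.

thermal_power_w = 3.4e9        # assumed full reactor thermal power (~3,400 MWt)
decay_heat_fraction = 0.01     # decay heat taken as a constant ~1% of full power
duration_s = 72 * 3600         # assume a 72-hour period with no outside power
latent_heat_j_per_kg = 2.26e6  # latent heat of vaporization of water (~2.26 MJ/kg)

decay_heat_w = thermal_power_w * decay_heat_fraction
heat_to_remove_j = decay_heat_w * duration_s
water_mass_kg = heat_to_remove_j / latent_heat_j_per_kg

print(f"Decay heat to remove: {decay_heat_w/1e6:.0f} MW")
print(f"Water boiled off over 72 h: ~{water_mass_kg/1000:,.0f} tonnes (~{water_mass_kg/1000:,.0f} m^3)")
```

A few thousand cubic meters is a large but entirely buildable elevated water tank, which is roughly the approach the passive designs take: no pumps, just gravity and boiling.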
This is, objectively, a good safety improvement over previous designs, such as those at the Fukushima Daiichi Nuclear Power Plant, or any of the reactors currently in use in the U.S. (I would add that GE-Hitachi Nuclear also has a new BWR design that adds passive emergency cooling, which they are trying to bring to market, and I believe other companies and researchers have further ideas for passive cooling systems in new designs).
So, why don’t we have any of these new, safer designs in the U.S.? Largely because we can’t get any new designs certified or built.
The Paradox
I have come to believe that in the U.S., we have a Nuclear Safety Paradox – namely, that because of our concern for safety, we are keeping older, less safe designs in active service longer, because new designs have not been, and are not being, certified and built. I realize that many of the people who are opposed to nuclear, and are attempting to block forward progress, truly feel that no nuke is good and no nuke can be safe.
In contrast, I believe that most Americans, like myself, do have faith that engineers can create safer designs, in time. I also believe that, while there are probably some good opportunities to put solar and wind power to use in our country, we are not at the technology level needed to deploy wind and solar on the scale required to completely replace nuclear. We may get there some day, or we may not, but we need much bigger advances in technology to reach a totally renewable solution than to build safer nuclear reactors.
Now, I don’t think the older designs present a large safety threat – after all, there has only been one meltdown in U.S. reactor history, Three Mile Island, and that was, in the end, from a safety perspective, a non-event. But there is still a risk that certain circumstances, very rare but not impossible, could cause a loss of cooling at the old reactors, and that in some circumstances they might not handle that loss of cooling as gracefully as a reactor with a passive cooling system. None of us want to face the prospect of having to evacuate a 10- or 20-mile zone around a nuclear plant for 6 months, or a year, or potentially longer. None of us want to see a natural disaster followed by a nuclear incident that delays cleanup and repair of the disaster’s damage for long periods.
In particular, I don’t think Vermont Yankee presents a big threat to Vermont, as I think Meredith has made many good arguments over the past several years about the safety of even the “old” generation of nuclear plants. Nuclear plants here in the U.S., including Vermont Yankee, have added additional safety features to, for example, prevent the buildup of hydrogen gas and the resulting hydrogen explosions – safety features apparently lacking at the Fukushima Daiichi reactors in Japan. We made those upgrades well BEFORE the Fukushima reactor meltdowns and explosions, to address some of the exact scenarios that the Japanese ran into. That tells me that, to a large extent, our engineering and regulatory systems are very actively working to prevent such a disaster.
The Missing Conversation
However, I suspect that we’d be having a much different conversation if there were plans to build new nuclear power plants around the country, and in Vermont. Nuclear power currently provides about 20% of the electricity generation in the United States. To take that offline, we need something to replace it. We could build natural gas (and, in fact, that is happening), but natural gas is not without its problems, either – environmental damage, deaths from gas explosions, and a supply which, while ample in the short term, does not promise long-term security. The natural gas marketers themselves only claim a 100-year supply, and that is including speculative, undiscovered resources. Also, that 100 years holds only if we don’t increase exports and domestic consumption. We can’t expect “cheap” natural gas to last forever. Wind and solar may someday be able to supplant nuclear power, but there are enormous technical and financial challenges, larger even than for nuclear, in trying to do a truly massive build-out of wind and solar.
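To put rough numbers on what replacing nuclear with gas would mean, here is a simple back-of-envelope sketch. The generation, heat rate, and consumption figures are my own approximate, circa-2010 assumptions, not anyone’s official numbers.

```python
# Rough estimate of the natural gas needed to replace U.S. nuclear generation.
# All inputs are approximate, circa-2010 values chosen for illustration.

nuclear_generation_kwh = 800e9      # U.S. nuclear output, roughly 800 billion kWh/yr (~20% of generation)
ccgt_heat_rate_btu_per_kwh = 7000   # assumed combined-cycle gas plant heat rate
gas_energy_btu_per_cf = 1030        # assumed energy content of pipeline gas per cubic foot
us_gas_use_tcf_per_yr = 24          # assumed total U.S. gas consumption, ~24 Tcf/yr

gas_needed_tcf = (nuclear_generation_kwh * ccgt_heat_rate_btu_per_kwh
                  / gas_energy_btu_per_cf / 1e12)

print(f"Gas needed to replace nuclear: ~{gas_needed_tcf:.1f} Tcf/yr")
print(f"Increase over current U.S. gas use: ~{100 * gas_needed_tcf / us_gas_use_tcf_per_yr:.0f}%")
```

On those rough assumptions, swapping nuclear for combined-cycle gas would raise U.S. gas consumption by something like a fifth, eating further into that “100 year” supply.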
I’ve heard some people compare Fukushima Daiichi (and before that, Chernobyl) to the Titanic. They like to say that “the Titanic was a new, safer design – until it sank on its maiden voyage.” But we didn’t stop designing or building ships because of the Titanic, and I think everyone would agree that large commercial ships have gotten much, much safer over the years – both as a result of improved design and of improved operational practice. I truly believe that with iteration (that is, the design and construction of new generations of technology after learning lessons from previous generations), all technology gets better with time.
This is almost surely true of solar and wind as well, but today nuclear reactor technology is better positioned to provide that power than solar and wind. I would also rather see the market pick winners and losers than a system that hamstrings one solution (nuclear) while pushing forward another, based on an inflated sense of risk and fear.
I’ve had the privilege of growing up in a period of extreme technological advancement in nearly every field of engineering, but perhaps none is more illustrative of the power of iterative improvement than computers and electronics. Since the 1970s, we’ve probably had close to 40 generations of computer technology (about one generation per year). Computers have gotten staggeringly faster, with more storage space, better reliability, more RAM, amazing graphics, very high-speed networked communications, high-quality sound, and much smaller physical size, all at orders of magnitude lower cost. This is the result of lots of iterative improvement.
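As a toy illustration of what that kind of iteration compounds to, take the classic “doubling roughly every two years” rate as a stand-in; the exact rate is illustrative, the point is the compounding.

```python
# Toy compounding illustration: a doubling roughly every two years, sustained for 40 years.
# The doubling period is a stand-in; the point is how iteration compounds.

years = 40
doubling_period_years = 2

total_improvement = 2 ** (years / doubling_period_years)
print(f"Improvement after {years} years of iteration: ~{total_improvement:,.0f}x")
# -> roughly a million-fold, i.e. about six orders of magnitude
```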
Where are the Iterations? Where are the New Nuclear Plants?
If a gigawatt or two of new, safer nuclear power plants had been built in Vermont over the last 10 or 20 years, I bet neither Meredith nor anyone else would be trying to keep Vermont Yankee running, because there would be something better in place already, and it would simply be time to retire that particular plant. The most natural way to get rid of old nuclear power plants is to build new, improved nuclear power plants to replace them. Without replacement, the result is (and we are seeing this all over the country) that we keep older plants online longer (though those older plants are upgraded and updated with new safety features, new pipes, new turbines, etc., to keep them as safe as possible).
The most natural way to make nuclear safer is to increase the rate of iteration of generations of the technology. Of course, we need to go at a slow enough pace that we aren’t risking disasters, but I think we can do better than 30 years per generation. I think the key is the standard we hold the NRC to: we can enable iterative improvement not by giving it a mandate to ensure ‘perfect’ safety, because it really can’t ensure perfect safety, but rather by asking, “Are the new designs being considered at least AS SAFE as, or SAFER than, current designs?”
That is how you achieve progress in any field of engineering – not “Is it perfect, right now?” but “Is it better than what we already have?” Perfection is a goal we are always chasing, never achieving. This is why computers can keep getting better and better and better, and why nuclear reactors could keep getting better, and better, and better.
Ending the Paradox
Let’s end the nuclear safety paradox by getting new, safer reactors built to replace the older reactors, and by giving the NRC the resources, people, and mandate to improve and speed up the certification process. We should enable a fairly rapid iteration of improved generations of nuclear reactor technology. As with other technologies, new improved generations will fairly quickly replace the old generations, leading to ever safer nuclear reactor designs.
AP1000 illustration courtesy of Westinghouse, via Wikipedia. Vermont Yankee photo also from Wikipedia.
Discussions
Nathan Wilson on Jul 4, 2011

ChangeItOrDrownIt, I’m quite certain that the $5.59/W cost for CSP does not include energy storage or the associated increase in collection area.  The slide that you linked says it is based on US installations from 2009, but there were no major plants installed that year, just two small ones totaling 12 MW, and I don’t believe either had storage.

For comparison, the new Gemasolar CSP plant in Spain has enough storage and collection area to run 24 hours per day in the summer, has an annual average capacity factor of 63%, and has a cost of about $33 per average Watt delivered (nearly 5x the price of the Vogtle nuke).

http://theenergycollective.com/nathan-wilson/58791/20mw-gemasolar-plant-elegant-pricey 

Stephen Gloor on Jul 5, 2011

Nathan Wilson – “has a cost of about $33 per average Watt delivered (nearly 5x the price of the Vogtle nuke).”

I see you are doing this again.  The representative all-up cost of any generating plant is calculated from the final cost divided by the nameplate capacity.  As any plant at this stage has not had a chance to run, you cannot include the CF in this calculation.  You say the cost for this plant is $20.95/W, which is not out of the range for FOAK plants.  Subsequent plants built from the same design are very likely to cost a lot less than this.

Also, the final cost seems much lower than the cost you found. All these articles seem to agree on EUR 171 million, and I cannot find any that agree with the figure you gave.  I think the Daily Mail gave the cost in pounds, which you then converted to US dollars:

http://www.pv-tech.org/news/torresol_commissions_19mw_gemasolar_csp_plant_with_central_tower_and_therma

http://solar.cleantechnology-business-review.com/news/torresol-energy-launches-199mw-gemasolar-csp-facility-in-spain-260511

http://energy.worldconstructionindustrynetwork.com/news/torresol_energy_constructs_gemasolar_csp_plant_in_spain_110526/

“The Gemosolar CSP plant featuring central tower receiver with thermal storage capabilities has received €171 million ($242.5 million) finance through several European financial institutions like Banco Popular, Banesto, ICO and the European Investment Bank.”

The US dollar has varied over the years, so the cost of EUR 171 million has ranged from about USD $271 million (at 0.63) to USD $206 million (at 0.85), putting the FOAK cost in a range of USD $10.33 to $13.55 per watt, which is much more reasonable than the USD $419 million / $33/W that you were quoting.

Where capacity factors come into play is the calculation of cost per kWh, where a projected cost for various capacity factors can be worked out to see how viable the plant is.  CF has no place in the cost-per-watt calculation, as that calculation is purely to get an idea of how much different types of plants cost to build in comparison to others.  I know that nuclear advocates often divide the cost/W by the assumed CF of a renewable plant to try to inflate its costs and minimise the cost of nuclear; as I have pointed out over the years this is invalid, but it comes up now and again.

Nathan Wilson on Jul 6, 2011

Stephen, the cost per nameplate watt is a simple and effective way to make comparisons for the same technology over time (e.g. solar PV).  However, it is useless for comparing different technologies (e.g. fixed-tilt PV vs. tracking PV vs. CSP with storage vs. wind), as it does not include the capacity factor.

The levelized cost does include the capacity factor, but it is much more complicated and confounds the technology cost with other factors, such as interest rates and government incentives, yielding a number that cannot be compared without correcting for a list of assumptions.

The levelized cost is usually given in government-funded studies, but is almost never published for commercial projects.  However, even for commercial projects, the total cost and sufficient data (location) to estimate the capacity factor are usually provided.

The capacity factor of solar projects is always estimated before any construction is started (at least in the US), since the US DOE’s NREL has a very nice set of tables that provide the necessary solar data for any part of the US, for both tracking and fixed-tilt applications.

As a result, the best simple metric for comparing renewable technologies and projects is the cost of an average delivered Watt.
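To make the two metrics concrete, here is a minimal sketch using only the figures already quoted in this thread (the ~$242.5 million conversion of EUR 171 million, the ~19.9 MW nameplate rating given in the linked articles, and the 63% capacity factor); none of these inputs is independently verified here.

```python
# The two metrics under discussion, computed from the figures quoted in this thread.
# Inputs are the numbers already cited above, not independently verified.

project_cost_usd = 242.5e6   # the EUR 171 million figure as converted in the quoted article
nameplate_w = 19.9e6         # Gemasolar's nameplate rating per the linked articles, ~19.9 MW
capacity_factor = 0.63       # annual average capacity factor quoted above

cost_per_nameplate_watt = project_cost_usd / nameplate_w
cost_per_avg_delivered_watt = project_cost_usd / (nameplate_w * capacity_factor)

print(f"Cost per nameplate watt:         ${cost_per_nameplate_watt:.2f}/W")
print(f"Cost per average delivered watt: ${cost_per_avg_delivered_watt:.2f}/W")
```

Running the same arithmetic with the higher ~$419 million cost figure gives roughly $21 per nameplate watt and $33 per average delivered watt, which is where the two sets of numbers in this thread diverge.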

Nathan Wilson on Jul 6, 2011

Jeff, this is a good article. 

One minor correction I would suggest: the US DOE’s NREL has done a reasonable amount of research showing that wind and solar could replace the 20% of our electricity that comes from nuclear, and, in fact, could probably go as high as 30-40%. 

The important distinction is that nuclear is a scalable technology that currently provides about 80% of France’s electricity and could do the same for the US.  Furthermore, nuclear power can be used to provide process heat for industrial applications (e.g. chemical and biofuels processing).  Nuclear also gets around the uneven geographic distribution of renewables.

The result is that nuclear can potentially displace a much larger percentage of our fossil fuel use, while having a much smaller ecological footprint compared to current renewables.

Stephen Gloor on Jul 6, 2011

Nathan Wilson – “As a result, the best simple metric for comparing renewable technologies and projects is the cost of an average delivered Watt.”

So what do you set as the average CF of nuclear?  Do you use the 0.75 average of France, which has the most nukes, including load followers; the 0.90 of US power plants; or the 0.40 of a new plant that has teething problems and is down for a year? Basically, if you include CF in that calculation you can make the cost whatever you want depending on the CF you select, which makes the comparison useless.

The standard definition for overnight cost is cost/nameplate.  For all-up cost it is (cost + interest, etc.)/nameplate. If you want to include CF, you have to do the extra work and model a period of operation that includes the anticipated CF along with fuel, staff, etc., to get a cost per kWh.
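As a minimal sketch of how CF enters a cost-per-kWh estimate while staying out of the cost-per-watt figure (fuel, O&M, and financing ignored; all inputs purely illustrative):

```python
# Minimal illustration: capacity factor enters the cost-per-kWh estimate,
# not the cost-per-nameplate-watt figure. Fuel, O&M, and financing are ignored;
# all inputs are purely illustrative.

def capital_cost_per_kwh(capital_cost_usd, nameplate_kw, capacity_factor, lifetime_years):
    # Lifetime energy produced at the assumed capacity factor
    lifetime_kwh = nameplate_kw * capacity_factor * 8760 * lifetime_years
    return capital_cost_usd / lifetime_kwh

# The same $5/W overnight cost on a 1,000 MW plant, at different anticipated CFs:
for cf in (0.90, 0.63, 0.40):
    print(f"CF {cf:.2f}: ~${capital_cost_per_kwh(5e9, 1e6, cf, 40):.3f}/kWh (capital only)")
```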

Trying to use the cost of an average delivered watt is just lazy and open to manipulation for whatever purpose you have in mind.  If you don’t like the standard definitions, you cannot just make up your own – unless you are Microsoft, of course.
