Part I: DeVolpi on Expert Knowledge, Scientific Knowledge and Ethics
Amory Lovins has chutzpah. Lovins, a Harvard and Oxford physics dropout, believed that he could argue with Alvin Weinberg about reactor technology. Not long ago he showed up at Argonne National Laboratory, where he attempted to argue with Alexander DeVolpi about nuclear proliferation. This was a big mistake, first because DeVolpi ought to be regarded as one of the world's leading authorities on nuclear proliferation, and second because DeVolpi does not suffer fools gladly. And Lovins is nothing if not a fool; only a fool would have argued with DeVolpi at Argonne about nuclear proliferation. For Amory Lovins to argue with DeVolpi on nuclear proliferation is more than a little like Alfred E. Neuman attempting to argue with Albert Einstein about the theory of relativity.
Nothing illustrates the confusion of the Era of Confusion better than the fact that Amory Lovins is imagined to be an expert on energy by Al Gore and hundreds of other political and business leaders in contemporary American society. Lovins tends to overwhelm his auditors with a dense presentation of supposed facts and references that cannot be easily or quickly deconstructed. Thus an authorizing link may in fact lead to a statement published in an obscure South African humor magazine, locked behind a paywall, with the payment of a fee required for admission. Another Lovins trick is to reference a statement which he himself made 30 years ago, without any further support. Unless the auditor has a copy of the collected works of Amory Lovins, it is impossible to determine whether that 30-year-old statement rested on evidence just as weak as the statement presented today.
Dr. DeVolpi can recognize shabby tricks for what they are, but most people, including Al Gore, doubt their own intelligence when confronted with a MacArthur Genius. To such people I can only recommend a reading of the story “The Emperor’s New Clothes” together with Immanuel Kant’s essay “What Is Enlightenment?” But such remedies may come with distress, because they invariably require a person to look at how he or she thinks and feels, and then to think about the answers to the questions those thoughts and feelings raise.
Dr. DeVolpi has written excellent and instructive Google Knols that should be read by anyone who wishes to think about, let alone openly discuss, nuclear issues like proliferation and nuclear safety. Beyond simply talking about proliferation issues, Dr. DeVolpi looks at a question that clearly strays into an area that might be described as practical epistemology: in particular, how can we know whether someone who claims expertise actually possesses authority? Dr. DeVolpi shreds claims to authority on such matters with particular zest:
For progress in non-proliferation, we need be saved from the assumed or accorded authoritarianism of well-intentioned professors, especially from the East Coast, who have titles mistaken as credentials. Frank von Hippel of Princeton comes to mind. Notwithstanding good intentions, pleasant personality, teaching experience, and published papers — these do not constitute hands-on field or laboratory experience. Nor should one count time spent in Washington corridors, offices, and conference rooms.
I hold Frank partially responsible for the decade-long hiatus in reaching agreement with Korea on nuclear demilitarization, for decades of lack of progress in conversion of the Siberian plutonium reactors, for stalling growth of nuclear power in the United States, for misrepresenting the weaponizability of reactor-grade plutonium, and for sustaining radiophobia.
On the latter point, over two decades after the Chernobyl accident, Frank is yet to acknowledge in print that he was utterly wrong in projecting or implying a huge number of fatalities due to the accident. He and others cling to unvalidated beliefs regarding the effects of low levels of radiation . . .
DeVolpi scorns the authority of the under-experienced:
were it not for the professors of the 1930s and 1940s who gained hands-on laboratory and field experience, we would not have succeeded in the timely development of nuclear weapons and nuclear reactors. With the demise of Hans Bethe and Pief Panofsky, a good example remaining is Dick Garwin (aside from some uncharacteristic overreaching he has done with regard to Chernobyl cancer projections).
DeVolpi offers us nothing less than a phenomenology of expertise.
Expertise isn’t fungible; it can’t be bought or transferred; it’s accumulated from sometimes-tedious, but aggregate years of hands-on experience. Nor is anyone’s accumulated expertise unique or exclusive; some individuals have subsets of very relevant knowledge or experience that include skills and understanding of energy released in fission or fusion, or of policies and implications regarding nuclear weapons. These fields of knowledge overlap and supplement each other; there are no islands of expertise.
Incidentally, professional conferences and lectures are a common adjunct for keeping up to date, but they do not contribute directly to hands-on experience in the functioning of complex equipment. The same can be said about presentations, lectures, and facility visits; these are an integral aspect of technical development, but they are not substitutes for actual laboratory or field development, construction, experimentation, and analysis.
To a certain extent DeVolpi is an advocate of the tacit knowledge tradition, without formally acknowledging Michael Polanyi's writings on the concept. In addition to focusing on the practical dimension of expert knowing, Dr. DeVolpi also focuses on ethical issues implicit in expert knowledge claims that are authorized by a presumed claim to scientific authority. In particular, Dr. DeVolpi focuses on what might be called a phenomenology of scientific authority. That is, he offers a description of the essential characteristics of a knowledge claim that is advanced with scientific authority. That description did not originate with Dr. DeVolpi; rather, he found it in the U.S. Supreme Court's 1993 statement of standards for testimony regarding areas of science that require an explicit estimate of probabilistic error. That statement, presented in connection with the court's ruling in Daubert v. Merrell Dow Pharmaceuticals, set forth a four-part qualifying standard:
whether the theory or technique can be (and has been) tested; whether it has been subjected to peer review and publication; whether it is generally accepted within the relevant scientific community; and its known and stated rates of error.
Judiciaries have retrospectively encountered deficiencies in ad hoc scientific/technical testimony and in forensic evidence that did not fully comply with a standardized methodology. Individuals have been wrongfully convicted of crimes; cancer and other illnesses have been incorrectly attributed; and epidemiological data has sometimes been misrepresented.
This is an ethical issue in DeVolpi's view. Claims to scientific authority which do not conform to the Supreme Court's Daubert standard are not just epistemologically flawed, they are also ethically flawed, and such scientifically wrong statements have led to moral wrongs.
DeVolpi focuses primarily on a case study of the overstatement of the anticipated public health consequences of the Chernobyl and TMI reactor incidents. He observes:
The scientific, technical and journalistic professions, though not alone, must share significant responsibility for premature and exaggerated predictions that have not materialized nor been rectified.
This professional ethical lapse has had serious public consequences:
Unsubstantiated characterizations contribute to public confusion, rather than clarification. Inordinate risk estimates have led to the expenditure of tens of billions of dollars to protect against dangers whose existence is highly questionable.
(In Part II, I will address the substance of Dr. DeVolpi’s “Bill of Particulars” against Amory Lovins.)
David LeBlanc has joined the Google Tech Talk roster with an excellent presentation on LFTR/MSR technology. David does an excellent job of exploring the diversity of technological options and their rationale and value.
From The Abstract:
David’s Ph.d in physics was completed at University of Ottawa (1998) on high temperature superconductors. During this period, he developed a great interest to pursue both fission and fusion reactor design basics, which separately cumulated in a long term fellowship from the Canadian Fusion Fuels Technology Project (later ITER Canada) for his work on the use of high Tc superconductors in the fusion field and also work for Atomic Energy of Canada Limited on worldwide reactor design comparisons. Since then he has been teaching at the Carleton University physics department and continued his investigations primarily in the field of Molten Salt Reactors, also known as Liquid Fluoride Reactors. David founded Ottawa Valley Research Associates Ltd to expand these efforts and has completed a license agreement with a European firm with a goal of development of a new generation of Molten Salt Reactors.
The Gore plan, the Google plan, the energy writings of Joe Romm, the views of the Internet site Gristmill, and other self-proclaimed energy authorities all maintain the view that an all-renewable grid is possible. Some time ago I attempted to evaluate the theory of reliable wind suggested by Mark Z. Jacobson. Jacobson argued, based on empirical data from 17 sites in the southwestern Great Plains, that wind generation could be made reliable by building grid links between those sites. Jacobson found that the linked sites could be expected to produce at least 20% of their rated capacity 80% of the time. Jacobson further argued that this reliability approached that of base-load generation. My analysis, using a 2008 wind cost estimate of $2500 per kW and evaluating the Google energy plan, found that the 380 wind GWs called for by the Google plan would cost $900 billion to install. This estimated installation cost did not include the expansion of the grid that would be needed to transmit the electricity from the windmill array to consumers. I found that the linked wind array could be counted on to produce about 80 GWs of electricity 80% of the time. The linked wind system, however, had a serious flaw: it could not deliver power on hot summer days, when electrical demand peaked.
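The arithmetic behind these figures can be checked with a quick back-of-envelope sketch (the nameplate, reliability, and unit-cost numbers are the post's own; this is a consistency check, not a cost model):

```python
# Consistency check of the linked-wind figures quoted above.
nameplate_gw = 380        # wind capacity called for by the Google plan
reliable_fraction = 0.20  # fraction of rated capacity available 80% of the time
cost_per_kw = 2500        # 2008 installed cost estimate, $/kW

reliable_gw = nameplate_gw * reliable_fraction
install_cost = nameplate_gw * 1e6 * cost_per_kw   # GW -> kW, then $/kW

print(reliable_gw)          # 76.0 -> the post's "about 80 GWs"
print(install_cost / 1e9)   # 950.0, which the post rounds to roughly $900 billion
```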
I stipulated a cost for new West Texas wind of $2250 per nameplate kW in 2009. This price was at the low end of 2008 windmill costs in North America. Since the capacity factor of West Texas wind runs around .40, a West Texas wind producer can expect to pay $5625 for every kW of average generating capacity. Since only 70% of the electricity entering the CAES facility reaches the consumer, the wind producer must increase his wind generating capacity by 30% to compensate for the energy loss. Thus the price of the wind-generated electricity entering the CAES facility must compensate the wind producer for something like an $8000 capital investment for every average kW sold by the CAES facility. When added to the $765 per kW capital investment in the CAES facility, and the cost of the natural gas used with CAES technology, we get a very ugly picture of the cost of wind-generated electricity.
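The wind-plus-CAES figures above follow from two divisions, which can be sketched as follows (all inputs are the post's own figures):

```python
# Back-of-envelope check of the wind + CAES capital cost figures above.
nameplate_cost = 2250    # $ per nameplate kW of West Texas wind (2009)
capacity_factor = 0.40   # typical for West Texas

# Cost per *average* kW actually generated:
cost_per_avg_kw = nameplate_cost / capacity_factor
print(cost_per_avg_kw)   # 5625.0 -> the post's $5625 figure

# CAES round-trip losses: only 70% of the input electricity reaches
# consumers, so the wind side must be oversized by a factor of 1/0.70:
caes_throughput = 0.70
cost_per_delivered_kw = cost_per_avg_kw / caes_throughput
print(round(cost_per_delivered_kw))  # ~8036 -> the post's "something like $8000"
```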
In my pumped storage study, I reviewed the cost of the Northfield Mountain pumped storage facility in New England. I calculated the cost of the 1080 MW facility at 3.7 billion 2008 dollars, using the 1972 cost and a standard conversion table. I noted that an estimated 2008 cost for a reactor of similar capacity would be around $5 billion. The pumped storage facility had the ability to deliver power for 10 hours at a time, while the reactor could be expected to deliver power continuously at least 90% of the year.
In order to produce electricity for the pumped storage facility, a wind generating array would have to be built. The cost of that array would have to be recovered when electricity from the pumped storage facility was sold. Pumped storage operates at 75% efficiency; that is, 25% of the energy input is lost before electrical output. Thus, assuming a very generous West Texas capacity factor of .40, a wind array with a rated output of 1400 MW, operating 24 hours a day, would be required to fill the pumped storage facility. Let's assume costs at the low end of the 2008 range for windmills, say $2250 per kW. The wind array required to fill the pumped storage facility would then cost $3.15 billion. That gives us a figure of close to $7 billion to be financed by the sale of peak electricity from the pumped storage facility. Seven billion dollars is a large investment for electricity that would only be available 10 hours a day. Since a reactor capable of producing a similar amount of electricity 24 hours a day could be had in 2008 for 2 billion dollars less, the reactor is the better deal.
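The pumped-storage comparison can be reproduced with the same kind of rough arithmetic (figures from the post; the sizing step is an approximation):

```python
# Rough check of the pumped-storage comparison above.
facility_mw = 1080        # Northfield Mountain rated capacity
hours_per_day = 10        # hours of delivery per day
efficiency = 0.75         # pumped-storage round-trip efficiency
capacity_factor = 0.40    # assumed (generous) for West Texas wind
wind_cost_per_kw = 2250   # $/nameplate kW, low end of the 2008 range
storage_cost = 3.7e9      # facility cost in 2008 dollars

# Daily energy delivered, and the wind input needed to refill the reservoir:
delivered_mwh = facility_mw * hours_per_day        # 10,800 MWh/day
input_mwh = delivered_mwh / efficiency             # 14,400 MWh/day

# Nameplate wind capacity needed to supply that input around the clock:
nameplate_mw = input_mwh / (24 * capacity_factor)  # 1500 MW
# (the post works with a round 1400 MW figure; the conclusion is the same)

wind_cost = 1400 * 1000 * wind_cost_per_kw         # $3.15 billion
total = storage_cost + wind_cost                   # ~$6.85B, "close to $7 billion"
print(round(total / 1e9, 2))
```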
Finally, I examined battery storage with wind. Battery storage appears to be the most expensive electrical reliability/storage system. After producing an estimated cost of wind plus battery storage, I then looked at the cost of a non-storage backup system for wind: a conventional nuclear reactor. The reactor was actually less expensive than a combined windmill and battery backup system. In addition, the nuclear system would be so reliable that wind generation could be dispensed with and the system could rely entirely on nuclear power.
Post-carbon electrical generating systems require reliability. During the last few months I have produced four case studies of the cost of making wind-generated electricity reliable through the use of different technologies. Renewable advocates often complain that nuclear power is too expensive. My assessments show that reliable wind capital costs would be higher than the capital costs of nuclear-generated electricity in any of the noted cases. Facility costs for PV and solar thermal power would be considerably higher per kW than wind in any of the noted cases. Given that the capacity factor for Southwestern solar is not much higher than .20, it seems likely that reliable solar would be even more expensive than reliable wind; however, since I have not studied the economics of solar storage systems, this is impossible to confirm.
My current research has not focused on other hidden costs of renewables generation. These include the cost of the new transmission lines required by renewable generation systems, to be borne by ratepayers; the cost of federal and state tax-based subsidies; and the cost of keeping grid voltage stable. None of my case studies would support the contention that the cost of reliable wind would be competitive with conventional nuclear as a source of reliable electricity.
My conclusions have been acknowledged by some of the more sober-minded supporters of the renewables paradigm. It is my contention, then, that conventional nuclear power will cost less than reliable renewable electricity in a post-carbon grid, and that national energy priorities ought to be rethought in light of the evidence that conventional nuclear power is the lower-cost option. If conventional nuclear power is too expensive, then renewables are even more expensive. Thus we need to find a lower-cost electrical generation option.
Aim High got a good mention and a link on Brian Wang's blog Next Big Future. Aim High is the name Dr. Robert Hargraves gave to the plan to mass-produce Liquid Fluoride Thorium Reactors. I have decided to endorse the "Aim High" name, and of course I supported the plan before Bob coined the name. The Aim High plan is the only really viable plan to create a post-carbon energy economy by 2050. The viability of the Aim High plan stems from its relatively low cost, and from its potential to quickly build and set up large numbers of small, safe, reliable, nuclear-waste-destroying LFTRs that can generate electricity anywhere, 24 hours a day, 365 days a year.
One assumption of the Aim High concept directly challenges an assumption of the conventional nuclear industry: the Aim High concept assumes that there are major cost advantages in serial reactor manufacture at factories rather than custom on-site reactor construction. This argument might be challenged by reference to the reported Chinese cost projections for factory-produced Pebble Bed Modular Reactors that were recently discussed by Brian Wang. The Chinese do not assume that early factory-built PBMRs will be lower cost than on-site manufactured reactors. The Chinese envision manufacturing hundreds of PBMRs in their reactor factory, yet they do not appear to anticipate that even the serial production of 248 PBMRs would lead to significant cost advantages for the PBMR. The Chinese do anticipate a significant 30% to 40% per-unit savings from experience-based learning.
What then is responsible for the Chinese cost data? First, it should be observed that the PBMR has extremely low power density. The PBMR pressure vessel is the same size as that of a Light Water Reactor which produces 5 times the power output, so the PBMR will require greater materials input for a given unit of power output than the LWR. This suggests that PBMR components may be assembled in the factory, but not the entire reactor. Even relatively small PBMRs would be too heavy and bulky for truck or rail transportation, so reactors would be assembled on site from factory-manufactured kits. The Chinese probably anticipate on-site construction of LWRs from factory-produced kits as well, so actual manufacturing conditions for PBMRs would not differ greatly from those of Chinese LWRs. It is also argued that PBMR simplicity would lead to lower reactor costs, but a glance at the PBMR design suggests that it might not be all that simple compared to the LFTR design.
Wang refers to production plans for about 250 reactors. While this is a very large number, it would not be large enough to justify the installation of large labor-saving production machines. If reactors are manufactured in kit form at the factory, there would be no assembly lines. Thus many of the cost-saving advantages of factory production would be lost.
The Chinese appear to anticipate that the primary savings from factory-manufactured PBMRs will come from the learning curve. This would suggest that capital costs are already low, since the capital cost per unit would be assumed to decrease over time as loans were paid off. Indeed, financial costs, taxes, insurance and contingencies account for around 20% of Chinese reactor costs, and this combined category is not significantly higher for Chinese-built LWRs than for PBMRs. We can assume, then, that manufacturing techniques for Chinese PBMRs do not differ significantly from those used to manufacture Chinese LWRs, and that materials inputs will be, if anything, more expensive per kW of electrical output than would be the case for Chinese LWRs. It would appear, then, that the cost advantages of serial production do not greatly outweigh the cost advantages of economies of scale.
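Experience-based learning of this kind is conventionally modeled with Wright's law: each doubling of cumulative production cuts unit cost by a fixed fraction. A sketch shows that a 30% to 40% saving by unit 250 corresponds to quite modest per-doubling learning rates (the rates below are illustrative assumptions, not Chinese data):

```python
import math

def unit_cost(n, first_unit_cost=1.0, learning_rate=0.05):
    """Wright's-law cost of the n-th unit: each doubling of cumulative
    output reduces unit cost by `learning_rate` (e.g. 0.05 = 5%)."""
    b = math.log2(1 - learning_rate)
    return first_unit_cost * n ** b

# What learning rate per doubling yields ~30-40% savings by unit 250?
for lr in (0.04, 0.05, 0.06):
    saving = 1 - unit_cost(250, learning_rate=lr)
    print(f"{lr:.0%} per doubling -> {saving:.0%} saving at unit 250")
```

Since 250 units is about eight doublings, even a 4% to 6% cost reduction per doubling accumulates to roughly the 30% to 40% range the Chinese projections cite.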
Financial costs are a far less significant cost factor in China than they would be for reactors built in other countries. Because financing costs are low to begin with, the shorter PBMR construction time would not translate into significant cost savings. Finally, any savings in PBMR labor costs would probably be balanced by the greater cost of materials.
It would appear, then, that Chinese PBMR costs do not provide us with a comparative insight into LFTR costs. Recent reports from China indicate that the Chinese are paying about $1.60 per watt for new reactors. This would appear to be less than half the cost of reactors in Europe or North America. Recent Indian reports anticipate costs as low as $1.40 per watt for Indian-manufactured LMFBRs. This indicates a significant post-carbon electrical cost advantage for the emergent Asian economic superpowers. This advantage does not stem from a potential cost savings in design or manufacturing techniques. Thus roads to greater nuclear cost competitiveness are still open to the North American, European, and Northeast Asian economies.
In May of 2008, I published a series of posts on the cost-lowering potential of LFTR technology. I was able to point to a number of areas in which significant cost-lowering potential was present, especially in comparison with Light Water Reactor costs. I did not attempt to assign a cost number, but a figure of 25% of the cost of conventional nuclear power generating reactors seemed plausible. It is possible that LFTR costs might not be that low if attention is not paid to the rigorous application of these cost-lowering potentials.
Why does the LFTR have such cost lowering potential?
First, because its design is very simple compared to LWRs. LWRs require many hundreds of valves, miles of pipes, numerous pumps, thousands of supports, miles of cables, hundreds of embedded instruments and other parts, each built to very exacting specifications and requiring intensive, highly skilled labor for their installation. Mistakes in parts installation may necessitate large-scale rebuilding. Constant monitoring of the quality of reactor construction is of the utmost necessity, and consumes many hundreds of thousands of hours of supervisors' time. The LWR has two active control systems. It has at least two cooling systems, and a massive 8-inch-thick pressure vessel that surrounds the reactor. The LWR operates under high-pressure conditions: coolant water is pumped under high pressure through hundreds of channels in the reactor's core. The entire core coolant system is built to very exacting specifications, because even a minor failure of the coolant system could lead to core damage. Thus the LWR is a highly complex machine, requiring hundreds of thousands of parts and millions of hours of labor to construct. LWR construction requires great organizational skills and extremely diligent supervision. It is relatively easy for very costly problems to emerge during construction. For example, if cement used in the reactor construction does not conform to design specifications, structures built with the inferior cement have to be torn down and rebuilt. Such problems can lead to delays in the construction schedule, and significantly contribute to project cost. The construction method, which requires a huge amount of on-site labor, is very difficult to organize and control. Problems associated with this complex construction labor system contribute to overall reactor costs.
In contrast, Molten Salt Reactors, including the LFTR, are very simple. A liquid coolant-fuel mixture is pumped through the reactor core, then through a heat exchanger, and back into the reactor. There may be a secondary cooling system in case the primary system breaks down, and there would also be an emergency backup system that would automatically drain the core if both cooling systems failed. The core would be drained into specially designed tanks that would be passively cooled by a naturally circulating air or water coolant system.
The LFTR reactor can be designed to include several small chemical processing units. These units greatly improve the efficiency of LFTR operations. While they add to LFTR complexity, even with a full array of chemical processing units, the LFTR is still far less complex than the LWR. One of the keys to lowering LFTR costs is the simplicity of the LFTR design.
The LFTR operates at a much higher temperature than LWRs. For that reason a small LFTR will operate with greater thermal efficiency than a very large LWR. This opens the door to greater design flexibility. With LFTRs, designers have significant design options related to size. LFTRs can be designed for modular use. Several small LFTRs can be clustered to produce the power equivalent of one large reactor. There are a number of advantages in doing so. Small reactors can be factory built, and transported by truck, train or barge to a final set up site. Factory production would use labor more efficiently, and product quality could be much more easily controlled. A serial production process would lead to rapid learning, improved quality and lower price. Reactors could be built in periods of a few months rather than several years. The significantly shorter construction time would have a positive effect on overall reactor costs:
A. The cost of accrued interest during the construction phase would be significantly less for factory-built small reactors than for site-built large reactors.
B. Small reactors can be set up and brought on line within months of being ordered, thus quickly adding to the owner’s revenue stream.
C. Small, low-cost, quickly set-up reactors carry fewer risks; a lower risk premium means lower capital costs.
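Point A can be illustrated with a toy interest calculation. The 8% rate, five-year site build, six-month factory build, and $5 billion capital figure below are all illustrative assumptions, not industry data; the point is only the order-of-magnitude gap:

```python
# Toy illustration of point A: interest accrued during construction when
# capital is drawn in equal monthly installments and compounds monthly.
def accrued_interest(capital, years, annual_rate=0.08):
    months = int(years * 12)
    draw = capital / months   # equal monthly capital draw
    r = annual_rate / 12      # monthly interest rate
    # each draw compounds for the months remaining until completion
    total = sum(draw * (1 + r) ** (months - i) for i in range(1, months + 1))
    return total - capital

site_built = accrued_interest(5e9, years=5)   # large site-built reactor
factory = accrued_interest(5e9, years=0.5)    # factory-built modules
print(f"site-built: ${site_built/1e9:.2f}B, factory-built: ${factory/1e9:.2f}B")
```

Under these assumptions the site-built project accrues over a billion dollars of interest before producing a single watt, while the short factory build accrues an order of magnitude less.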
Small modular LFTRs allow greater flexibility in reactor siting and housing:
A. Old coal and natural gas power plant sites can be recycled with considerable economies.
B. New grid hookups would not be required; LFTRs sited at old power plant sites could use the existing grid hookup.
C. On site facilities could be reused, decreasing construction expenses.
D. The location of small reactors close to electrical consumers would conform more closely to a distributed model, and would assure greater grid stability by bringing electrical production close to the customers. This reduces the necessity of making costly additions to the grid.
LFTRs do not require expensive, hard-to-obtain materials that could compromise LFTR production. In addition to the materials options explored at ORNL between the 1950s and the 1970s, a variety of other materials options appear to be available. These include carbon-carbon composites, which can be used in very high performance LFTRs that produce industrial process heat, and commodity materials like stainless steel, which can be used to lower LFTR costs even further. Ultra-low-cost LFTRs might be used to replace the natural gas fired generators that currently serve as peak reserve capacity.
The LFTR opens the door to relatively novel, innovative and low-cost housing options that can enhance reactor safety while lowering costs. Underground or underwater housing should be explored. LFTRs can be housed in underground chambers on existing power plant sites. Such chambers need not require the massive amounts of steel and concrete that present reactor containment structures do. Underground reactors would have superior protection against terrorist attacks by car bombs or large aircraft. They could easily be made relatively impervious to attempts by terrorists to seize the reactor. Finally, underground reactors can be designed to include multiple anti-proliferation barriers, making the underground reactor a highly undesirable target for would-be proliferating nuclear terrorists. These very desirable goals can be achieved without the high cost of massive above-ground containment structures.
In addition to the use of recycled coal plants for underground sites, mines could be used for LFTR siting. Salt mines make a very interesting option, with the potential of clustering a relatively large number of modular LFTRs in salt mines, with no above ground structures required.
In addition to the many cost lowering options available to LFTR designers and operating utilities, the LFTR can be designed to operate without operational staff. Since the LFTR is highly stable under passive control, no operator input is required in order to assure the highest level of safety. Thus while security staff would be required to protect LFTRs from terrorists and misguided acts of vandalism, no operators are required for safe and efficient LFTR operation and an operational staff would be redundant. This would lead to further economies in LFTR design, construction and operation.
It is quite obvious that LFTR costs could be substantially less than Light Water Reactor costs. At the moment it is future LWR costs that are far from clear. My own guess is that if all of the LFTR economies I have mentioned were implemented, the relative cost savings of the LFTR would be greater than 50% of the cost of Light Water Reactors, and LFTRs might well cost 25% of the cost of LWRs. If we consider that large amounts of LFTR fuel have already been mined and are currently regarded as mine waste, the LFTR could well turn out to be the greatest bargain of the 21st century.
My advocacy for the Aim High concept is fundamentally political. Robert Hargraves named the Aim High energy concept and has explained it. The goal of the Aim High project is the rapid development and deployment of a very large number of Liquid Fluoride Thorium Reactors, not only in the United States but in most of the world, between 2020 and 2050. I view the LFTR as a lowest-cost potential energy source that is safe, pollution free, and sustainable. This Aim High goal cannot be reached unless it becomes a matter of United States government and international policy. Thus my goal on Nuclear Green, and in my other postings, is the adoption and implementation of the Aim High Project as national and international policy.
It would not be rational to promote the Aim High Project as good policy without explaining why competing policy options are less desirable. The two options I have reviewed are the renewables option and the conventional nuclear option. My goal in these studies has been to demonstrate that neither the renewables option nor the Light Water Reactor option is likely to produce a post-carbon energy system. I have argued for the desirability of this goal because the use of fossil fuels has undesirable consequences even if the Anthropogenic Global Warming issue is excluded. In addition, a plausible case has been made that global oil production will soon begin to decline. This concern leads to a policy consideration that electrical technology be substituted for fossil fuel technology in transportation. This possibility has not been well thought through by either renewables advocates or the conventional nuclear industry, but it could lead to a significant increase in electrical demand.
Renewables advocates appear to believe that a large power production gap will exist in a renewables generating system and that the gap can be filled through greater efficiency. This appears unlikely if transportation is electrified to any considerable extent. Thus the renewables option would appear to leave an energy gap, especially under electrified transportation.
The French model demonstrates that conventional nuclear power can take over the provision of electricity on a national scale. The French nuclear model also provides for some electrification of transportation. However, American nuclear advocates have not advanced the claim that the American electrical system can or should be entirely converted to conventional nuclear technology.
Both the renewables model and the conventional nuclear model leave significant questions to be answered before they can be considered good policy options. Among the most serious is the question of cost. I have tried to show that making renewable-generated electricity reliable raises its costs. Indeed, the cost of reliable renewable-generated electricity appears higher than that of nuclear units with comparable electrical output.
Even the highest estimated costs of nuclear-generated electricity appear to be lower than the likely costs of reliable renewables. Thus there ought to be a considerable policy preference for conventional nuclear generation of 24/7 base electricity. There are nevertheless problems with a nuclear base. First, the economics of nuclear power are such that running LWRs at full power on a 24/7 basis yields the best return. While conventional reactors can produce power on a 16/7 or 16/5 basis, this would increase the cost of power from high-priced nuclear facilities to customers. One exception would be the use of older reactors for 16/7 or 16/5 electrical generation, since older reactors are already paid for. Conventional reactors do not make a good fit with peak reserve requirements; peak reserve capacity is usually characterized by low capital cost and high fuel costs. In addition, older and inefficient coal-powered generation facilities are also assigned peak reserve roles.
Wind-powered generating facilities without storage are inappropriate for any generating role requiring reliability. In most localities peak wind capacity often occurs during the night, when electrical demand has ebbed. During the day, by contrast, wind speed often slackens in many localities while electrical demand increases. Finally, over much of North America wind produces almost no electricity during the hottest days of summer. Only with electrical storage does wind emerge as an important post-carbon power source. But while storage adds to the reliability of wind, it also increases wind's capital costs. With 16-hour-a-day power-on-demand reliability, wind has no cost advantage over nuclear, and with 24-hour-a-day reliability, wind is at a decided cost disadvantage compared to conventional nuclear.
Solar-generated electricity has many liabilities even compared to wind. Even in the most favorable localities, solar facilities produce only about 20% of their rated power over a day, and then only under very favorable climatic conditions. Both clouds and winter adversely affect solar output. It is sometimes suggested that wind and solar complement each other, solar having its peak output during the day and wind at night, and so on. The disadvantages of such a hybrid system become apparent when one calculates the cost of the redundant capacities and the number of hours each year during which neither wind nor sun, alone or in combination, will generate enough electricity to satisfy consumer demand. Thus without storage renewables are not reliable, and with storage renewables are more expensive than nuclear. Renewables advocates sometimes attempt to solve these limitations by pointing to the grid as an adjunct to renewable power. But this assumes that carbon-based power would continue to be available in a post-carbon energy era, and actually quite a lot of it. This leads to the paradox that renewables in a post-carbon energy scheme would require the continued presence of carbon-based generating capacity in order for the grid to be reliable.
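The cost argument of the last few paragraphs can be made concrete with back-of-the-envelope arithmetic. The sketch below is purely illustrative: the capital costs, storage cost, and capacity factors are hypothetical placeholders, not sourced estimates. The point is the method, dividing nameplate cost by capacity factor to get the cost of capacity that can actually be counted on, which shows why firming up intermittent wind with storage erodes any cost advantage over nuclear.

```python
# Illustrative sketch: how adding storage raises the effective capital cost
# of wind per unit of *reliable* capacity. All figures below are hypothetical
# placeholders, not sourced estimates.

def cost_per_reliable_kw(capital_per_kw, capacity_factor, storage_per_kw=0.0):
    """Capital cost per kW of firm (reliable) output.

    Dividing by the capacity factor converts nameplate cost into the cost
    of capacity that can be counted on, on average; the cost of storage
    needed to firm the output is then added on top.
    """
    return capital_per_kw / capacity_factor + storage_per_kw

# Hypothetical inputs (USD per nameplate kW, dimensionless capacity factor):
wind_alone   = cost_per_reliable_kw(2000, 0.30)        # intermittent, no storage
wind_storage = cost_per_reliable_kw(2000, 0.30, 3000)  # firmed with storage
nuclear      = cost_per_reliable_kw(6000, 0.90)        # baseload reactor

print(f"wind, no storage:  ${wind_alone:,.0f} per firm kW")
print(f"wind with storage: ${wind_storage:,.0f} per firm kW")
print(f"nuclear:           ${nuclear:,.0f} per firm kW")
```

Under these invented numbers, storage, not the turbines themselves, is what puts firmed wind at a cost disadvantage to nuclear; changing the placeholders changes the totals but not the structure of the comparison.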
My conclusion, then, is that neither the mostly-renewables grid nor the mostly-conventional-nuclear grid would work well or provide low-cost electricity. In contrast, the LFTR grid would work well and at a far more modest cost. Thus advocacy of an Aim High oriented energy policy can and should include a discussion of the cost, reliability, and other advantages of the Aim High option compared to the conventional nuclear and renewables options.
I have been criticized for discussing the disadvantages of both renewables and conventional nuclear power, especially in comparison to Aim High LFTR technology. But it is difficult to portray the advantages of the Aim High project without pointing to its cost and other advantages over the conventional nuclear and renewables options. What sort of advocacy would refrain from pointing to the advantages of its preferred course?
Part of the Aim High project is the development of cost savings that are not possible with other energy approaches.
We should not plan the energy future without acknowledging economic fact.
In 1948 exploration of reactor technology was well underway. Most reactors had cores made of solid materials, for example uranium metal clad in aluminum. A second line of reactor development, which began with the original chain reactor experiment at the Cavendish Laboratory and continued with a reactor experiment at Los Alamos, involved the use of uranium compounds dissolved or suspended in water. This reactor was called the Aqueous Homogeneous Reactor. In 1948 reactors were cooled by air, some other gas, or water. Research was underway on the use of molten sodium metal as a reactor coolant. Alvin Weinberg had proposed the use of water under pressure as a reactor coolant, a concept with the potential to control the heat produced by the reactor and put it to useful work powering ships or driving electrical turbines. The technology attracted the attention of the United States Navy and eventually led to the development of the nuclear-powered submarine. Naval reactor technology also had potential for electrical production, and the Navy set up the first project to demonstrate civilian electrical production at Shippingport, Pennsylvania.
Meanwhile the Air Force, interested in reactor technology to power bombers, sponsored aircraft reactor research in Oak Ridge. The original aircraft reactor concept explored by engineers at the K-25 facility in Oak Ridge involved the use of liquid sodium as a coolant. That concept had a very significant safety defect, and in 1947 three K-25 engineers, V.P. Calkins, Kermit Anderson, and Ed Bettis, began to explore a radical reactor concept involving the use of hot liquid fluoride salts. This was a natural direction, because in 1947 K-25 was the largest industrial facility using fluoride chemistry. The three engineers researched the possibility of using liquid fluoride salts as a reactor moderator, fuel carrier, and reactor coolant. The K-25 research led to the Molten Salt Reactor concept. When the aircraft reactor project was transferred to ORNL in 1950 and assigned to the brilliant chemist Ray C. Briant, Ed Bettis pitched the Molten Salt Reactor to Briant. Briant and Bettis in turn pitched it to Weinberg, and it was agreed that the defective K-25 sodium-cooled aircraft reactor concept should be scrapped and the promising liquid-salt reactor concept become the focus of ORNL Aircraft Nuclear Propulsion research.
During the next few years a radically different reactor concept was to emerge in Oak Ridge. Conventional reactors are much-evolved versions of Alvin Weinberg's water-cooled reactor. They feature complex cores containing ceramic uranium dioxide fuel clad in zirconium metal. This fuel system prevents the escape of radioactive fission products into the cooling water, but it creates considerable difficulties for processing the fuel for recycling and for the extraction of fission products. UO2 is also a very poor heat conductor, and the fuel pellets inside conventional reactors become very hot, so much so that if the reactor cooling system fails, the UO2 fuel, which melts at 2800°C, could melt down and create an unholy mess.
The water-cooled reactor is just that: water cooled. A system of pipes carries water through the core, where it extracts heat from the fuel pellets. Water boils at 212°F under atmospheric pressure, but scientists had long known that if water is kept under high pressure, its boiling point goes up, and engineers had discovered that power conversion becomes more efficient if water is pressurized and prevented from boiling at 212°F. To keep the water from boiling inside the sort of reactor the Navy uses, the reactor is placed inside a massive steel pressure vessel and the water within is pressurized. The pressurized water is superheated, and because it is under pressure it does not turn to steam inside the reactor. A second type of reactor, the boiling water reactor, operates at somewhat lower pressure, and the pressurized water begins to turn into steam in the upper part of the reactor.
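The pressure-boiling-point relationship described above can be put in numbers. The sketch below uses the Antoine equation with the commonly tabulated constants for water in the high-temperature range; 155 bar is a typical PWR primary-loop pressure. The result is an approximation for illustration, not engineering data.

```python
import math

# Estimating water's saturation (boiling) temperature at a given pressure
# by inverting the Antoine equation: log10(P_mmHg) = A - B / (C + T_C).
# Constants are the commonly tabulated set for water, ~100-374 C.

A, B, C = 8.14019, 1810.94, 244.485

def boiling_point_c(pressure_bar):
    """Saturation temperature of water (deg C) at the given pressure (bar)."""
    p_mmhg = pressure_bar * 750.062          # 1 bar = 750.062 mmHg
    return B / (A - math.log10(p_mmhg)) - C

print(f"1 atm   -> {boiling_point_c(1.01325):6.1f} C (212 F)")
print(f"155 bar -> {boiling_point_c(155.0):6.1f} C")   # typical PWR pressure
```

At 155 bar the equation puts the boiling point near 345°C, which is why PWR coolant at roughly 300–330°C stays liquid: the operating temperature is held below the saturation temperature at operating pressure.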
Reactors cooled with pressurized water are quite complex, can be quite large, and pose a number of problems. The presence of pressurized water creates the danger of a steam explosion. Pressurized water can also leak from pipes outside the reactor, creating the danger that the reactor will be starved of coolant. Coolant failure can lead to core meltdown, as it did at Three Mile Island. Core meltdown can lead to containment breach, either by melting through the steel pressure vessel or by releasing hydrogen gas, which in turn can explode with enough force to rupture the pressure vessel.
Pressurized water reactors and their cousins, boiling water reactors, can be made safe, but at a significant price in complexity and weight. Light water reactors have control issues. Chain reactions may not be uniform throughout the reactor, and operators may need to employ control rods to prevent excess reactivity in parts of the reactor, which can lead to local overheating and core damage. This necessitates an elaborate system of sensors inside the reactor, along with equally elaborate instrumentation designed to provide operators with detailed information about core conditions. During the 1970s reactor operators could be swamped with information, leading to confusion and operator errors; this happened during the Three Mile Island accident. Computer systems are now in place to manage the flow of information from inside the reactor and to assist human operators in managing pressurized water reactors. Recent designs of pressurized water reactors have impressive safety features and can be described as demonstrating revolutionary improvements in safety over earlier generations of water-cooled reactors. They are also very expensive, and they still use enriched uranium dioxide fuel that is expensive and difficult to reprocess. Pressurized water reactor technology is stuck with once-through fuel technology and the problem of nuclear waste.
When Ray C. Briant and Ed Bettis approached Alvin Weinberg in 1950 to discuss the Molten Salt Reactor concept, Weinberg was already aware of the shortcomings of his invention, the pressurized water reactor. Weinberg's mentor Eugene Wigner believed that the Aqueous Homogeneous Reactor was a better route to low-cost electrical energy than the pressurized water reactor, and Weinberg was pushing Aqueous Homogeneous Reactor research at Oak Ridge. Ed Bettis' Molten Salt Reactor had many of the attractive features of the homogeneous reactor without some of its drawbacks, but it took Weinberg some time to realize that the MSR represented the preferred alternative to the pressurized water reactor.
Both the Aqueous Homogeneous Reactor and the Molten Salt Reactor featured a liquid fuel-coolant mixture. The mixture was pumped into and out of the core, where moderation and geometry enabled criticality. Eugene Wigner had been attracted to the Aqueous Homogeneous Reactor because its fuel could be continuously run through chemical processors outside the core. This meant that neutron-eating fission products could be removed, making the neutron economy of the Aqueous Homogeneous Reactor so efficient that it could advantageously breed thorium into U-233. ORNL reactor designers went on to design an Aqueous Homogeneous Reactor with a thorium-containing blanket surrounding a core of heavy water with a dissolved uranium compound. Before his death Ray C. Briant suggested to Weinberg that a Molten Salt Reactor with a thorium blanket, similar to that designed for the Aqueous Homogeneous Reactor, would have superior performance. Thus Briant can be considered the father of the Liquid Fluoride Thorium Reactor, though in many respects the LFTR had many fathers at ORNL.
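The breeding these designs aimed at runs through a short chain: Th-232 captures a neutron in the blanket to become Th-233, which beta-decays (half-life about 22 minutes) to Pa-233, which beta-decays (half-life about 27 days) to fissile U-233. As a quick sketch, the standard decay-law arithmetic below, using the rounded Pa-233 half-life, shows why the protactinium step is the slow one: bred material takes months to fully become U-233.

```python
# Sketch of the Th-232 -> U-233 breeding chain:
#   Th-232 + n -> Th-233  (beta decay, half-life ~22 minutes)
#   Th-233    -> Pa-233   (beta decay, half-life ~27 days)
#   Pa-233    -> U-233    (the fissile product)
# Half-lives are standard nuclear-data values, rounded.

PA233_HALF_LIFE_DAYS = 26.97

def fraction_converted(days):
    """Fraction of an initial Pa-233 inventory that has decayed to U-233."""
    return 1.0 - 0.5 ** (days / PA233_HALF_LIFE_DAYS)

for d in (27, 54, 108):
    print(f"after {d:3d} days: {fraction_converted(d):.0%} of the Pa-233 is U-233")
```

One half-life converts half the inventory, two convert three quarters, four about 94 percent, which is one reason fluid-fuel designs that can move bred material out of the neutron flux while it decays are attractive.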
Compared to the Light Water Reactor, the MSR/LFTR had many safety features, the most outstanding of which was its strongly negative temperature coefficient of reactivity: the liquid salt fuel mixture of the LFTR responds to rising heat by slowing and then stopping the chain reaction.
The liquid salt in the LFTR core expands as it heats. As it expands, less liquid salt remains in the core, and with it less fissionable fuel. As fissionable fuel leaves the core, the fission reaction rate slows. At maximum core heat, enough fissionable fuel leaves the core to bring the fissionable mass remaining below the amount needed to maintain criticality, and the chain reaction stops. Core salts retain heat, and heat is also replenished by the radioactive decay of fission products within the core.
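The mechanism just described can be caricatured in a few lines of code. Every number below, the expansion coefficient, reference temperature, and critical fuel fraction, is invented for the illustration; the point is only the shape of the feedback: hotter salt means less fuel in-core, and past some temperature the chain reaction can no longer be sustained.

```python
# Toy illustration of the negative temperature coefficient of reactivity.
# All coefficients are hypothetical; a real reactor-physics model is far
# more involved.

ALPHA = 2.5e-4   # hypothetical volumetric expansion coefficient of the salt, 1/K
T_REF = 900.0    # hypothetical reference salt temperature, K

def fuel_fraction_in_core(temp_k):
    """Fraction of the reference fissile inventory remaining in the core.

    As the salt heats it expands, pushing fuel-bearing salt out of the
    core region, so the in-core fissile fraction falls with temperature.
    """
    return 1.0 / (1.0 + ALPHA * (temp_k - T_REF))

def reactor_is_critical(temp_k, critical_fraction=0.98):
    """The chain reaction is sustained only while enough fuel stays in-core."""
    return fuel_fraction_in_core(temp_k) >= critical_fraction

for t in (900, 950, 1000, 1050):
    print(f"{t} K: fuel fraction {fuel_fraction_in_core(t):.4f}, "
          f"critical: {reactor_is_critical(t)}")
```

With these invented numbers the reactor is critical at the reference temperature but drops subcritical by about 1000 K, mimicking the self-limiting behavior described above.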
What first attracted Ed Bettis and his associates to the Molten Salt Reactor idea was the way it would respond to a pilot’s throttle use.
When the pilot demands more power for his jet engines, heat is drawn out of the reactor core and transferred into the jet engine, where it produces thrust. Heat from the LFTR core can likewise power closed-cycle gas turbines in electrical generating systems. As core temperature decreases, the core salts shrink, more salt enters the core, and the fission reaction rate increases. The greater the demand for power from a jet engine or a generator, the greater the amount of heat drawn from the core, and as a consequence the greater the reaction rate within the core. The limit on power output is set by the heat removal rate, which in turn depends on the limitations of the turbine generating system.
The reaction rate slows and then stops as heat withdrawal decreases or as temperature increases: Molten Salt Reactors basically control themselves. While Pressurized Water Reactors require constant operator monitoring and operator input into their control systems, MSRs, including the LFTR, do not. The potential instability of the PWR is simply not present in the LFTR.
Compared to the PWR, the LFTR has superior peak-load reserve and load-following capacities. Since a LFTR's salts are at maximum heat when the reactor is on standby, the LFTR can produce maximum power as quickly as its turbines can come up to full generating speed under load. Thus the LFTR can not only load-follow but can also serve as peak demand reserve.
In the case of decreased load demand, less heat is drawn from the core, and the fission reaction rate slows. Thus the same feature that gives the LFTR superior safety over the Pressurized Water Reactor also gives it superior flexibility in generating electricity.
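The load-following behavior of the last few paragraphs amounts to a simple negative-feedback loop, and can be sketched as a toy simulation. All coefficients below (maximum power, temperature coefficient, heat capacity) are invented for illustration; the point is that fission power settles at whatever the turbine load demands, with no external controller in the loop.

```python
# Toy negative-feedback model of LFTR load following: the turbine withdraws
# heat (the load), the salt temperature adjusts, and fission power follows
# the temperature. All numbers are invented for the sketch.

def step(temp, load_mw, dt=1.0):
    """Advance the toy model one time step; returns (new_temp, fission_mw)."""
    P_MAX, K, T_REF, HEAT_CAP = 250.0, 0.01, 900.0, 50.0
    # Fission power falls linearly as the salt heats above the reference
    # temperature (the negative temperature coefficient).
    fission = max(0.0, P_MAX * (1.0 - K * (temp - T_REF)))
    # Salt temperature integrates the imbalance between fission heat
    # produced and load heat withdrawn.
    temp += (fission - load_mw) * dt / HEAT_CAP
    return temp, fission

temp = 900.0
for load in (100.0, 200.0):              # a step change in turbine demand
    for _ in range(500):
        temp, fission = step(temp, load)
    print(f"load {load:.0f} MW -> fission power settles near {fission:.1f} MW")
```

When the load steps from 100 MW to 200 MW, extra heat withdrawal cools the salt, the denser salt raises the fission rate, and output converges to the new demand, the self-regulation the text describes.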
Warren Heath yesterday brought my attention to a couple of documents from National Wind Watch. One document was a statement prepared for the Environment Court of New Zealand by Bryan William Leyland, a consulting mechanical and electrical engineer extremely well qualified to evaluate costs related to electrical generating systems, including wind generating systems. Leyland had been retained by a party to a matter before the court to offer his views on the likely cost of a wind generation project in New Zealand. He had been involved in wind projects as long ago as 1980, and had consulted on a wide variety of electrical generating projects, as well as serving as a consultant to the New Zealand government on an electricity shortage.