If your only focus for 2050 is man-made global warming, maybe you should consider expanding your horizons?
Commentary by Jon Morrow
Edited by Michael Goldstein
I preface this article with my personal belief that although man does have the ability to affect the earth’s climate, I do not believe we are doing so to any significant extent right now. I can, however, entertain a man-made global warming hypothesis, conceived in accordance with certain rules, that suggests I may be wrong.
In explanations of natural phenomena, I am a big believer in science, not in consensus. That a consensus of all the King’s men concludes the earth is flat is no reason whatsoever for me to believe the earth is flat. Show me a man-made global warming hypothesis based upon reproducible scientific evidence that supports it in a meaningful way, and we can discuss and debate it.
Let’s assume that man-made global warming exists, that it is a very big problem, and that we are tasked with not only making policy to avoid this precipitous crisis, but also the pollution crisis brought on by rapidly expanding populations, and a water and food shortage crisis as well.
We have to get from point A (an insupportable crisis of expanding air pollution, expanding man-made CO2 production, and expanding use of precious potable water reserves and diminishing food supplies) to point B (our salvation technology).
Let’s assume that we need to find our salvation technology before 2050.
Let’s assume that as an economist, I accept as accurate the accounting behind the wind industry’s published numbers that show that wind is able to produce electricity as cheaply as natural gas does. Let’s also assume that in 20 years (in 2034) solar technology will be able to produce electricity as cheaply as wind.
To make an accurate and fair comparison of energy sources, we must look at the shortcomings of wind and solar together with all other relevant energy technologies. The problems we identify with wind and solar are that they need storage capacity to work well with other technologies employed on the grid, and to work with our modern lifestyle. This is because they cannot produce electricity when the wind doesn’t blow and the sun doesn’t shine, and there is no present way of storing the electricity produced when they do.
The storage issue for wind and solar is not new; it is a problem the federal government has been trying to lick since the 1970s. In more than 40 years, no economical grid-level storage system has been developed. Many of the grid-level storage systems that have been developed are not distributed in nature, i.e., for reasons of economics they must be centralized.
One of wind and solar’s strongest selling points, however, has been that they are distributed in nature (power generators being close to power users).
Grid-level storage for renewables must be centralized in order for storage costs to be economical. So a generation system that is physically distributed (such as wind turbines and solar panels) would feed a centralized storage system that would in turn distribute electricity more evenly to the grid, making the power source dispatchable, yet, still, in many cases, not dependable. This winds up being no different from the large centralized fossil fuel electrical generation systems we have today, with the added disadvantage of energy lost in additional electrical transmission.
Batteries can store only a limited amount of energy, and they can only store that energy for a limited time. If Mother Nature throws us a weather peak or a weather valley outside of normal operating parameters, energy will become unreliable, even with battery backup (grid-level storage).
Wind and solar either have to have economical storage capacity or must be collocated with a complementary, reliable, on-demand power source.
Because the wind and the sun can change quickly, if we are to make wind and solar dispatchable and dependable, they must be backed up with an energy technology that can ramp power up and down quickly and do so very efficiently, without producing emissions. Today, that backup power comes in the form of “natural gas peaker plants.”
While very clean, peaker plants still produce more carbon dioxide per kilowatt-hour than natural gas base-load plants. Many in the energy community argue that in several places where wind and solar are being utilized, their complementary peaker power source may be generating more CO2 than if just a base-load natural gas power plant had been utilized. The most efficient solution (least amount of CO2 produced) would be to use solely a base-load natural gas plant in lieu of the entire combination of wind, solar, and peaker natural gas plants.
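As a rough back-of-the-envelope illustration of this peaker-versus-base-load argument, the comparison can be sketched in a few lines. The emission intensities below are assumed ballpark figures for illustration only, not measurements from any particular grid:

```python
# Sketch of the wind + peaker vs. base-load CO2 comparison.
# Emission intensities are ASSUMED ballpark figures (kg CO2 per kWh),
# not data from any specific plant or grid.
PEAKER_KG_PER_KWH = 0.55      # simple-cycle gas peaker (assumed)
BASELOAD_KG_PER_KWH = 0.37    # combined-cycle gas base-load plant (assumed)

def blended_intensity(wind_fraction):
    """CO2 intensity of wind backed by gas peakers.

    wind_fraction: share of delivered energy actually supplied by wind;
    the remainder is supplied by the (emitting) peaker plant.
    """
    return (1.0 - wind_fraction) * PEAKER_KG_PER_KWH

# Break-even wind share: below this, wind + peaker emits MORE CO2
# than simply running the base-load plant alone.
break_even = 1.0 - BASELOAD_KG_PER_KWH / PEAKER_KG_PER_KWH
print(f"break-even wind share: {break_even:.0%}")
```

Under these assumed numbers the wind fleet must actually deliver roughly a third of the energy before the combination beats a base-load plant on emissions; the real answer depends entirely on local capacity factors and plant efficiencies.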
In places where it is consistently windy and consistently sunny, wind and solar paired with a complementary power source can make sense. These are at best very small areas to work with. While such areas may hold a great potential amount of energy, harnessing and transmitting this energy dependably, consistently, economically, and safely has been a challenge.
In identifying these myriad problems we have not even gotten to how to eliminate other (non-energy-related) forms of air, land, and water pollution, or how to produce more water and food.
Thorium is a plentiful element, and we have many thousands of years of supply readily available if it is used as a power source.
Although there are engineering challenges in developing thorium MSRs (Molten Salt Reactors), the initial research and experimentation was completed in the late 1960s and early 1970s. A 10-year development-to-commercialization time frame is not unreasonable (though the current regulatory environment could push that to two to four times that length).
A LFTR (Liquid Fluoride Thorium Reactor) is a MSR which uses thorium as fuel. It has load-following capabilities and produces no CO2. It can potentially replace both base-load plants and technologies that ramp power up and down very quickly. This reactor produces no long-lived nuclear waste, is inherently safe, and can be configured to consume present-day long-lived nuclear waste. So, a LFTR solves the safety problem, the pollution problem, and any global warming produced by burning fossil fuels in electrical generation plants.
Because MSRs can be configured to burn both uranium and thorium, and uranium and thorium are both found in coal ash (the by-product left over after coal is burned in a power plant), reclaiming these elements from coal ash could very well be profitable. Because large amounts of coal ash would be handled and processed, it could also conceivably be profitable to harvest many more valuable materials from coal ash, such as aluminum, titanium, vanadium, and iron. This would greatly reduce the amount of landfill pollution that coal ash has caused over the past 100-plus years. Such reclamation requires a lot of heat, and LFTRs will produce prodigious amounts of process heat. They will be ideal for this purpose.
MSRs are expected to produce electricity at half the cost of the cheapest fossil fuel. This makes recycling technologies like plasma gasification of municipal waste and sewage viable, which would allow us to start eliminating landfills and stop flushing treated sewage into our lakes, streams, and oceans. This process can produce fertilizers that can be used in food production.
MSRs could power, on a very large scale, seawater desalination plants, which could eliminate any potential water shortage problems for the populations of 2050 and beyond. This would also provide potable water for use in food production, for agricultural irrigation and livestock.
MSRs will produce medical isotopes for treating cancer and for medical imaging.
Because of the high heat of MSRs, it is quite probable that synthetic carbon-neutral gasoline can be produced from seawater and waste.
We must overcome many obstacles by the year 2050, not just global warming. Successful development of economical grid level storage for wind and solar, even if it proves to be possible, solves only one problem. We will still have water and food issues to deal with.
Development of the MSR is a solution that would be embraced by the marketplace and would not need to be mandated. It can be rapidly deployed once proven, and it will be a comprehensive solution to many problems.
If you were going to rely on a technology to be our salvation, would you bet on renewables? Or would you bet on MSRs?
From time to time there are conspiracy theorists and opportunists who try to attach the incredible to the credible. If not addressed, the incredible can detract from a very credible message and from a very credible technology. Recently, there has been a lot of conjecture about the possibility of the Thorium Powered Car and Thorium Plasma batteries.
While the thorium-powered Cadillac car is wickedly cool looking…
The concept of a thorium-powered car has many, many problems.
Here is a very good video that thoroughly examines the problems with the thorium car. (Warning: adult language.)
The thorium-powered Cadillac concept car is not even the first nuclear-powered concept car by a Big Three auto manufacturer.
Thorium Plasma Batteries have the dubious distinction that everyone supposedly involved with the project is conveniently dead. No plans can be found for these batteries, and none of the assertions made by the conspiracy theorists can be verified. From what I can tell, the proof used in these articles consists of interrelated links between conspiracy-theory-driven paranoia websites. I can find no credible websites on this supposed technology, and many people are being duped into believing it is actually real.
Of course the Thorium Plasma battery is not a new story; it is a spin-off of the 1990s story of the water-powered car and of the 100-mpg carburetor. I know something about this conspiratorial hoax, as it originated in Ohio and was concocted by anti-capitalist-leaning groups that want Americans to hate free market capitalism. These groups reinforce plausible conspiracy theories that demonize big business.
Stanley Meyer of Columbus, Ohio, and his water fuel cell were proved fraudulent (http://en.wikipedia.org/wiki/Stanley_Meyer’s_water_fuel_cell), and he died of a brain aneurysm in 1998.
Supposedly, we can all rest assured that it was those big evil oil companies that squashed Stanley Meyer, because here is video proof that his car works…
Yet, I do not see any cars filling up on water as of late.
Even this is a rehashing of the conspiracy-laden mythology surrounding those who supposedly conspired against Tesla.
And here is yet another Ohio-based water-to-fuel invention that recently was all the rage.
And of course we all hope that Mario is still alive! (Caution: Very Funny!)
Why is there such a breadth and scope to these conspiracies? They start off as great entertainment and morph their way into reality through online forums, books, and movies. But they also gain their way into our culture in more malevolent ways. The Nazis of WWII and the Soviet Union started many disinformation campaigns designed to divide Americans and make our culture weaker. Operation INFEKTION was just one such campaign, intended to convince the American public that AIDS was really created by the American government. Many more such campaigns are still underway to this very day.
One of the very first JFK conspiracy books was published in 1964 by Joachim Joesten. The book, titled “Oswald: Assassin or Fall Guy,” came out prior to the Warren Report. In it, it was claimed that JFK was killed by the CIA and that Oswald was not a lone gunman. The book has been used by many subsequent conspiracy theorists, including Oliver Stone in his movie JFK, to support their views. In fact, it is still parroted to this day.
Only decades later, with the fall of the Soviet Union, was it proven that Joesten was a paid KGB agent and the publisher a KGB front. The purpose of this conspiracy theory was once again to discredit America and the CIA and to sow doubt and fear in the populace. It was quite a successful operation, judging from the thousands of books and articles that still emulate the original KGB message.
Marching on Monsanto
While many people may have legitimate arguments with the Monsanto corporation, many of the most virulent strains of hate fostered about this company originate with WWII Nazi propaganda and Soviet Cold War propaganda.
Monsanto was heavily involved in the Manhattan Project that allowed Allied forces to win the day over the Axis forces. This involvement initially took the form of the Dayton Project, based in Dayton, Ohio, which was a logistical and recruitment effort for the Manhattan Project. Charles Allen Thomas of Monsanto was credited with recruiting many key Jewish nuclear physicists, chemists, and engineers who helped build the first nuclear bomb. Obviously, this did not garner any good will with the Nazi propaganda machine, and the demonization was easily carried on by the Soviet Union in the Cold War, when the chemical Agent Orange was used against Soviet-trained forces.
Here is a very good article on how Soviet propaganda is being used in Ukraine to divide its people.
Today, even China is getting in on the demonization of the free market system (in Chinese)…
While I admit our government has many problems, and big businesses, as well as special interests, have undue influence with our federal government, we should not be so eager to believe everything on the internet.
Ohio has many testing facilities that could be used in developing a LFTR:
Ohio is home to two nuclear power plants: Davis-Besse and the Perry Nuclear Power Plant. Construction of Perry’s second unit was suspended, and it could serve as a perfect site for a test or research reactor. (See picture of only one cooling tower in operation.)
NASA Plum Brook Station in Sandusky, Ohio has 6,400 acres on which to develop new technologies and was once home to a nuclear reactor. It could also serve as a site for a LFTR production assembly line as well as a power generation test facility.
The most recent tour of NASA Plum Brook Station shows facilities that could be adapted for testing research and test reactor components.
Older tour of NASA Plum Brook Station in Sandusky, Ohio
NASA Glenn Research Center in Cleveland, Ohio (formerly NASA Lewis Research Center) has a high temperature materials laboratory.
Dublin, Ohio also has a high temperature materials testing laboratory.
Wright-Patterson AFB hosts Air Force research laboratories, a multitude of which would be useful in the development of a LFTR.
Piketon, Ohio’s United States Enrichment Corporation Gaseous Diffusion Plant is America’s last American-owned enrichment facility and home to America’s attempt to develop modern enrichment technologies.
Babcock &amp; Wilcox has a location in Barberton (here).
Ohio is home to Battelle, the world’s largest non-profit developer of technology.
Ohio State University is home to a nuclear reactor and has a very active nuclear program.
Commentary by Jon Morrow
Let me preface this commentary by saying that last night I drove a friend’s Tesla Model S, and it is an extremely cool car. A few months ago I drove a friend’s Chevy Volt; again, an extremely cool car. Their fit, finish, and function are at, or above, those of their fossil fuel counterparts, so it is unfair to say that I hate these cars simply because they are electric. If I could afford one, I most likely would have one. If I were comparing these cars to cell phones, the Tesla would be an Apple iPhone in a world of flip-phone cars.
A base model Volt costs $34,185 (minus the $7,500 tax credit) and a base model Chevy Cruze costs $17,520. The Volt claims to get an average of 138 mpg, so if you drive 10,000 miles per year for 5 years you are going to use about 360 gallons of gasoline. At $3.60 per gallon, that equates to roughly $1,300 of gasoline.
The Chevy Cruze gets 27 mpg city and 46 mpg highway. If we assume all city driving for 10,000 miles per year for 5 years, you will consume 1,851 gallons of gasoline. At $3.60 per gallon, that equates to $6,664 of gasoline.
The difference between the prices of the cars is $16,665, while the difference in gasoline cost is only $5,364. The Chevy Volt only starts to make economic sense if you travel on the order of more than 30,000 miles per year, and most Americans travel between 10,000 and 20,000 miles per year. Of course, if you are an over-the-road salesman who travels vast distances, the Chevy Volt is a great choice. Add in the $7,500 electric car tax incentive and you begin to get close to making electric cars look attractive to the average American driver, but not quite.
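The arithmetic above can be reproduced, and the break-even mileage derived, in a few lines. All inputs come straight from this article (5-year horizon, gas at $3.60 per gallon, the Volt’s claimed 138 mpg average against the Cruze’s 27 mpg city figure):

```python
# Reproducing the Volt vs. Cruze comparison from the article and
# solving for the annual mileage at which the Volt's price premium
# is paid back by fuel savings over 5 years.
volt_price, cruze_price = 34185.0, 17520.0
volt_mpg, cruze_mpg = 138.0, 27.0      # Volt claimed average; Cruze city
gas_price, years = 3.60, 5

def five_year_fuel_cost(annual_miles, mpg):
    """Total gasoline cost over the 5-year horizon."""
    return annual_miles * years / mpg * gas_price

premium = volt_price - cruze_price     # $16,665 price difference

def savings(annual_miles):
    """Fuel-cost savings of the Volt over the Cruze across 5 years."""
    return (five_year_fuel_cost(annual_miles, cruze_mpg)
            - five_year_fuel_cost(annual_miles, volt_mpg))

# Break-even annual mileage: premium / (fuel savings per mile driven).
per_mile_savings = gas_price * years * (1 / cruze_mpg - 1 / volt_mpg)
break_even_miles = premium / per_mile_savings
print(f"savings at 10,000 mi/yr: ${savings(10000):,.0f}")
print(f"break-even mileage: ~{break_even_miles:,.0f} miles/year")
```

The break-even comes out at roughly 31,000 miles per year, matching the article’s “on the order of more than 30,000 miles per year” (the tax credit, resale value, and electricity costs are deliberately left out, as they are in the article’s own comparison).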
The Tesla Model S starts at $69,600 and is an all-electric car. It has a range of about 300 miles on a full charge. It is not fair to compare a Tesla Model S to a Chevy Volt, because it is just so much more of a luxury car. The Cadillac ELR is sort of the luxury version of the Volt and has a base price of $75,000. The Cadillac and Tesla are in a price range that is outside of the average American’s pocketbook.
The less exciting and much less cool Ford Focus Electric and Nissan Leaf are all-electric cars in a similar price range to the Volt ($28,000 to $34,000). These cars require a large up-front commitment (like the Volt) in the purchase price, and in many places in America where coal-fired power plants have been closed early because of EPA regulations, and electricity costs have doubled to pay for the early closure, an all-electric car is not looking too seductive right now. Couple the high cost of electricity with a short range, and I would venture to say these cars are either “feel-good” cars for the environmentally conscious who can afford them, or “bragging rights” cars for those pursuing a tech-nirvana lifestyle.
Many environmentalist groups now see the electric car as the answer to grid-level storage of electricity, a way to account for the negative effects of the variability of renewables. The theory is that if we all drive all-electric cars, the batteries in the cars will act as grid-level storage and thereby allow greater use of renewables. The problem is that unless gasoline shoots up in price and renewables plummet in cost, the electric car just does not make economic sense for the average American (which is not to say it does not make “cool sense”).
The Nissan Leaf and the Ford Focus Electric will more than likely never be adopted by high-mileage drivers because of their short range and recharge times, and they are too expensive for low-mileage drivers given the relatively low price of gasoline. This is probably why the Chevy Volt is one of the best-selling hybrids (not accounting for government sales).
There is a theory out there that all environmentalists should oppose base-load power plants. Base-load power is normally (and correctly) thought of as coal and nuclear power plants. The theory is that we should replace base-load power plants with natural gas peaker plants. The way the theory is laid out sounds reasonable and plausible: by placing many small natural gas peaker plants and renewable plants on the grid, and coupling that with the battery storage in all-electric cars, we would have a cheap, clean, and well-distributed electrical generation system. The problem with this theory, besides gasoline still being too cheap, electricity derived from renewables being too expensive, battery-powered cars having too short a range, and electric cars being too expensive, is this: natural gas peaker plants produce much more CO2 than their natural gas base-load counterparts or base-load nuclear power plants. If the whole idea is producing less CO2, the peaker plant theory just doesn’t work.
An example of the no base-load power theory
An Example of Using the Electric Car for Grid Level Storage
A LFTR (Liquid Fluoride Thorium Reactor) has load-following capabilities able to accommodate the peaks and valleys that renewable energy creates, while producing power at base-load cost and producing no CO2. In this sense a LFTR is renewable energy’s best friend, if it does not put renewables out of business through economic competition.
If a battery technology comes along that can store massively more energy, which does look like it is on the horizon, then most likely it would also be used for grid-level storage. The economics of a LFTR can make the economics of the electric car look much better, because it can radically reduce the cost of electricity. The lower electricity costs are, the more attractive electric cars become to the average American.
Some new Battery Technologies
At the higher temperatures a LFTR operates at, it is possible to produce carbon-neutral fuels from seawater. Since carbon-neutral hydrocarbon fuel has many times the energy density of batteries, why use battery-powered electric cars at all? New Navy technology threatens the entire electric car industry. The thought of combining this seawater-to-fuel technology with LFTR technology might just make those who own stock in an electric car company shake in their boots.
Commentary by Jon Morrow
Cancer and HIV are among the most important health problems of our day, and they are of growing importance. The treatments available today, even though often effective, cannot permanently cure the majority of cancers. This is typically true for cancers that have spread around the body from the initial tumor site or are blood-borne cancers.
Radiation therapy uses ionizing radiation to kill cancer cells and shrink tumors by damaging the cells’ DNA, thereby stopping these cells from continuing to grow and divide. The most common way of exposing cancer patients to radiation is through external radiation therapy (such as the Axesse, Cyberknife, Gamma Knife, Novalis, Primatom, Synergy, X-Knife, TomoTherapy, Trilogy, Truebeam, and Proton Therapy).
With this approach, delivering a beam of high-energy x-rays or protons to the main tumor irradiates only a limited area of the body.
Cutting-edge technologies such as Proton Therapy are performing miracles where previously patients were given death sentences. This is done by delivering radiation very accurately and precisely.
Each of these cutting-edge technologies attempts to deliver radiation with greater precision to the cancer cells only, trying to leave healthy tissue unharmed. The greater the precision, the less healthy tissue is harmed, the shorter the recovery, and the less sick a patient becomes from the treatment.
Targeted radionuclide therapy is a new kind of cancer treatment that aims to be even more precise. It uses radionuclides (radioisotopes) as smart bombs in waging the war on cancer. Targeted radionuclide therapy combines new developments in molecular biology and in radionuclides to create new medical applications. Due to their decay characteristics, alpha-emitting radionuclides are particularly promising in selectively destroying just cancer cells while leaving healthy tissue relatively untouched. This has spawned an area of research known as TAT (Targeted Alpha Therapy). Some biomolecules, like monoclonal antibodies or specific peptides, can selectively target particular cancer cells; they will find these cells, even if spread around the body, and bind to them. If an alpha-emitting radionuclide is attached to such a tumor-specific carrier, the alpha particle produced during its radioactive decay can kill one or a few targeted cancer cells along its trajectory.
TAT is a bit like chemotherapy, because it is a systemic treatment; however, it uses a monoclonal antibody labeled with a radionuclide to deliver a toxic level of radiation to diseased sites. A unique feature of radionuclides is that they can exert a “bystander” or “crossfire” effect, potentially destroying adjacent tumor cells even if they lack the specific tumor-associated antigen or receptor. In addition, a systemically administered targeted radiotherapeutic that combines the specificity of cancer cell targeting with the known antitumor effects of ionizing radiation has the potential to simultaneously eliminate both a primary tumor site and cancer that has spread throughout the body, including malignant cell populations undetectable by diagnostic imaging.
In layman’s terms, alpha radionuclides are very powerful yet have a very short kill radius (only a few cell diameters). Current beta radionuclides used in some treatments have a large kill radius and tend to harm quite a bit of healthy tissue on their way to knocking out the cancer. Beta radionuclides tend to make patients sick and weak because of the healthy tissue they kill. If alpha radionuclides can be successfully delivered to the cancer cells only, then healthy tissue will remain unaffected.
The actinium-225 radionuclide is a particularly promising smart bomb in the treatment of cancer because of its decay rate, and an increasing number of different cancer types are under study in pre-clinical and clinical approaches, including in vitro studies, animal studies, and phase I/II clinical trials with alpha radionuclides. In addition, recent studies have demonstrated the applicability of targeted alpha therapy to the treatment of fungal, bacterial, and viral infections.
European and American researchers believe that radiolabeled antibodies (antibodies with radionuclides attached) might eradicate immunodeficiency-virus-infected cells from a patient’s body. Scientists have combined antibodies with radioactive payloads that deliver lethal doses of ionizing radiation to selectively target and destroy HIV-infected cells. This hypothesis has been successfully tested in a joint project between the Albert Einstein College of Medicine in New York and the Joint Research Center (JRC) Institute for Transuranium Elements (ITU), as reported in the Public Library of Science. These results provide first support for the concept that antibodies labeled with the radionuclide bismuth-213 (a daughter isotope of actinium-225) can be used for treatment of HIV. Pre-clinical development testing the efficacy and safety of this novel therapy approach is being undertaken in preparation for a Phase I clinical trial in HIV-infected patients.
Because this method of treatment allows these radionuclide “smart bombs” to be injected into the body to seek out and find the disease, therein lies the hope of a permanent cure. The theory, in essence, is that they will eradicate all the disease no matter where it is in the body.
Alpha particles are especially well suited for targeting micrometastatic disease and single tumor cells, such as leukemia and other blood-borne diseases. The bismuth-213 radionuclide is of special interest in treating leukemia because of its unique properties, which include a short 45-minute half-life and high-energy (8.4 MeV) alpha-particle emission. Its availability from the actinium-225/bismuth-213 generator system makes this radionuclide particularly well suited for medical use. Actinium-225 is formed from the radioactive decay of radium-225, the decay product of thorium-229, which is obtained from the decay of uranium-233. The national depository of uranium-233 is at ORNL (Oak Ridge National Laboratory), and both INL (Idaho National Laboratory) and ORNL have developed effective methods for obtaining thorium-229 (half-life 7,340 years) as feed material to routinely obtain actinium-225.
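The half-lives quoted above lend themselves to a quick back-of-the-envelope check. This is a minimal sketch of standard exponential decay; the only inputs are the 45-minute and 7340-year half-lives mentioned in this article:

```python
import math

# Standard exponential decay: N(t) = N0 * exp(-ln(2) * t / half_life).
# Half-lives are the figures quoted in the article:
#   bismuth-213: 45 minutes; thorium-229: 7340 years.
def remaining_fraction(elapsed, half_life):
    """Fraction of a radionuclide remaining after `elapsed` time
    (same units as `half_life`)."""
    return math.exp(-math.log(2) * elapsed / half_life)

# Bi-213's 45-minute half-life means a dose drawn from the
# actinium-225/bismuth-213 generator must be used quickly:
for hours in (1, 2, 3):
    frac = remaining_fraction(hours * 60, 45)
    print(f"after {hours} h: {frac:.1%} of the Bi-213 remains")

# Th-229's 7340-year half-life is why a stockpile stored for
# about 30 years is still almost entirely intact:
print(f"Th-229 left after 30 years: {remaining_fraction(30, 7340):.2%}")
```

The two timescales make the supply chain clear: the thorium-229 feed material is effectively permanent on human timescales, while the bismuth-213 doses it ultimately yields are perishable within hours.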
The majority of the available thorium-229 stock has been recovered from uranium-233, a nuclear waste material from experiments conducted at ORNL, and has been stored at ORNL for about 30 years. This stockpile has been able to produce about 1,000 doses per year for clinical trials.
The problem is that we are not producing any more uranium-233, and the federal government has had its eye on a half-billion-dollar program to down-blend this uranium-233 and essentially destroy this life-saving material.
Ironically, we could potentially have both the cure for cancer and HIV, but the government may destroy the cure due to special interests and anti-nuclear activists.
If we are able to prevent the government from destroying the potential cure for cancer and HIV, how are we to produce more of the actinium-225 isotope?
A LFTR (Liquid Fluoride Thorium Reactor), which is a type of MSR (Molten Salt Reactor), can produce actinium-225 in the normal course of operation, and a fleet of LFTRs can produce enough actinium-225 to meet the needs of the medical community if a cure for these diseases is developed. A LFTR can produce carbon-free electricity at half the price of coal, and study of an operational LFTR will most likely give us the technical expertise to create a MSR Actinide Burner, a type of reactor that could reduce our unspent nuclear fuel stockpile to a mere fraction of what it is today and reduce the need to store this waste from hundreds of thousands of years to just 300 years.
Students celebrate winning Oberlin’s annual Dorm Energy Competition. (Photographer: College Relations)
Commentary by Jon Morrow
The world is filled with hate, and if you cannot be respectful of other people’s views you help foster more hatred and anger. This is especially true when you talk about our energy future.
There are many people who believe in man-made “Global Warming” and “Climate Change” and believe there is only one way to avert these crises: renewable energy technology. No other technology can be discussed as a solution, because if you do, then you are a pawn for the big money behind that respective technology.
Can we all grow-up and have a rational discussion on energy without calling each other names?
Oberlin, Ohio is home to Oberlin College, a small school attended by some of the best and brightest students in America. While it is small, it ranks right up there with Harvard, Princeton, and Yale in cost and in quality of education. It is also known as having one of the most green-conscious and liberal student bodies in America.
Oberlin College is a perfect example of where, if paranoia and name-calling are put aside, both the ultra-right and the ultra-left can come together on an issue like thorium-based MSRs (Molten Salt Reactors). I know; I have experienced rational debate on campus first hand.
A popular restaurant with college students is the Feve, and when in town, I like to go to the upstairs bar and strike up a conversation on energy with some of the students or faculty. This is normally very easy to do and students love to share their views.
I respect the position of the people that believe in man-made Global Warming and the renewables solution, I just ask if they have thought it all the way through (I am a man-made global warming skeptic and I do not believe that wind and solar are a solution to our environmental problems). I am very careful in my discussion, as I know the words that will shut them down and cause them to close their minds to any further debate. Words like “intermittency” and “on-demand” are not things a person with a vastly different viewpoint wants to hear.
I normally start the conversation as such:
Because our current electrical grid has to work with other technologies, natural gas peaker plants have grown by leaps and bounds with the addition of solar and wind. These peaker plants are cleaner than coal but less clean than base-load natural gas plants. The natural gas peaker plants act as a complement to wind and solar to stabilize the grid. The question I ask is, “Do the peaker natural gas plants put out more CO2 running in complement with the wind turbines they support than a natural gas base-load and/or a nuclear power plant does to create the same amount of energy?”
Many students cannot answer this question with any certainty.
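The question can be framed as simple arithmetic. The sketch below is a back-of-envelope comparison, and every number in it (wind's delivered-energy share and the per-kWh emission factors for peaker and baseload gas plants) is an illustrative assumption, not measured data:

```python
# Back-of-envelope CO2 comparison: wind backed by gas peakers vs. baseload.
# All figures below are illustrative assumptions, not measured grid data.

WIND_SHARE = 0.35          # assumed fraction of energy actually delivered by wind
PEAKER_EF = 0.55           # kg CO2 per kWh, assumed open-cycle peaker plant
BASELOAD_GAS_EF = 0.40     # kg CO2 per kWh, assumed combined-cycle baseload plant
NUCLEAR_EF = 0.0           # direct emissions from nuclear generation

def mix_emissions(wind_share, peaker_ef):
    """CO2 intensity (kg/kWh) of a mix where peakers cover what wind does not."""
    return (1.0 - wind_share) * peaker_ef

wind_plus_peakers = mix_emissions(WIND_SHARE, PEAKER_EF)
print(f"wind + peakers : {wind_plus_peakers:.3f} kg CO2/kWh")
print(f"baseload gas   : {BASELOAD_GAS_EF:.3f} kg CO2/kWh")
print(f"nuclear        : {NUCLEAR_EF:.3f} kg CO2/kWh")
```

Under these assumed numbers the wind-plus-peaker mix comes out close to a baseload gas plant, which is exactly why the question has no obvious answer; change the assumed capacity factor or emission factors and the ranking can flip.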
I then start to talk about the benefits of MSRs and LFTRs, and there is a lot of pushback.
At Oberlin College there are many students who are anti-fracking advocates, so while natural gas burns cleaner, they do not necessarily like natural gas. Solutions they do like are natural gas made in bio-digesters and landfill gas used to support wind and solar. More often than not, when nuclear is mentioned, you get looked at as if you have a third eye. After much discussion I challenge them to watch three documentaries: "Cool It!" by Bjorn Lomborg, an environmentalist and a big believer in global warming; the unbiased review of all energy technologies by Dr. Scott Tinker in the "Switch Energy Project"; and finally "Pandora's Promise" by director Robert Stone. After watching these three films, it has been my experience that even the most vitriolic anti-nuclear opponents warm to nuclear energy.
Getting a college student to watch a documentary in their free time is hard, but the "Cool It!" documentary draws them in and, dare I say, helps to form a bridge between the left and the right. Many times, if you get someone to watch "Cool It!", they will watch the other two documentaries. "Cool It!" is available on iTunes, and "Pandora's Promise" is available on iTunes and on Netflix.
Kirk Sorensen's TED talks and Dr. Robert Hargraves' "Aim High" video seal the deal and get them so enamored with thorium that students will call me to tell me they have discovered yet another of Gordon McDowell's videos.
Now, when I go to the Feve, a lot more people know about thorium energy and MSRs, not from me, but from other student advocates. They still believe in man-made global warming and climate change and that is okay with me. They also believe that wind and solar are part of the solution and that is okay with me. But now, instead of a vitriolic hatred for nuclear energy, they see it as having a dominant and substantial role in our future.
Although there is a very high worldwide demand for aluminum, as will be seen below, the American aluminum industry is hamstrung in its goal of meeting this demand by a series of outsized costs, including the cost of building a new plant; regulatory costs; and energy costs. Plant costs and regulatory costs must be addressed by the industry and by their federal, state, and local governments.
A steep reduction in energy costs, which presently constitute a very large portion of the cost of producing aluminum, can and will be addressed by the production and operation of the LFTR (Liquid Fluoride Thorium Reactor).
What would it cost today to build a typical aluminum reduction plant with an annual production capacity of 250,000 tons? Based on the most recently completed plants, an estimated $1.5 billion would be required.
Add to this the regulatory requirement that this new smelter provide its own generating facilities to provide the large amounts of electricity needed for processing the aluminum which it will produce, and that it meet governmental regulatory emissions goals. This will increase the installation cost by $300 million to $406 million. Now we are pushing the $2 billion mark for a new aluminum smelting plant. All of this assumes that a suitable site location can be found with the necessary support services, and that the site will be approved by all the relevant federal, state, and local regulatory agencies, and in a timely manner.
Today in the United States it would take several years to get the required permits and clearances; it is almost as hard to build an aluminum plant as a nuclear power plant. The construction of an aluminum plant would involve environmental impact studies and reports, and hearings before many regulatory agencies and local and national governments, with no guarantee that final approval would not be challenged by court appeals.
Such cumulative considerations, when combined with the present unavailability of needed energy at a competitive price, lend some credence to the often-heard statement that another aluminum smelter will not be built in the United States. These roadblocks must be overcome for American industry to take advantage of the huge worldwide demand for aluminum. Americans' jobs and America's prosperity are at stake.
Ormet Aluminum, America's 4th-largest producer, shut down in December of 2013
There is an enormous market for continued and increased production of aluminum in the United States. Like oil, the world cannot get enough aluminum. China and India’s reach from the third world to the first world has dramatically increased the demand for aluminum. Top market sectors for the industry are transportation, including automotive and aerospace, beverage cans and other packaging, building/construction, and the electrical industry.
Chinese demand, as measured by Chinese consumption of unwrought aluminum, grew almost every year during the 1995-2004 period, nearly doubling between 1995 and 1999, and subsequently more than doubling between 1999 and 2004. Over the full 10-year period, Chinese consumption rose nearly three-fold (up 4.0 million metric tons) to reach 5.9 million metric tons by 2004, equal to 20.1 percent of global consumption in that year. From 2004 to 2012 there was nearly a four-fold increase in aluminum consumption.
In contrast to developed countries where the transportation sector dominates, building and construction is the largest aluminum-consuming sector in China, a reflection of ongoing building construction and infrastructure development and significantly lower per capita automobile ownership. In addition, the share of Chinese aluminum consumption accounted for by electrical products and consumer durables exceeds that of many industrialized nations, a reflection of both the country’s growing export-oriented manufacturing sector and its rising domestic consumer markets.
China's relatively low per-capita consumption rate for unwrought aluminum, coupled with its expanding industrial activity and government housing programs, suggests that Chinese demand for aluminum will continue to grow, particularly in the construction and automotive sectors. An estimated 3.3 million apartments are being built every year in China, averaging approximately 240 million square meters of new housing each year.
Integrating lightweight aluminum into transport vehicles is one of the easiest ways to reduce the amount of fuel our vehicles consume. If conservation is a goal of America’s energy plan, then the use of lightweight and more affordable aluminum should be part of that plan.
In 1994, transportation first emerged as the largest market for aluminum, at about one-quarter of the market, with passenger cars accounting for the vast majority of the growth. Up until 2009, that trend largely continued. However, 2009 marked the worst year for auto sales since 1982 and, as such, transportation applications accounted for only 23.7 percent of all aluminum shipments in 2009 – 4.22 billion pounds in all.
The majority of this aluminum was used in automotive and light truck applications, as vehicle manufacturers continue to opt for lightweight aluminum solutions to improve fuel economy, reduce emissions, and enhance vehicle performance, for which aluminum is ideal. Accordingly, the aluminum content in passenger vehicles and light trucks has grown more than 40 percent and 68 percent, respectively, since 1991. Aluminum-intensive automobiles include the Audi A8 – with its aluminum body, aluminum front and rear axle, aluminum engine block, and numerous other aluminum components – and the Jaguar XK, with its aluminum body structure.
The Chinese automobile market is expected to surpass that of the United States in 2014, which will result in more aluminum usage. Ownership of private automobiles in China is expected to increase. According to the central government, vehicle sales in China may rise to 20 million units in 2014 (from 5.1 million in 2004). By 2010, Chinese aluminum usage in automobiles was anticipated to approach 5 million metric tons.
In the aerospace market, increased build rates for both military and civil aircraft have led to increased demand for aluminum. For example, between 1995 and 2004 U.S. production increased from 1,625 to 3,440 aircraft per year, despite a significant drop-off in production after the September 11 attacks. A new surge of aircraft orders in 2005 has sustained aerospace industry demand for aluminum through 2013, even in America’s slow growth economy (new orders are expected between 2014 through 2016 to replace aging aircraft).
Demand for aluminum packaging, consisting mostly of flat-rolled aluminum sheet for beverage cans and foils for food packaging, has dramatically increased in China and India as their standard of living increased. Adding to this, many new applications for aluminum beverage cans have been introduced, particularly for energy drinks and beer. Additionally, the packaging market reflects increasing trends for prepared and frozen meals and blister-packaging for pharmaceutical products.
In the construction market, leading uses of aluminum are for window frames, doors, siding and facades, closely followed by support framing for roofs and walls. The construction market has been particularly strong in the industrializing economies of China and India.
Aluminum has many advantages for electrical applications. It is lightweight, strong, corrosion resistant, and a highly efficient conductor (aluminum has twice the conductivity, per pound, of copper)—rendering it the material of choice for widespread applications such as transmitting power from generating stations to homes and businesses, and to make electronic boards for computers and handheld electric devices such as cell phones. Aluminum is also infinitely recyclable, making it a perfect fit for today’s environment and environmental priorities.
Aluminum is one of the few products and industries left in America that truly impacts every community in the country, either through physical plants and facilities, recycling, heavy industry, and/or consumption of consumer goods.
China is rapidly dominating aluminum markets, from securing mineral rights to many foreign countries’ bauxite formations, to building refining and smelting plants in China. All the while, America is not constructing any more aluminum manufacturing plants due to environmental regulations and electricity costs.
So, it is in America’s best interest to lower the manufacturing costs of aluminum, to produce aluminum economically and with a high degree of environmental responsibility for our nation’s greater economic security.
America exports much of its recycled aluminum. A discarded soda can has a 75% chance of ending up in China.
China’s impact on the global market has been significant in three principal ways.
First, China's need for alumina to fuel its expanding aluminum production has driven alumina prices to record highs in some places, narrowing profit margins for producers of unwrought aluminum and contributing to restructuring throughout the industry. In other places, because China's aluminum businesses have been operating at a loss and the world economy has been faltering, there is a surplus of aluminum, and American aluminum manufacturers are going out of business (because they are not subsidized like China's aluminum companies).
Second, anticipation of growth in China’s demand for aluminum has increased production capacity worldwide. New countries have emerged as leading players in world markets as firms look to streamline operations and take advantage of low-cost electric power.
Finally, China's role in the global marketplace has expanded significantly as state-owned Aluminum Corporation of China (Chalco) has emerged as one of the world's leading aluminum producers and China has moved from a net importer of aluminum to a net exporter.
Looking forward, it is uncertain whether Chinese aluminum output can keep pace with anticipated growth in domestic consumption from its rapidly urbanizing economy and that of India, and their expanding industrial production. If China does not receive help in producing more aluminum for the world market, aluminum prices could rise dramatically.
In 2005, 40 percent of China’s smelters were operating at a loss, and an estimated one-quarter of Chinese capacity was idle. Additionally, the Chinese aluminum industry’s rapid expansion risked overwhelming the world market, leading to sharp declines in the global market price for unwrought aluminum.
Inadequate electricity supply and the lack of high-quality bauxite constrained faster expansion of Chinese aluminum production. For example, inadequate and uncertain electric power supplies had prevented expansions of several primary-smelting operations. As new coal and nuclear power plants come on line, the problem of inadequate power supply is being erased. Additionally, China currently relies on imports for an estimated one-half of the alumina necessary to meet its aluminum smelting needs, as the mineral content of the Chinese bauxite renders it more expensive and difficult to refine than bauxite available elsewhere.
The only major supplier of alumina from domestic sources in China is Chalco, which has traditionally supplied many Chinese aluminum smelters with alumina through contracts priced below the cost of imports. Imported alumina usually reflects the spot market price. However, as Chalco has expanded its production of domestic unwrought aluminum, the firm has reduced sales of alumina in order to supply its own smelters and has raised the price at which it sells alumina to other firms outside the country. Chalco’s actions have increased market demand for alumina, causing worldwide prices for alumina to rise significantly.
Future prospects for growth in China’s production of unwrought aluminum depend on further progress in addressing high-cost and inadequate supplies of alumina and electric power, upgrading outdated smelting technologies, and complying with potentially strict government measures to rein in production overcapacity (Chinese price controls are not unlike OPEC’s price controls of the oil market) in the aluminum industry.
In order to understand the nature of the non-regulatory costs of aluminum production, it is first necessary to understand how aluminum is created and the breakdown of the costs of its manufacture.
Aluminum does not occur in nature as a metal, but in the form of deposits of bauxite ore. Unfortunately, at present there is no domestic source of bauxite, and U.S. aluminum manufacturers import 100% of their bauxite ore from Jamaica, Guinea, Brazil, Guyana, China, Sierra Leone, and Greece.
Bauxite is mined and, by a two-step chemical process, refined into an oxide called alumina – one of the feedstocks for aluminum metal. The final step of alumina production is a drying process, which requires large quantities of heat energy.
Aluminum is made from alumina, and this process requires enormous amounts of electricity. Alumina and a molten electrolyte called cryolite are combined in a cell. Direct current electricity is passed from a consumable carbon anode into the cryolite, splitting the aluminum oxide into molten aluminum metal and carbon dioxide. The molten aluminum collects at the bottom of the cell and is periodically “tapped” into a crucible and cast into ingots which are then sold to customers which process the metal into its various applications.
The aluminum industry is a major industrial user of electricity. Because the electrolytic process is the only commercially proven method of producing aluminum, the industry has on its own pursued opportunities to reduce its use of electricity. In the last 50 years, the average amount of electricity needed to make a pound of aluminum has been slashed from 12 kilowatt hours to about 7 kilowatt hours, but the aluminum industry is constantly searching for ways in which energy and other production costs can be reduced. Although continual progress has been made over the 125-year history of aluminum processing to reduce the amount of electricity used, there are currently no viable alternatives to the electrometallurgical process.
Electricity is a huge component of the manufacturing cost of aluminum (30% to 40%). As energy costs increase, so does the price of aluminum. This cost increase of aluminum, caused primarily by the rise in electricity costs, results in less aluminum being incorporated in the manufacture of automobiles. This, in turn, increases the weight and lowers the fuel economy of our vehicles, and raises our use of and dependence on imported fossil fuels. The more affordable aluminum is, the less dependent we are on other countries for transportation fuel derived from oil.
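As a rough illustration of that cost share, the sketch below uses the ~7 kWh per pound figure from above; the $0.06/kWh electricity price and the 35% cost share are illustrative assumptions within the ranges the text gives:

```python
# Rough electricity cost per ton of primary aluminum, using the ~7 kWh/lb
# figure quoted in the text. The $/kWh rate is an illustrative assumption.

KWH_PER_LB = 7.0
LB_PER_TON = 2000.0

def electricity_cost_per_ton(price_per_kwh):
    """Dollars of electricity embedded in one ton of smelted aluminum."""
    return KWH_PER_LB * LB_PER_TON * price_per_kwh

cost = electricity_cost_per_ton(0.06)   # assume $0.06/kWh
print(f"{cost:.0f} dollars of electricity per ton")          # 840
# If electricity is ~35% of total manufacturing cost (mid-range of 30-40%):
print(f"implied total cost ~{cost / 0.35:.0f} dollars/ton")  # ~2400
```

With 14,000 kWh locked into every ton, each cent on the per-kWh price moves the cost of a ton of aluminum by $140, which is why the industry is so sensitive to electricity rates.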
Thorium Molten Salt Reactors (THMSR) will revolutionize, for the better, the American aluminum industry in several ways. Most effective is thorium power’s production of electricity at $.02/kilowatt hour, which is one-half the cost of coal ($.04/kilowatt hour), one-third the cost of natural gas ($.06/kilowatt hour), one-fourth the cost of traditional nuclear ($.08/kilowatt hour), and at one-sixth the cost of wind energy ($.12 and greater/kilowatt hour). As stated above, the electricity costs to smelt aluminum constitute 30% – 40% of total manufacturing costs. Depending on the fuel for the production source of the electricity being used, this electricity cost will be cut by at least 50%. Because of the increasing regulatory burden being placed on coal fired power plants, and the turn to natural gas, it is likely that the aluminum smelting electricity costs will be cut by two-thirds by use of THMSRs. This is sure to have a salutary effect on the building of new aluminum plants and the creation of jobs in the industry.
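Using the per-kWh figures quoted above (which are this article's projections, not market prices), the smelting-electricity cost per ton under each source can be tabulated:

```python
# Smelting-electricity cost per ton of aluminum under the per-kWh figures
# quoted in the text. These rates are the article's projections, not data.

KWH_PER_TON = 7.0 * 2000.0    # ~7 kWh/lb, 2000 lb/ton

rates = {                      # $/kWh, as quoted above
    "thorium MSR": 0.02,
    "coal": 0.04,
    "natural gas": 0.06,
    "conventional nuclear": 0.08,
    "wind": 0.12,
}

for source, rate in rates.items():
    print(f"{source:>20}: ${KWH_PER_TON * rate:,.0f} per ton")

cut = 1 - rates["thorium MSR"] / rates["natural gas"]
print(f"cost cut vs natural gas: {cut:.0%}")   # about two-thirds
```

Against natural gas, the quoted $0.02/kWh figure implies exactly the two-thirds cut in smelting-electricity cost claimed above; against coal it would be one-half.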
Because electricity this cheap will dramatically reduce the cost of making aluminum, which will lower the market cost of aluminum significantly, aluminum will become more attractive to auto manufacturers. The resultant reduced weight of vehicles will help America conserve transportation fuel and make America less dependent on foreign countries for its transportation fuel needs. Less demand for oil can translate to lower fuel costs.
In addition, in creating this very inexpensive electricity, and unlike with coal and natural gas, the THMSR will be a non-polluting and non-carbon emitting energy source.
Again, there is no American domestic source of bauxite ore to use for aluminum production; it all must be imported. However, there are abundant domestic sources of aluminum other than bauxite. Notable among them is coal ash or fly ash, a “waste” product of the combustion of coal. There are landfills nationwide replete with coal ash from historical burning of coal, and we produce 60 million tons per year.
Aluminum oxide is a major constituent of fly ash (14.8%). If this could be recovered from the fly ash produced in the United States, bauxite would not have to be imported. Coal’s “waste” product is, in reality, a strategic resource important to the United States.
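A quick check of that claim, assuming (unrealistically) perfect recovery: the aluminum mass fraction of Al2O3 follows from atomic weights, and the tonnages come from the figures above:

```python
# If the alumina in U.S. fly ash could be fully recovered, how much aluminum
# would it represent? Recovery is assumed ideal here; real yields are lower.

FLY_ASH_TONS_PER_YEAR = 60e6   # annual U.S. production, from the text
ALUMINA_FRACTION = 0.148       # Al2O3 share of fly ash, from the text

# Mass fraction of Al in Al2O3: 2*26.98 / (2*26.98 + 3*16.00) ~ 0.529
AL_IN_ALUMINA = (2 * 26.98) / (2 * 26.98 + 3 * 16.00)

alumina = FLY_ASH_TONS_PER_YEAR * ALUMINA_FRACTION
aluminum = alumina * AL_IN_ALUMINA
print(f"recoverable alumina : {alumina / 1e6:.1f} million tons/yr")   # ~8.9
print(f"contained aluminum  : {aluminum / 1e6:.1f} million tons/yr")  # ~4.7
```

Even at a fraction of this ideal yield, the alumina in each year's fly ash alone would be a substantial share of U.S. smelting feedstock.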
A large part of the process of removing aluminum oxide from fly ash requires the use of a lot of heat. Providing that heat by use of the burning of coal or natural gas is both expensive, and involves a large carbon footprint.
A THMSR produces abundant process heat; it runs much hotter than a traditional nuclear reactor. THMSRs will produce, without any carbon footprint, sufficient heat required for the process of separating aluminum from coal ash.
Combining affordable heat conversion and the affordable electricity necessary to “smelt” aluminum, both being derived from the same THMSR, there then begins to emerge great market potential for the aluminum industry in the “Coal Ash to Aluminum process”.
Coal ash also contains thorium. If a THMSR is used to drive the process of aluminum conversion, 100% of the thorium could be extracted from coal ash and used to fuel the "Coal Ash to Aluminum" production process.
In addition to thorium, coal ash also includes valuable iron, titanium, and vanadium, as well as the hazardous elements mercury and arsenic. Uranium, which is slightly radioactive, is also found in coal ash; the thorium present is less radioactive than the uranium.
Radioactive materials are rarely found alone in the earth's crust. The mining of rare earths yields other extremely important metals in addition to the miners' targeted elements. As in coal ash, rare earth mining turns up the radioactive elements thorium and uranium. Under government regulations, these must be treated as low-level radioactive "waste" by the rare earth mining industry, and secured and stored. This requirement, of course, raises the cost of mining rare earths. Millions of dollars are spent storing and disposing of thorium, when instead the thorium should be used to provide energy to us all.
Commentary by Jon Morrow
I have recently been working with a fairly famous science fiction writer, and have been pressing him to include thorium-based MSRs (Molten Salt Reactors) in his next movie or television production. Hopefully this would raise the profile of thorium-based reactors and help gain interest in the technology. Like many futurists and good sci-fi writers, he was able to dream up a tantalizing vision of the future that was not immediately obvious to me. He is very active in the seasteading world. See video below.
When those in the thorium community dream of a future with thorium-based power plants, we normally dream of them powering spaceships and moon bases, as well as meeting the planet's need for pollution-free electricity.
In a previous storyline, this author had conceived of a future with a "weather modification net" that prevented extreme weather on Earth. It was integral to preventing tornadoes and hurricanes in this made-up world of his, but he had never come up with a plausible explanation of how a weather modification net might actually work. He was toying with the idea of scientists strategically drilling deep-sea hydrothermal vents that not only controlled existing underwater currents like the Gulf Stream (which keeps the United Kingdom relatively warm) but would create and power new underwater currents. These underwater currents would prevent extreme weather events and could be turned on or off as needed to modify the weather.
My science fiction writing friend is using a “thorium drive” as a precursor to the warp drive. In this future, thorium technology is now obsolete for spaceships but still powers the weather modification net. This net would work by strategically placing underwater thorium reactors in the oceans that would heat the water, and in this future world, would turn on and off underwater currents – which would affect weather patterns and control the weather.
When not used to control the weather, the reactors made synthetic transportation fuel from seawater, until a better power source was found for personal transportation. In this future world, thorium energy was the gateway to all the other technologies that let man solve so many of his earthbound problems that he could concentrate on the exploration of space.
The backstory of how thorium MSRs are developed is an entirely different story, set in our not-too-distant future. A super-smart billionaire industrialist (think Tony Stark from the Iron Man/Avengers movies, or perhaps the real-life Elon Musk) has his family killed by terrorists. Determined to stop the geopolitical issues that lead to terrorism, he comes up with the "Thorium drive" to create fuel and power, but the nations of the world will not let him build it, for safety and economic reasons. Frustrated, the billionaire buys an island in the South Pacific and creates an island paradise secretly powered by a thorium reactor and protected by a fleet of advanced thorium-powered submarines. When word gets out, the United Nations tries to shut him down. When attempts by other nations to steal his technology fail, they launch an all-out attack – and fail miserably against the billionaire's technology. The billionaire then addresses the United Nations and makes his technology available to every nation in the world. This brings about world peace by enabling everyone to have more of everything, and it leads to the nations of the world coming together to build the weather modification net and beginning to explore space.
Can anyone in the thorium community add any ideas (this is science fiction) that would help support the plausibility of his future vision? He normally likes to have engineers and futurists familiar with current technology that could envision how it would change the world to comment before adopting an idea because he wants his vision of the future to be plausible. Please comment.
The French defense firm DCNS has come up with a real-life underwater nuclear reactor design. Read more here.
The Zumwalt-class destroyers are a class of United States Navy destroyers designed as multi-mission ships with a focus on land attack. The class is multi-role, designed for surface warfare, anti-aircraft defense, and naval fire support. They take the place of battleships in filling the former congressional mandate for naval fire support, though the requirement was reduced to allow them to fill this role. The vessels' appearance has been compared to that of the historic ironclad warships.
The class has a low radar profile and an integrated power system that can send electricity to the electric drive motors or to weapons, which may someday include a railgun or free-electron lasers. It features a total ship computing environment infrastructure, serving as the ship's primary LAN and as the hardware-independent platform for all of the ship's software ensembles, along with automated fire-fighting systems and automated piping-rupture isolation. The class is designed to require a smaller crew and be less expensive to operate than comparable warships. It has a wave-piercing tumblehome hull form whose sides slope inward above the waterline, reducing the radar cross-section by returning much less energy than a more hard-angled hull form.
The lead ship is named Zumwalt, for Admiral Elmo Zumwalt, and carries the hull number DDG-1000. Originally 32 ships were planned, with the $9.6 billion research and development costs spread across the class, but as the quantity was reduced to 10, then 3, the cost per ship increased dramatically. The cost increase caused the U.S. Navy to identify the program as being in breach of the Nunn–McCurdy Amendment on 1 February 2010. While technically classified as a destroyer, the type is only 10.3 feet (3.1 meters) shorter than the WWII-era Deutschland-class "pocket battleships," and actually displaces nearly 4,000 more tons than a standard-loaded Deutschland. Zumwalt-class destroyers are also both longer and heavier than the Ticonderoga-class cruisers.
Still powered by fossil fuels
The power source of the Zumwalt is a 78-megawatt array of four gas-turbine generators, but that is the extent of the role of combustion engines on the ship. Here's a rundown provided by our friends at the technology association IEEE:
…the Zumwalt’s propellers and drive shafts are turned by electric motors, rather than being directly attached to combustion engines. Such electric-drive systems, while a rarity for the U.S. Navy, have long been standard on big ships. What’s new and different about the one on the Zumwalt is that it’s flexible enough to propel the ship, fire railguns or directed-energy weapons (should these eventually be deployed), or both at the same time.
Speaking of railguns, another energy-intensive weapon system that could come into play is the Navy's new Laser Weapon System (LaWS). Unfortunately, the power requirements for the Navy's railgun suggest that the ship will not be able to perform any run-and-gun maneuvers; it is estimated that the ship would have to be at a full stop to rapid-fire a railgun. Of course, that could change with a different power plant.
Here’s the money quote:
As the technology advances, and faced with rising and unpredictable fossil fuel costs, the Navy’s next-generation of surface littoral class combatant ship will leverage electric ship technologies in conjunction with new smaller nuclear power plants with the design characteristics of better speed, weight, maneuverability, range, and cost—and capable to power multiple directed energy weapons at full speed.
For the record, the Zumwalt isn’t quite ready for prime time yet. The launch took place on October 28, 2013 at almost 90 percent completion, so there’s more work to be done before it’s fully operational. The Navy expects to have initial shakedowns completed by 2016.
A ship that can fuel its support ships (instead of vice versa) and other tactical vehicles, such as helicopters, would give our ships a great tactical advantage by not having to refuel in port, where ships are most vulnerable to attack.
The “Seawater to Fuels” program is a perfect application for the high heat of a MSR (Molten Salt Reactor) which can crack the carbon trapped in seawater to produce an ultra clean synthetic fuel.
This article is pertinent to how many experts envision an LFTR (Liquid Fluoride Thorium Reactor) producing electricity without the use of water.
Excerpted from an article by Principal Investigator Steven Wright
In most respects, carbon dioxide is an energy problem. The gas is mixed to varying degrees with methane in underground formations and must be stripped before natural gas is injected into pipelines. It’s created by the combustion of carbon fuels and must be vented away from engines. And the build-up of that CO2 in the atmosphere has many implicating it in global climate change.
Carbon dioxide has some interesting properties, however. Blocks of frozen carbon dioxide don't melt but rather sublimate into a gas; solid CO2 is known as "dry ice." Indeed, CO2 won't liquefy at all unless a pressure greater than five atmospheres is applied. But at a somewhat greater pressure—around 73 atmospheres—and roughly room temperature, CO2 makes a strange transition from a gas to a state known as a supercritical fluid.
Supercriticality is a hybrid state. A supercritical fluid is dense, like a liquid, but it expands to fill a volume the way a gas does. Small changes in temperature near the critical point—31 °C—will cause large changes in density, similar to boiling where the liquid changes to a vapor. The density change, however, is only a factor of three or four, not a thousand as when water becomes steam at atmospheric pressure.
Similarly, it takes a lot of energy to increase the temperature a small amount when the fluid is near the critical point, much the way the heat of vaporization requires energy to convert a liquid to a vapor. Consequently, a large spike in heat capacity occurs near the critical point of CO2.
There are also viscosity changes that mimic the viscosity difference caused by transitioning from a very dense liquid-like fluid to a vapor-like fluid. And there are no drops and no bubbles because there can be no free surface.
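The critical-point figures quoted above (~31 °C, ~73 atm) can be sanity-checked with the van der Waals equation of state, whose critical temperature and pressure follow directly from the standard tabulated constants a and b for CO2:

```python
# Sanity-check the quoted CO2 critical point (~31 C, ~73 atm) using the
# van der Waals equation of state: Tc = 8a/(27Rb), Pc = a/(27 b^2).
# a and b are the standard van der Waals constants for CO2.

R = 0.083145   # gas constant, L*bar/(mol*K)
a = 3.640      # L^2*bar/mol^2, CO2
b = 0.04267    # L/mol, CO2

Tc = 8 * a / (27 * R * b)   # critical temperature, K
Pc = a / (27 * b ** 2)      # critical pressure, bar

print(f"Tc = {Tc - 273.15:.1f} C")       # ~30.9 C
print(f"Pc = {Pc / 1.01325:.1f} atm")    # ~73 atm
```

The simple van der Waals model reproduces both figures almost exactly, which is a nice reminder that the "strange" supercritical regime falls out of ordinary equation-of-state physics.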
These properties make supercritical carbon dioxide an incredibly tantalizing working fluid for Brayton-cycle gas turbines. Sandia National Laboratories has investigated these sorts of turbines for power generation and is now moving into the demonstration phase. Such gas turbine systems promise a thermal-to-electric conversion efficiency 50 percent greater than that of conventional gas turbines.
The system is also very small and simple, meaning that capital costs should be relatively low. The plant uses standard materials like chrome-based steel alloys, stainless steels, or nickel-based alloys at high temperatures (up to 800 °C). It can also be used with all heat sources, opening up a wide array of previously unavailable markets for power production.
For these reasons the technology is quite promising.
Sandia began studying these turbines more than five years ago as part of its work on advanced nuclear reactors. Researchers selected supercritical CO2 as the working fluid, operating at approximately 73 bar and 33 °C at the compressor inlet. Under those conditions, the CO2 has a density of 0.6 to 0.7 kg per liter—nearly the density of water. Even at the turbine inlet (the hot side of the loop) the CO2 density is high, about 0.1 kg/liter.
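Why this operating point is so sensitive becomes clear when it is expressed relative to CO2’s critical point. A minimal sketch, using standard reference values for the critical constants (this is an illustrative calculation, not Sandia’s design code):

```python
# Locate the compressor inlet conditions relative to the CO2 critical point.
# Critical constants for CO2 (standard reference values).
T_CRIT_K = 304.13    # critical temperature in kelvin (about 31 degrees C)
P_CRIT_BAR = 73.8    # critical pressure in bar (about 73 atmospheres)

# Compressor inlet conditions from the Sandia loop description.
t_inlet_k = 33.0 + 273.15   # 33 degrees C, converted to kelvin
p_inlet_bar = 73.0

# Reduced temperature and pressure: values near 1.0 mean the fluid sits
# right at the edge of the supercritical region, where density and heat
# capacity change sharply with small perturbations.
t_reduced = t_inlet_k / T_CRIT_K
p_reduced = p_inlet_bar / P_CRIT_BAR

print(f"reduced temperature: {t_reduced:.3f}")
print(f"reduced pressure:    {p_reduced:.3f}")
```

Both reduced values land within about 1 percent of unity, which is why the compressor enjoys liquid-like density but also why the loop’s control systems must manage the steep property gradients described above.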
The high density of the fluid makes the power density very high because the turbo-machinery can be very small. The machine is basically a jet engine running on a hot liquid, though there is no combustion because the heat is added and removed using heat exchangers. A 300 MWe S-CO2 power plant has a turbine diameter of approximately 1 meter and needs only 3 stages of turbo-machinery, while a similarly sized steam system has a diameter of around 5 meters and may take 22 to 30 blade rows of turbo-machinery.
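The compactness claim can be made concrete with a rough geometric comparison using the figures above. This is an order-of-magnitude sketch only: real machinery volumes depend on blade tapering, casings, and condensers, so the crude stage-count proxy below deliberately overstates the contrast.

```python
# Rough geometric comparison of S-CO2 vs. steam turbo-machinery at ~300 MWe,
# using the diameters and stage counts quoted in the text. Illustrative only.
sco2_diameter_m, sco2_stages = 1.0, 3
steam_diameter_m, steam_stages = 5.0, 26   # midpoint of the 22-30 blade rows

# Frontal (swept) area scales with the square of the diameter.
area_ratio = (steam_diameter_m / sco2_diameter_m) ** 2

# A crude volume proxy: frontal area times number of stages. This ignores
# tapering along the flow path, so treat it as an upper bound on the contrast.
volume_ratio = area_ratio * steam_stages / sco2_stages

print(f"frontal area ratio (steam / S-CO2):  {area_ratio:.0f}x")
print(f"stage-volume proxy (steam / S-CO2): {volume_ratio:.0f}x")
```

Even the conservative frontal-area comparison gives a factor of 25, which is consistent with the dramatic size difference between the two technologies.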
Eventually, this compactness will be a design advantage, but as Sandia develops prototypes to study the concept, it presents a distinct challenge. Early proof-of-concept demonstrations are often performed at the 1-to-20 kWe power level because many research labs have sufficient financial resources and support equipment to fabricate and operate power systems on this scale. It is quite easy to estimate the physical size of turbo-machinery if one uses the similarity principle, which guarantees that the velocity vectors of the fluid at the inlet and outlet of the compressor or turbine are the same as in well-behaved efficient turbo-machines.
Using these relationships, one finds that a 20 kWe engine with a pressure ratio of 3.1 would ideally use a turbine that is 0.25 inch in diameter and spins at 1.5 million rpm! Its power cycle efficiency would be around 49 percent. This would be a wonderful machine indeed.
But at such small scales, parasitic losses due to friction, thermal heat flow losses due to the small size, and large bypass flow passages caused by manufacturing tolerances will dominate the system. Fabrication would have been impossible before the mid-1990s, when five-axis computer numerically controlled machine tools became widespread.
The alternative is to pick a turbine and compressor of a size that can be fabricated. A machine with a 6-inch (outside diameter) compressor would have small parasitic losses and use bearings, seals, and other components that are widely available in industry. A supercritical carbon dioxide power system on that scale with a pressure ratio of 3.3 would run at 25,000 rpm and have a turbine that is 11 inches in its outer diameter. It would, however, produce 10 MW of electricity (enough for 8,000 homes), require about 40 MW of recuperators, a 26 MW CO2 heater, and 15 MW of heat rejection. That’s a rather large power plant for a “proof-of-concept” experiment. The hardware alone is estimated to cost between $20 million and $30 million.
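The similarity scaling behind these size estimates can be sketched directly: at fixed inlet conditions and identical velocity triangles, mass flow (and hence power) scales with frontal area, so with diameter squared, while shaft speed scales inversely with diameter to keep tip speed constant. A minimal sketch, scaling the 10 MW, 11-inch, 25,000 rpm design down to the 20 kWe level (the article’s 0.25-inch, 1.5-million-rpm figure reflects a different pressure ratio and design details, so exact agreement is not expected):

```python
import math

# Ideal similarity scaling of turbo-machinery: power ~ D^2, speed ~ 1/D.
# Reference design: the 10 MW machine with an 11-inch turbine at 25,000 rpm.
ref_power_w = 10e6
ref_diameter_in = 11.0
ref_speed_rpm = 25_000

target_power_w = 20e3   # 20 kWe proof-of-concept scale

# Diameter scales with the square root of the power ratio.
scale = math.sqrt(target_power_w / ref_power_w)
small_diameter_in = ref_diameter_in * scale

# Shaft speed scales inversely with diameter (constant tip speed).
small_speed_rpm = ref_speed_rpm / scale

print(f"scaled turbine diameter: {small_diameter_in:.2f} in")
print(f"scaled shaft speed:      {small_speed_rpm:,.0f} rpm")
```

Even this idealized scaling pushes the shaft speed past half a million rpm for a sub-half-inch turbine, which illustrates why proof-of-concept hardware at the few-kilowatt scale is impractical and why Sandia chose an intermediate size instead.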
Sandia’s development approach was to compromise a bit on performance: the lab selected a size that could fit within the Department of Energy’s nuclear energy budget. Sandia currently has two supercritical CO2 test loops. (The term “loop” derives from the shape taken by the working fluid as it completes each circuit.)
A power production loop is located at the Arvada, Colo., site of contractor Barber Nichols Inc., where it has been running and producing electricity during the developmental phase. The loop is designed to produce up to 240 kilowatts of electricity.
The turbo-alternator-compressor designed by Barber Nichols relies on such key enabling technologies as gas-foil bearings (both journal and thrust), a permanent magnet motor/generator, advanced labyrinth seals, the use of seal leakage for bearing cooling, and a reduced rotor cavity region to manage and control frictional power losses.
In addition to the turbo-machinery, the other enabling technology for the S-CO2 power cycle is the printed-circuit heat exchanger manufactured by Heatric. Those heat exchangers are composed of sheets of steel with flow passages etched into them. The sheets are diffusion bonded into a core block whose heat transfer area can exceed 1,000 square meters per cubic meter. The heat exchangers are very compact and can withstand very high pressures and temperatures. The high-temperature recuperator and gas chiller also use this technology.
Those technologies, along with the advanced high-power switching electronics that make a small proof-of-concept S-CO2 power loop possible, have only recently become commercially available.
In this cycle the peak inlet temperature was selected to be 538 °C, and the pressure ratio was limited to 1.8. The lower pressure ratio increased the volumetric flow rate through the compressor, which increased its diameter and lowered the shaft speed to something that is within the range of gas foil bearings or magnetic bearings.
Other changes to the system were to use two 125 kWe motor/generators rather than one. This choice was made because the high-speed permanent magnet generator power level was limited by rotor dynamics.
The final modification selected was the use of a re-compression Brayton cycle, which uses two recuperators and splits a fraction of the flow. Part of the flow is sent to a re-compressor that increases the temperature rise in the high-pressure leg of the recuperators to ensure that the temperature rise there nearly equals the temperature drop in the low-pressure leg. It also reduces the likelihood of a pinch point, which occurs when there is little or no temperature difference between the high- and low-temperature legs in the recuperator, so no heat flows from one to the other. The re-compression cycle has large amounts of recuperation (note that the recuperators transfer 2.8 MW while the heater only supplies 0.78 MW).
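The degree of recuperation is easy to quantify from those figures. A simple energy-bookkeeping sketch:

```python
# How much heat circulates internally in the re-compression cycle versus
# how much is supplied externally, using the figures quoted in the text.
recuperator_mw = 2.8   # heat transferred inside the recuperators
heater_mw = 0.78       # heat actually added from the external source

# Ratio of internally recycled heat to externally supplied heat.
recuperation_ratio = recuperator_mw / heater_mw
print(f"recuperated-to-supplied heat ratio: {recuperation_ratio:.1f}")
```

Roughly 3.6 units of heat are recycled internally for every unit the heater supplies, which is why recuperator effectiveness dominates the cycle’s overall efficiency.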
A second loop, located at Sandia, is used to research the unusual issues of compression, bearings, seals, and friction that exist near the critical point, where the carbon dioxide has the density of liquid but otherwise has many of the properties of a gas.
Immediate plans call for Sandia to continue to develop and operate the two small test loops to identify key features and technologies. Test results will illustrate the capability of the concept, particularly its compactness and efficiency; confirm models; and demonstrate the scalability to larger systems.
Down the line, Sandia wants to commercialize the technology. That would entail the development of an industrial demonstration plant at 10 MW of electricity, perhaps in partnership with industry. Sandia would use or modify its loops to study the behavior of various types of components not previously tested (for example, other types of seals or bearings). Alternatively, the Brayton loop could be reconfigured to test the behavior for other types of power cycles that may more optimally couple to nuclear power plants.
Brayton-cycle turbines using supercritical carbon dioxide would make a great replacement for the steam-driven Rankine-cycle turbines currently deployed. Rankine-cycle systems generally have lower efficiency, suffer more corrosion at high temperature, and occupy 30 times as much turbo-machinery volume because of the need for very large turbines and condensers to handle the low-density, low-pressure steam. An S-CO2 Brayton-cycle turbine could yield 10 megawatts of electricity from a package with a volume as small as four to six cubic meters.
The turbines would have advantages in coal-fired plants. If carbon capture and sequestration become a requirement for coal power, a fraction of the electricity generated will be diverted to run the CCS equipment. The high efficiency that can be achieved in an advanced pressurized oxy-combustion process with pulverized coal when coupled to a supercritical CO2 power plant could make up for those losses, and thus keep zero-emission coal power plants economically competitive.
Finally, supercritical carbon dioxide Brayton-cycle turbines would be natural components of next generation nuclear power plants using liquid metal, molten salt, or high temperature gas as the coolant. In such reactors, plant efficiencies as high as 55 percent could be achieved. Recently Sandia has explored the applicability of using S-CO2 power systems with today’s fleet of light water reactors.
Replacement of the steam generators with three stages of S-CO2 inter-heaters, plus the use of inter-cooling in the S-CO2 power system, would allow a light water reactor to operate at over 30 percent efficiency with dry cooling and a compressor inlet temperature of 47 °C. Compared to power systems such as gas turbines and steam plants, the supercritical carbon dioxide Brayton system can increase the electrical power produced per unit of fuel by up to 50 percent, provided the cycle is correctly designed for the heat source and the combustor or heater is efficient at getting the energy into the CO2. In addition, very compact, transportable, and affordable systems are possible thanks to the combination of low-to-modest turbine inlet temperatures (which enable the use of standard engineering materials such as stainless steel) with high efficiency and high power density. The small overall size of the system will allow for advanced modular manufacturing processes and a smaller footprint, both of which ought to decrease costs.
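The fuel-per-unit-electricity claim follows directly from the ratio of thermal efficiencies: electricity produced per unit of fuel scales with efficiency, so a 50 percent gain corresponds to a 1.5x efficiency ratio. A minimal sketch with an assumed illustrative baseline (the 33 percent figure below is hypothetical, not a value from the text):

```python
# Electricity produced per unit of fuel scales directly with thermal
# efficiency, so the claimed 50 percent gain in output per unit fuel
# corresponds to a 1.5x efficiency ratio. Both values are illustrative
# assumptions, not figures from the article.
baseline_efficiency = 0.33   # assumed: a conventional steam plant
sco2_efficiency = 0.495      # illustrative: 1.5x the assumed baseline

# Fractional increase in electricity per unit of fuel consumed.
extra_output_per_fuel = sco2_efficiency / baseline_efficiency - 1.0
print(f"extra electricity per unit fuel: {extra_output_per_fuel:.0%}")
```

The same arithmetic works for any baseline: whatever the starting efficiency, fuel consumed per megawatt-hour drops in proportion to the efficiency gain.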
S-CO2 power systems can use all heat sources and can operate at power levels ranging from a single megawatt to hundreds of megawatts. That flexibility should provide for applications in a variety of systems, improving the economics and marketability of the power cycle.
Sandia is not alone in this field, but it is among the leaders in developing this technology. The lab is past the point of wondering whether these power systems will be developed and commercialized; the question is who will be first to market. Sandia and the U.S. Department of Energy have a wonderful opportunity to support the United States’ power needs by fostering this commercialization effort.