Thursday, July 25, 2019

Spacenuke Post #12: HBO Chernobyl Miniseries Technical Review

The Chernobyl miniseries was grossly and irresponsibly inaccurate on the effects of radiation. It furthered the public’s irrational fear of radiation and emboldened/legitimized the people who have fear-mongered radiation for decades – a fear that took hold in 1979, when the sensationalized China Syndrome movie coincided with the over-hyped response to the TMI accident (which actually demonstrated system safety by not allowing any significant radiation release).

I am a fission reactor designer from Los Alamos National Laboratory. I have worked on the design of large commercial power reactors (at GE) all the way down to small specialized reactors: I led the design of the successfully tested DUFF and KRUSTY reactors. I also visited the Soviet Union twice in the 1980s (’85 and ’87). Overall, the miniseries depicted the people and ethos of the Soviet Union very well (except for most actors’ teeth). It also did well describing all of the factors that caused the accident, but failed miserably on the radiation effects. I wish I had watched the miniseries earlier to chime in when the debate was trending, but here’s my take.

RBMK Reactor Instability
The only episode that I came close to enjoying was Episode 5, because of the excellent depiction of all factors that played into the accident – an “unstable” reactor design accompanied by a huge number of human errors (mostly in blatant disregard of procedure). The RBMK reactors can be considered unstable because of a large number of competing reactivity coefficients (some positive, some negative) combined with a very strong Xe-135 reactivity worth (as well described with the red and blue placards in Episode 5). The physical size of the reactor, combined with a moderated neutron spectrum, also plays a major role in making the reactor unstable. The average travel distance of a neutron (mean free path) is very short relative to the size, such that opposite regions of the core do not “talk” to each other very well, which also promotes instability.
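To put that decoupling in perspective, here is a back-of-the-envelope sketch; the mean free path is an assumed, order-of-magnitude number for illustration, while the core dimensions are the published RBMK active-core size.

```python
# Rough illustration (assumed, order-of-magnitude mfp): in a large
# moderated core, a neutron's mean free path is tiny compared to the
# core dimensions, so distant regions are neutronically decoupled.

mean_free_path_m = 0.03   # assumed ~3 cm travel distance (illustrative)
core_height_m    = 7.0    # RBMK active core height
core_diameter_m  = 11.8   # RBMK active core diameter

print(f"height / mfp   ~ {core_height_m / mean_free_path_m:.0f}")
print(f"diameter / mfp ~ {core_diameter_m / mean_free_path_m:.0f}")
# Hundreds of mean free paths separate opposite regions of the core,
# so a local power excursion is not quickly "felt" elsewhere.
```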

The Physics that Caused the Explosion
At the time of the explosion the core was net supercritical (power increasing) but the bottom region was highly supercritical (power increasing rather fast). The insertion of the scram rods probably did not increase the reactivity of the overall reactor, but likely did increase the reactivity in the bottom region of the core, which at that point was acting as its own reactor, mostly oblivious to what the rest of the reactor was doing. Shortly after the insertion of the scram rods, the loss of water in the bottom region increased reactivity (via the positive void coefficient) to cause a very rapid rise in power (apparently doubling power more than once per second to at least >10 times normal power, which severely damaged the core). It is not 100% certain that the movement of the scram rods was the primary cause of this last reactivity insertion, because the bottom of the core may have already been heating fast enough to cause water to vaporize on its own (but the prevailing opinion, i.e. that the biggest effect was the rods displacing the water with graphite, seems more likely).
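The quoted excursion rate is easy to sanity-check: with a doubling time of one second (an assumption at the conservative end of "more than once per second"), exponential growth blows past 10 times nominal power within a few seconds. A minimal sketch:

```python
# Exponential power rise at a fixed doubling time (point-kinetics caricature).
# The 1-second doubling time is an assumption, consistent with "doubling
# power more than once per second".

def power(t_s, doubling_time_s=1.0, p0=1.0):
    """Relative power at time t_s for a constant doubling time."""
    return p0 * 2.0 ** (t_s / doubling_time_s)

for t in (1, 2, 3, 4):
    print(f"t = {t} s -> {power(t):.0f}x nominal power")
# Even this conservative doubling time exceeds 10x nominal in under 4 s.
```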

Shoddy Soviet Engineering
My wife spent a year living in the Soviet Union and has lots of interesting stories about poor Soviet engineering; noting that the fault was not the actual engineers but the managers – meeting milestones at low cost is what advanced careers, not quality (another thing HBO does well in portraying). Even with the RBMK instabilities, the “poor” rod design (use of graphite tip/displacer/follower) and the blatant human error, an adequately engineered scram system could have saved the reactor; i.e. if the rod insertion rate had been significantly faster and more robust. The slow movement of the rods did not quench reactivity fast enough. As a result, fuel damage distorted the geometry and prevented the rods from moving in any further. If the rods could have gotten most of the way into the core within a few seconds after pushing the scram button, the explosion would have been avoided (all modern reactors can do this, as can the retrofitted RBMKs). Note: when I or others refer to the explosion, it is not what could be called a nuclear explosion, i.e. an explosion of the fuel; it was an explosion caused by high gas/vapor pressure, initially caused by steam and later by the rapid burning of graphite (and maybe zirconium as well). To make things even worse, this was a high-power unstable reactor with no reactor containment! Terrible; no other country has ever done anything close to that careless. The trial should have also blamed whoever approved the design, construction and operation in the first place. The Soviets were asking for it, and their incompetence and disregard crippled the nuclear industry worldwide. So, well done HBO for pointing out all of this.

Good job so far HBO, but…
The problem with the miniseries was the exaggeration of radiation effects; not by factors of 2 or 3, but by factors of 100 to 1,000. Most of the supposedly radiation-induced gore was severely overdone (e.g. the spontaneous full-body bleeding or skinless bodies) – there are actual pictures of victims of Chernobyl, but nothing gory. The bloody cow massacre was exceptionally ludicrous. On the other hand, the release comparisons to Hiroshima were accurate (Chernobyl released several hundred times more radioactivity), but totally misleading; the deaths and illness caused by Hiroshima radiation were far worse than Chernobyl’s (because far more people were exposed to high dose). If you want to fixate on raw numbers, consider that the ocean naturally contains well over 100,000 Chernobyls of radiation (over 10 million Hiroshimas), but of course that is misleading too.

Killing the Continent?
Several quotes by our trusted expert, the scientist protagonist with the glasses, were ludicrous: e.g., “The fire will spread this poison until the entire continent is dead” – nothing could be further from the truth. The majority of dangerous core radioactivity (that could provide high dose rates) would have been released in the first week (e.g. I-131 has a half-life of 8 days, and most of it would have been released in the first day). The fire burned for 10 days, so most of the damage that could have happened, did in fact happen. Continued burning could have released more of the solid waste (fuel actinides and solid fission products), but these particles have lower dose rates and do not travel as far from the site as gases and volatiles. None of the <50 potentially radiation-related deaths recorded through 2006 were likely influenced by the release of solid radionuclides. So, even though only 3.5% of the solid radioactivity was released, burning the rest of the radionuclides out of the core would not have had much effect either (except for increasing the size and/or time limits of post-accident exclusion zones); i.e., even if the fire had burned long enough to release it all, the vast majority of the damage would still have come from the short-lived gases and volatiles. Also note that the 30-km Chernobyl evacuation zone was actually scientifically conservative, and the protagonist was once again wrong when he yelled at the boss, declaring that it was a stupid uninformed decision made by party officials.
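The I-131 argument can be illustrated with simple decay arithmetic; the 8-day half-life is well established, while the chosen time points are just illustrative.

```python
# Fraction of the original I-131 inventory remaining after a given time,
# ignoring release (decay alone) -- illustrating why the first days
# dominated the iodine hazard.

HALF_LIFE_I131_DAYS = 8.02  # well-established I-131 half-life

def fraction_remaining(days, half_life=HALF_LIFE_I131_DAYS):
    """Radioactive decay: remaining fraction after `days`."""
    return 0.5 ** (days / half_life)

for d in (10, 30, 80):
    print(f"day {d}: {fraction_remaining(d):.2%} of the initial I-131 left")
# By the end of the 10-day fire ~40% remained; ten half-lives later,
# essentially none -- continued burning could not have changed the
# iodine story much.
```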

The Helicopters:
Another dumb quote from the protagonist was “If we fly over the reactor core we’ll be dead within a week”. In reality, even if they had hovered over the reactor for a minute, the dose rate at that elevation (above adjacent roof height) would not have posed an acute health risk. If they had left the windows open and hovered within the plume without masks for several minutes, maybe; but for the most part the flying-over scenario (even just 12 hours after the explosion) would have delivered less dose than the workers received spending 90 seconds outside on the roof – which was a benign, occupational-limit level of dose. Then, after this highly exaggerated warning about the human risk of flying over the fire, HBO outdid itself. It implied that radiation caused the blades on a helicopter to break apart – I am actually glad they put that in, because you’ve got to hope that most people would know this scene was asinine, and hopefully then question some of the other claims.

The Miners:
The insinuation of the miner scenes (recruitment and on-site) makes it seem like they were going on suicide missions, which of course they weren’t. The on-site trailer meeting starts with the protagonist telling the boss that he’s not good at lying, implying that the “lie” would be to tell the miners that they don’t face grave danger. The boss tells him that lying to miners is a bad idea, so instead he doesn’t “lie”. The miner crew chief (my favorite character) asks about the mask, and the protagonist’s response implies that it won’t make much difference in the big picture, giving the vibe that “you guys are doomed”. In reality, their situation was not overly dangerous (maybe their usual risk of cave collapse or toxic inhalation, but not from radiation, especially if they wore a mask), because breathing in radionuclides was their only significant radiation risk. Still, HBO states "It is estimated that at least 100 of them died before the age of 40", but there doesn't appear to be any basis for this statement. Even with no mask, and no clothes, it is unlikely that any of the 400 miners experienced serious health effects from radiation, because their doses would have been orders of magnitude lower than the first responders’.

The Pregnant Mother:
The most egregious story was that of the pregnant mother. They go to great lengths to emphasize how important it is that she keep her distance, which grossly misrepresents the nature of radiation. The damage done to the workers who approached the core was both internal (cellular) damage caused by ionizing radiation from the exposed reactor core, and external damage to the lungs/skin caused by alpha/beta emissions from radioactive particles. After that point, it is almost certain that all victims were showered and cleaned to remove contamination, so there was subsequently no threat. So back to the pregnant mother. Her husband was likely burned very badly and had grave radiation damage, but he posed no threat to his wife (actually, much lower than if she were home back in Pripyat). My heart sank when the apparently technically credible character (the woman scientist from Belarus) reaffirmed this gross technical error, when she yelled at the nurse for not preventing the mother from getting too close. The craziest claim was that the baby absorbed the radiation and saved the mother. If anything it would be the opposite: the mother’s body would slightly shield the baby from ionizing radiation (i.e. the baby is inside the mother) and from the radioactive particles that came in contact with the mother’s skin and lungs. Some radioactivity would enter the bloodstream, but even then, the placenta should have kept the concentration of absorbed contaminants lower in the baby’s blood than in the mother’s (the only truth would be that the fetus would be more susceptible to radiation damage).

Bridge of Death:
Initially, I didn’t mind the bridge scene, because standing outdoors under the plume would indeed have slightly increased their chance of cancer, via the increased chance of inhaling highly-radioactive particles – a real concern for the young kids, especially only a few hours after the explosion. The only statistically significant deaths from Chernobyl radiation, other than first responders, were children that developed thyroid cancer by ingesting radioactive iodine (note: the purpose of taking iodine pills is to saturate the thyroid with “good” iodine so that less radioactive iodine is absorbed). As expected, the number of thyroid cancers increased significantly for the population surrounding Chernobyl, mostly children: 9 of these patients died, many likely due to the radiation. So back to the bridge of death – at the end, HBO implied that most of them had died via radiation: "Of the people who watched from the railway bridge, it has been reported that none of them survived". But there is no evidence of this, and given the actual scientific facts it is extremely unlikely that any of them died from radiation (unless some of the 9 children that died were on that bridge).

The Scientific Consensus:
There were 134 documented emergency first responders with high exposure: 29 first responders died within months from Acute Radiation Sickness (ARS). There were 19 fatalities among this group over the next 20 years, but that was within the expected morbidity range (so no significant radiation effect). None of the 3 “suicide” squad guys sent to open the valves under the reactor died from radiation: 1 died of a heart attack and the other 2 are alive today. The troops sent to clear off the roof got doses within the traditional standards for rad workers, which has not been shown to increase cancer risk. There has been no demonstrated statistical increase in cancer mortality with the public in surrounding areas (except for the aforementioned 9 deaths), which is remarkable because so many people have been looking hard to find a correlation for decades (usually when people are looking for some sort of statistical anomaly they will find it). From the comprehensive 2006 World Health Organization (WHO) report – “The recent morbidity and mortality studies of both emergency workers and populations of areas contaminated with radionuclides in Belarus, Russia and Ukraine do not contradict pre-Chernobyl scientific data and models”; i.e. no statistical evidence of increased deaths that could be caused by radiation.
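To see why 19 deaths over 20 years among the ARS survivors is statistically unremarkable, here is a rough Poisson sanity check; the baseline mortality rate is an assumed, illustrative number (not taken from the WHO report), and the point is only that any plausible baseline yields an expectation in the same range as the observation.

```python
import math

# Crude expected-deaths check for the ARS survivor cohort.
survivors = 134 - 29               # responders who survived the acute phase
years = 20
baseline_annual_mortality = 0.010  # ASSUMED ~1%/yr for middle-aged men

expected = survivors * years * baseline_annual_mortality
sigma = math.sqrt(expected)        # Poisson standard deviation
observed = 19

print(f"expected ~{expected:.0f} +/- {sigma:.0f} deaths, observed {observed}")
# The observed count sits well inside the statistical noise of the
# assumed baseline -- no detectable excess mortality in this group.
```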

Over-Predicted Future Cancer Deaths:
HBO declares "most estimates range from 4,000 to 93,000 deaths", but the scientific data thus far completely debunks this, and the final tally will probably be <100. The low end of the HBO range, 4,000 deaths, was indeed in the range of the "scientific" estimations made in 1986, some of which were included in the 2006 WHO report (only for reference, i.e. they were not endorsed by the WHO). The "scientific" prediction of several thousand dead used the Linear No-Threshold (LNT) model, which is increasingly being identified as incorrect at low dose rates. The LNT model is mostly based on Japanese A-bomb high-dose survivors, which as previously mentioned is apples and oranges. The more-recent scientific data clearly disputes the validity of the LNT model at low dose rates; i.e. studies that have investigated small additional doses (within the annual natural background) have found no negative impacts. Many will actually argue that low doses are beneficial (look up hormesis), but either way the effects are minimal. The cancer predictions displayed in the 2006 WHO report have thus far grossly exceeded what has actually happened over the first 30 years. The same prediction methodology that projected the thousands of eventual deaths also predicted a 375% increase in cancer+leukemia deaths among the liquidators in the first 10 years (190 deaths versus 40) – this should have been clearly visible in the statistics of this controlled dataset, yet no increase was found. The latency period of solid cancer is decades, so it can still be speculated that a spike will eventually appear, but 30+ years after the accident no increase (certainly not thousands) has been scientifically demonstrated.
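For context on where a number like 4,000 comes from, here is a sketch of the LNT bookkeeping; the population size, mean dose, and risk coefficient are illustrative assumptions chosen to show the mechanics, not the WHO's exact inputs.

```python
# How an LNT-style projection turns many small doses into thousands of
# projected deaths. All three inputs are ASSUMED, illustrative values.

risk_per_person_sv = 0.05      # assumed LNT fatal-cancer coefficient
exposed_population = 600_000   # assumed "most exposed" group size
mean_dose_sv = 0.13            # assumed average lifetime dose (~130 mSv)

collective_dose = exposed_population * mean_dose_sv   # person-Sv
projected_deaths = collective_dose * risk_per_person_sv

print(f"collective dose  ~{collective_dose:,.0f} person-Sv")
print(f"LNT projection   ~{projected_deaths:,.0f} eventual deaths")
# The projection scales linearly with dose no matter how small each
# individual dose is -- exactly the assumption disputed at low dose rates.
```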

Fear of Radiation is Worse than Radiation Itself:
As horrific as ~50 deaths are, there are thousands of manmade, industrial accidents that have caused more fatalities (google it for yourself). In addition, nuclear is proven to be far safer (fewer deaths per energy generated) than any other energy source (coal, oil, natural gas, biofuel, wind, hydro, solar - you can google this too). Why a miniseries? Because fear of radiation sells. Tell that to the victims of Fukushima; the tsunami killed thousands, but media coverage focused on the reactor, and the radiation harmed no one (except for some casualties caused by the evacuation due to fear of radiation). The same effect is evident in the regions surrounding Chernobyl: coverage by the media convinced people their bodies were ticking cancer time bombs, and fear of radiation made them leave their homes, with extreme detrimental sociological effects (depression, suicides and abortions). Again from the 2006 WHO report – “Consensus: The mental health impact of Chernobyl is the largest public health problem caused by the accident to date”; so even for the worst nuclear accident of all time, fear of radiation was worse than the radiation itself. Now, HBO has created a show that spreads the irrational fear of radiation to new extremes (I wish the actors that played the scientists weren't so good!) – their highly successful show has misinformed 10s of millions. This fear of radiation is holding back humanity, because public/political opposition and the corresponding inflated cost (over-regulation) has strongly curtailed the use of nuclear energy.

Sunday, November 16, 2014

Space Fission Power Post #11:

Update -- Some Progress and Lessons Learned

So, this “blog” has turned out to be mostly an archive – that’s really the intent, but I’ve just noticed that it gets no love from the Google search algorithms, probably because there’s nothing new ever said.  Actually, there have been more recent hits from China and India than the US.  I’d hope that the US would lead the charge in becoming a space-faring civilization (less and less likely to be the government, while there’s increasing hope in entrepreneurism), but if other countries are interested that’s great.  Overall, nothing in the message has changed significantly; I would write some things a little differently today, but maybe it gives the message more integrity if I leave it as written 2.5 years ago (i.e. I am not spinning for today’s prospective audience – actually most of the key references are 5 to 10 years old).

When I wrote this blog (I’m sticking to my guns on that term), I was at a crossroads of whether to stay at LANL or go somewhere where I might have a better chance of getting fission power established in space. I had signed the paperwork to accept a severance package, but my Division Leader came by 15 minutes before the rescission deadline and said the Lab would fund an experiment that David Dixon, Pat McClure and I had proposed. I faxed in my rescission form 5 minutes before the deadline, and 6 months later the DUFF experiment was completed, and now KRUSTY is proceeding.

Update on Low Power Space Reactors

DUFF (Demonstration Using Flattop Fissions) was an amazing success and the highlight of my career thus far (choose your favorite meaning... Homer's favorite beer, Dixon's middle name, or getting off your duff).  For less than the cost of 1 LANL FTE (~$600K), we were able to conceive and pull off the first fission power system test within the DOE (if you’re waiting for the proviso, e.g. in several years or within the last decade, there isn’t one – every power reactor that has operated within the DOE was completed or under construction before DOE was formed in 1977).  Now, DUFF was clearly a very simple test, but that’s by design – we had to prove to NASA, DOE, LANL, and especially ourselves that it was still possible to do anything “real”.

DUFF was a rudimentary test of a fission power system which linked a compact fast reactor with Stirling converters via heat pipes (DUFF_Final_Report). Other “firsts” were the first test of a reactor that used heat pipes to transfer fission power, and the first fission system to use Stirling energy converters.  In addition to simply producing electricity (lighting the panel) with fission energy, and numerous lessons learned (DUFF_Lessons_Learned), the thing that impressed me most was how well the experimental results matched the codes that others (MCNP) and I (FRINK) have written. Prior to DUFF I had always wondered in the back of my mind if we were fooling ourselves with our confidence in our models (we still may be, but it’s much less likely than before).

KRUSTY (Kilowatt Reactor Using Stirling TechnologY) takes DUFF to the next level, and very close to prototypic of the proposed Kilopower reactor (Kilopower_Rx_Concept). The entire core, structure, heat pipes, and power conversion will be flight prototypic, including the power levels and temperatures. If it succeeds there will be no questions about reactor operation and dynamics; the key remaining risks will be lifetime risks with the materials and components.

When we were proposing DUFF 2.5 years ago, the vast majority of people we told said we were nuts (stupid, naïve, etc.), and those were the people we trusted enough to tell.  In retrospect, if we had operated business-as-usual they would have been correct.  We basically plowed forward without taking no for an answer.  We had daily obstacles that threatened success, but instead of pausing to rethink what we were doing or why, we took each obstacle head-on and stuck to our goal of light-the-bulb-or-bust.  In the end, it turned out that almost every obstacle was not actual policy and regulation, it was people’s long-standing expectations and interpretations of policy and regulation, which had morphed into a potentially encumbered bureaucratic process (where there is no real incentive for a decision maker to approve anything new or different).

One of the many reasons DUFF succeeded was by keeping the team small.  If we had gone to the DOE office that “specializes” in space reactors, at best they would have said we need to spend millions to ensure that we’re testing the right system (i.e. trade studies) and millions more to prove that the test would work (because failure might make them look bad). Why not spend far less to simply see if it works?  I think the answer is that wasting millions on paper studies cannot get a bureaucrat in trouble (in fact the more money that flows through their office the better), while making a decision and taking a risk can; note:  I’m not talking safety risk here, we spent the time and money to meet all safety requirements within our $600K, it’s that “decision makers” these days get kudos for compliance and not upsetting the apple cart rather than accomplishment, so there’s little incentive to do anything new.  DOE has a “process” for doing things the “right way”, which requires that every decision is backed by reams of paper and the consensus of grey beards.  The aforementioned DOE office claims we are “cowboys that will eventually mess-up and therefore kill space reactor interest for good”; the grand irony is that this office has done absolutely nothing to bring life to space reactors, and if it wasn’t for a few cowboys there would be no life to kill.  Thankfully NASA and NNSA are very supportive of KRUSTY.  Better yet, the KRUSTY team from GRC, MSFC, Y12 and LANL is perfect for the job (lean, driven, and expert in their fields), so if we can navigate the bureaucratic/political obstacles then we might have an operating prototypic space reactor in <3 years. Finally, the “market” for 500 W to 1 kW fission power systems may be growing, as it is becoming clearer how expensive future Pu238 will be (assuming that the capacity factors of the ~50 year old reactors can meet the goals for Pu238 production in the first place).

Update on Surface Power

What I’ve learned over the past 2.5 years has made me favor the heat-pipe (HP) reactor even more as compared to the pumped-liquid-metal (LM) reactor.  Even back then, I thought the choice of a pumped-liquid metal system for FSP was ill-advised. I preferred the HP reactor, based on my initial 2005 pros and cons list (FSP_Cooling_Pros&Cons), but agreed to go along with LM because I was assured the ALIP pump was “proven”, plus the LM reactor does indeed integrate better with Stirling engines at the FSP power level.  The latter may still be true, but I think KRUSTY/Kilopower will help alleviate some integration concerns, and at higher Stirling powers a boiler (operating with gravity) should be lower risk than a pumped-loop.  More importantly, the electromagnetic ALIP pump has performed so poorly that it provides a major question mark for the future. The pump needs an efficiency of ~15% to design an attractive system (~10% to have a fighting chance), and the actual pumps have run at <5%; fortunately some testing can proceed (e.g. testing of the Stirling converter) due to the existence of wall sockets, although at efficiencies that low, the pump could heat the flow as much as the reactor simulator.  

In addition, the LM selection for FSP was for a lunar mission, whereas emphasis has subsequently changed to Mars.  A Mars vs Moon comparison shifts some of the pros and cons discussion between technologies: a) long transit time brings the substantial NaK loop freeze-thaw issue into play, b) the pumped system needs considerably more startup power, which will be tougher to provide for a Mars mission, c) the long communication delay is more significant for the pumped system because startup/control is more complex, d) increased gravity makes a HP-to-Stirling boiler HX look much better, and e) atmosphere increases HP system reliability because CO2 can fill gaps between HPs and HX (to provide gas conduction) if the braze fails. 

Of course another big thing that has changed is the success of DUFF.  The “no one has ever operated a heat pipe reactor before” opposition is gone; I don’t think that DUFF really solved a major unknown, but it removed this perpetual red herring (why it was a red herring is explained elsewhere in the blog/references).  The other exciting development is the explosion of 3D printing (additive manufacturing), which has quickly moved beyond the plastic regime to high temperature metals.  This could help all concepts, but would clearly benefit the HP reactor the most.  If you look at the aforementioned pros and cons list, manufacturing was one of the major disadvantages of the HP reactor.  3D printing might effectively eliminate this disadvantage, for the core block, HP integration, and perhaps most notably a HP-to-gas heat exchanger.  A traditionally fab’d HX would have 100s of welds (doable, but with potential failure modes), while a 3D printer could do it with none!

Lastly, politics and policies have continued to evolve that make dealing with HEU (highly enriched uranium) increasingly more costly and with more programmatic risk.  In my opinion, the US is wasting several billion dollars a year in the name of safeguards, but I’ll leave that blog for someone else to write.  Certain space reactor applications will always require HEU to be attractive on a mass basis, but others could be revisited.  In general, the higher power the reactor, the lower the penalty of going to LEU – still a major mass penalty for a fast reactor, but maybe less than a factor of 2, as opposed to in some cases a factor of 4 or more.  I am also sticking to my guns on avoiding moderated systems (hydrogen bearing) like the plague, although in some cases some spectral softening via BeO or graphite could make some sense (but I’d bet against it).  

A high-power Mars surface reactor, say 1 MWt (~200 kWe Brayton), might be in the range where LEU would not create a show-stopping mass hit, while it could remain near the “sweet-spot” I’ve previously referred to in development (because of the increased fuel inventory, lower power density).  In this power range the trade would be between a HP reactor and a gas-cooled (GC) reactor – my gut tells me the HP would be lower risk, but the GC would have to be seriously considered (assuming any lander would be impressive enough to handle the mass of an LEU GC reactor).  The lowest-risk system might be a low-temperature Brayton (~900 K) with a SS/UO2-HP reactor (maybe UN and/or maybe super-alloy if the potential loss-of-ductility risk was worth another 100 K).  We’ve considered >1 MW HP reactors in the past for use on Earth (MW_HP_Reactors), and while they might provide several headaches during construction, MW Mars surface HP reactors would be the easiest to qualify (without a GNT) and offer very high system reliability.  If a high temperature Brayton (>1100 K) was available, then I might consider the GC concept that we designed for JIMO – a Nb1Zr/Re/UN core with a super-alloy pressure shell. The need for a higher-temperature concept would depend mostly on mass limitations and the difficulty of deploying large radiators on the Mars surface.

Update on Nuclear Thermal Propulsion (NTP)

I have spent a lot of time over the past 2 years designing NTRs for NASA.  Honestly, there is nothing more interesting to work on as a reactor designer: attempting to balance neutronics, temperatures, peaking factors, flow rates, Mach numbers, and pressures while pushing the materials to their limits.  The fact that they were able to build and test some of these machines in the 60s is truly astounding – the people working on ROVER/NERVA were the super heroes of nuclear engineering!  However, now that I am even more familiar with what was actually accomplished, I am even more pessimistic about the ability to successfully develop an NTR in today’s environment.  The old photos and videos might make NTRs look like a done deal, but in addition to not solving fuel erosion after repeated nuclear test iterations, they never got to the point of testing an NTR that was prototypic to the proposed flight system.  Also, the closest-to-prototypic rocket test (XE) had a substantially lower pump outlet pressure and chamber temperature than the designs currently being touted by NASA and DOE (proposing 1900 psi/2750 K whereas XE was 970 psi/2280 K, actually <2100 K net if you include the turbine exhaust); the Peewee reactor hit an impressive 2550 K (still a ways from 2750), but did not integrate this performance with potential engine dynamics.  Moreover, the XE test did not include the dynamic effects of returning turbine exhaust to the system, and more importantly the effect of moderated tie-tubes, which have extremely large neutronic feedback coefficients (the hydrogen worth is huge, especially at the proposed pressures).  I’m fairly confident in these “facts”, but not 100% sure, so if I’m wrong on something let me know.  As for my opinions, think about this… in the 60s, when nuclear testing was orders of magnitude simpler/cheaper to pull off, they performed more than a dozen nuclear tests and still hadn’t tested a prototypic system or solved fundamental problems with the fuel.  
Then, add that the previous testing was at substantially lower temperatures and pressures than what is currently being sold as a heritage technology.  My opinion is that a cermet-fueled NTR would have a better chance of being developed than a composite-fueled concept (for various reasons that I won’t expand on now), but neither should be considered at all until we take several successful steps in developing simpler space reactor systems; although by then it will probably make more sense to pursue NEP than NTP regardless.
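As a rough illustration of what those chamber temperatures buy: for a fixed propellant, exhaust velocity (and thus specific impulse) scales roughly with the square root of chamber temperature. The anchor Isp below is an assumed value for hydrogen near XE conditions, not a measured number.

```python
import math

# Sqrt(Tc) scaling of specific impulse (ideal-gas caricature, fixed
# propellant and nozzle). The 825 s anchor at 2280 K is an ASSUMPTION.

def isp_scaled(tc_k, tc_ref_k=2280.0, isp_ref_s=825.0):
    """Specific impulse scaled from a reference chamber temperature."""
    return isp_ref_s * math.sqrt(tc_k / tc_ref_k)

for tc in (2280, 2550, 2750):  # XE, Peewee, currently proposed
    print(f"Tc = {tc} K -> Isp ~ {isp_scaled(tc):.0f} s")
# The jump from 2280 K to 2750 K buys only ~10% Isp, yet carries most
# of the materials and fuel-erosion risk described above.
```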

Finally, I've been dismayed as proponents of NTP and NEP have overemphasized the risks of radiation in space. In response, over a year ago I wrote a response entitled "A Trip To Mars Reduces an Astronaut's Risk of Cancer" (Space_Radiation_Risk).

Saturday, March 10, 2012

Space Fission Power Post #0: Introduction

Space Fission Power: A Series of Blog Posts and References

David Poston: spacenukes@gmail.com

The following series of blog posts and references intends to make the case that the US and/or other entities should be investing significantly in space fission power, and to lay out how best to establish the use of fission power in space. The first few posts are more philosophical in nature, and as such are more opinion and less fact, while the latter posts are based on historical and technical arguments. The meat of the content is in posts 6 and 7 (and the references referred to within), which detail the technologies that should be invested in now.  While I am the leader of the space fission reactor team at Los Alamos National Laboratory, the opinions in these papers are mine alone.  The papers, and especially the references, are largely a collection of work that I and others have done over the years; as such there are redundancy and transition issues between papers and themes. I have drawn upon the good work and ideas of many others in this text, but probably the greatest contributors to this philosophy and content are Mike Houts and Lee Mason, and past/current members of the LANL space reactor design team, most notably Rick Kapernick and David Dixon. The actual list of contributors and space reactor enthusiasts would take pages, which I might eventually complete as part of this project.

My career has been dedicated to space fission power, from my PhD thesis in nuclear thermal propulsion, to working on the SP-100 reactor at GE, to my 17 years at Los Alamos trying to get reactors used in space. So far, I have been unsuccessful, which is why I am creating this series of blog posts as a different approach.  My experience helped me write these papers, but I am the last person to suggest that experience alone makes you an expert; all too often I’ve found cases where a young, bright student has better ideas and can perform better work than a person with a resume that shows 20 years of experience in a particular field.  Likewise there is “bias” in these documents, but my goal is that this bias is based on good information, technical facts, and logical conclusions.  My only true bias is that we should be spending far more time and resources getting ourselves established in space, and the rest of my opinions stem from there.

Below is the list of blog posts. I’d appreciate it if anyone who comes across these documents (and chooses to read them) gives me feedback on how to change or improve them.

Spacenuke Post #1: We need to explore and expand our presence in space
Spacenuke Post #2: Abundant power is the key to space exploration
Spacenuke Post #3: Fission power is the best option to produce abundant power in space
Spacenuke Post #4: Small steps are needed to utilize fission reactors in space
Spacenuke Post #5: First step: affordable, entry-level space fission systems
Spacenuke Post #6: Entry-Level Option: Fission Surface Power (FSP) for Mars/Moon
Spacenuke Post #7: Entry-Level Option: Low Power Space Reactor (LPSR) Systems
Spacenuke Post #8: Second Generation Space Fission Power (SFP) Systems
Spacenuke Post #9: Nuclear Thermal Propulsion (NTP)
Spacenuke Post #10: Ultimate Goal: High-Power Nuclear Electric Propulsion (NEP)

Friday, March 9, 2012

Space Fission Power Post #1:

We Need to Explore and Expand our Presence in Space.

Our inherent desire to question and explore is at the root of being human.  This statement in itself, if you adhere to it, should be enough to justify a significant expenditure of societal resources on space exploration.  Unfortunately, it is not that simple. It is also human nature to have compassion for those in need, creating a constant struggle within individuals and society regarding the allocation of finite resources.  One must not only weigh the value of helping unfortunate individuals against helping society as a whole, but also the value of people alive today against people in generations to come.  The comparison becomes more complex upon the realization that an investment that increases the long-term viability of humanity could help a countless number of individuals, making the value of that investment priceless.  In this regard, the benefits of space exploration go far beyond our inherent curiosity—perhaps to the outright survival of the human race.

Some of the benefits of space exploration are summarized below as they relate to four themes: curiosity and inspiration, science and technology, philosophy and enlightenment, and viability and preservation. The categorizing of the benefits, and to a larger extent what constitutes a benefit, is of course highly dependent on the individual.

Curiosity and Inspiration

It is human nature to be curious about all facets of life, and when possible to explore and expand into new frontiers. Curiosity is at the root of most paradigm-shifting discoveries and advances in the human condition. Many would adhere to the philosophy that as soon as we stop asking questions we stop being human. It is because of our curiosity that space exploration is viewed favorably by almost everyone worldwide. Our relatively short history of space exploration has successfully answered many questions, while posing even more interesting ones. Moreover, exploration inspires people, particularly young people, because their curiosity burns brightest.  It is no coincidence that the U.S. had a boom of technical innovation from the generation that grew up during the Apollo program. The Apollo program inspired many youth to pursue careers in math, science, and engineering with dreams of being part of something even bigger when they were adults.  It can be argued that the current generation of American kids is falling behind in math and science, and is perhaps more apathetic in general, because they lack such inspiration.  History also shows that the most successful civilizations continually explore beyond their known boundaries. Countries such as China, Russia, and India must see some value in these respects, because in 2012 they continue to expand their human space programs, despite having smaller economies than the U.S.  Finally, in addition to inspiration, space exploration unifies us with a sense of pride and purpose, and a realization that our fates and those of future generations are, to some extent, linked together.

Science and Technology. 

Space programs enable the development and acquisition of technology, information, and resources that benefit all of humankind.  Technology spin-offs, whether direct or not, are usually the most talked about and documented benefit of space exploration.  Many compelling arguments have been made that justify investment in space exploration based solely on these tangible benefits. Some of the most obvious benefits are provided by satellites in Earth orbit, for example weather/climate prediction.  NASA specifically has a long list of technologies that were direct spin-offs from its space programs, ranging from transportation to health care.  The space program also had a major influence on the development of the microprocessor and how computers are used.  In addition to technology, the scientific information learned from space exploration (e.g. the evolution of planets and stars, the predictability of solar flares, the probabilities of asteroid impacts) can be used to help us live within our environment and mitigate possible changes to it.  Beyond science and technology, space exploration has the potential to provide us with resources as well, e.g. solar power beamed from space, rare materials from the moon or asteroids, or future resources that we are not even aware of.

Philosophy and Enlightenment

The most profound reason to explore the universe is to investigate the origin and nature of our existence. This is also the most contentious reason, because the issue of philosophy is entangled with religious beliefs.  A large fraction of the people alive today believe that we already know the origin and purpose of our existence. A smaller fraction of those people could be correct, but they should have no objection to learning as much as we can about the physical nature of our existence.  In most cases, scientific discoveries could either confirm or shake the foundations of a specific faith (e.g. ancient gods lost followers when science displaced the need for that god).  Some of today’s religions preclude the possibility of life evolving elsewhere in the universe.  In this case, if life is found elsewhere they would have to rethink their faith, while if life is not found elsewhere then it would likely strengthen their faith and draw others to it (because this result would be in contrast to most scientific expectations). Likewise, those who believe everything ultimately has a natural explanation are not immune to this line of thinking; future discoveries, or lack thereof, could cause them to rethink their faith that science will explain everything.  Regardless, for many people enlightenment is the ultimate goal of life (even if enlightenment is simply knowing the proper questions to ask), and space exploration could help immensely in this endeavor.

Viability and Preservation.

The all-or-nothing benefit of space exploration is the long-term survival of the human race, although the extended timeframe of this benefit makes it very hard to quantify.  We know that our life on Earth is finite, but the preservation benefit of sustained civilization outside of the Earth could range from enormous to minuscule depending on whether the viability of human life on Earth ends in <1 thousand years or >1 billion years.  There is a long list of potential calamities that could end human civilization, including an asteroid/comet impact, a super-virus, excess volcanism, socioeconomic collapse, environmental changes, weapons of mass destruction, or maybe something we’ve never envisioned.  Some of these initiating events can be mitigated or prevented as a result of space exploration, most notably by the ability to deflect or destroy a potential extinction-causing asteroid or comet.  The ability to deflect an asteroid could actually be developed within a decade using existing technology; the question is whether we would have enough warning time to successfully develop and deploy it.  Space exploration could also uncover currently unknown threats, such as looming changes in the behavior of the sun, or astronomical threats such as nearby black holes, supernovae, dark matter, or something our current understanding of physics is not aware of.

The ultimate defense against human extinction would be to establish permanent, self-sustaining colonies of humans beyond the Earth.  The path to this type of existence does not require a huge leap in science and technology; most engineers agree that abundant, reliable energy (probably nuclear) is the key to expanding into space. In the near term (decades), exploration could focus on where and how to develop sustainable communities away from the Earth, including quasi-sustainable outposts on the moon and Mars.  In the mid-term (centuries), sustainable outposts could be created on Mars, Titan, asteroids, etc. that could be considered planetary lifeboats, a safeguard against major calamities that could end human civilization.  In the long term (millennia), the concept of the “planetary lifeboat” could transform into a “celestial Mayflower,” taking us to new worlds outside of our solar system.  The benefits of this scenario are not limited to merely saving the human race.  Even if humanity continues to thrive on Earth, there would be the possibility for a nearly unlimited number of humans to experience existence (in addition to the increased population that Earth could support by importing resources) and to expand the extent of the human condition (e.g. well-being, knowledge, and enlightenment).  If new opportunities and experiences emerge, people will migrate to them, just as they did to the New World ~500 years ago.

Wednesday, March 7, 2012

Space Fission Power Post #2:

Abundant Power is the Key to Space Exploration.

On Earth, life and humanity have benefited from the energy of the sun (plants, animals, fuels, heat, etc.), but in space you’re essentially on your own.  Power is the single most important element to the survival of both spacecraft and humans in space, and an abundance of power is essential to providing safe and reliable missions – without power there is no existence. NASA science missions have been extremely successful since the 1960s, but since then there has been no fundamental change in where we can go and how much power we have when we get there. We could increase the benefits of science missions by orders of magnitude with a robust, low-mass source of abundant power. For human exploration, the need for abundant power presents a different paradigm than we’ve come to know on Earth.  An astronaut is not concerned with the over-hyped health effects of radiation (from a reactor, or the sun and cosmic rays); rather, an astronaut’s concern is whether the power system will provide power when it is needed. In addition, these astronauts will want to be “power rich,” a term frequently used by NASA astronauts (e.g. in discussions I’ve had with Scott Horowitz, Ed Lu, John Grunsfeld, and Franklin Chang Diaz) when discussing human space exploration.  The most important asset is the one you’d prefer to have most in abundance. Power can be binned into three areas when discussing space exploration: electricity, propulsion, and heat.

Electricity

Electricity is the most valuable asset that space power systems can provide. Almost all terrestrial technologies have evolved within the paradigm of available electricity, and with enough electricity almost anything can be accomplished.  It is hard to imagine any spacecraft or human outpost in which electricity will not be the lifeblood of the mission.  Nomads and pioneers could live off the land to explore and settle the Earth; space explorers can potentially do the same thing sans one component: electricity.  Some things can be done more efficiently without our favorite energy “middleman,” but there is almost nothing we can’t do if we have abundant electrical power.  For near-term scientific missions, increased electricity allows more capable instruments, increased instrument duty cycles, onboard scientific analysis, higher data-rate communications, and smaller antennas. For human missions, abundant electricity enables science and exploration, but more importantly it is the foundation of their safety and life support.

Space Propulsion

The first-order physics of space propulsion is very simple – throw something off the back of your spaceship and you will go faster (Newton’s third law of motion).  In this respect, the brute-force approach to space propulsion is to launch as much propellant as you can into Earth orbit (and if launch costs can be made affordable this is a great option for relatively low delta-V missions, like a conjunction-class mission to Mars).  If launch costs are high, and/or you need to get somewhere fast, then the key to making propulsion effective is to gain as much impulse as you can for every unit of propellant mass that you eject (specific impulse).  On top of this, the thrust-to-weight ratio of your propulsion system and spacecraft is very important.  Ultimately, a power source can provide propulsion in three ways: the byproducts of the power source are used directly as the propellant (direct propulsion), the power source provides heat directly to the propellant (thermal propulsion), or the power source creates electricity that is used to accelerate an ionized propellant (electric propulsion).  In all cases, abundant energy/power is needed to enable effective human space propulsion (with due concern given to the effort required to place those energy sources and propellants into Earth orbit).
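The payoff of specific impulse can be sketched with the Tsiolkovsky rocket equation. The numbers below are my illustrative assumptions for a notional 100-metric-ton stage, not figures from any actual mission:

```python
import math

G0 = 9.80665  # standard gravity, m/s^2

def delta_v(isp_s, m_wet, m_dry):
    """Tsiolkovsky rocket equation: achievable velocity change from
    specific impulse (seconds) and the wet/dry mass ratio."""
    return isp_s * G0 * math.log(m_wet / m_dry)

# Same notional stage (80 t of propellant, 20 t of everything else),
# three propulsion options at typical specific impulses:
wet, dry = 100e3, 20e3  # kg
print(delta_v(450, wet, dry))   # chemical LH2/LOx, Isp ~450 s  -> ~7.1 km/s
print(delta_v(900, wet, dry))   # solid-core NTP,   Isp ~900 s  -> ~14.2 km/s
print(delta_v(3000, wet, dry))  # electric (ion),   Isp ~3000 s -> ~47 km/s
```

Doubling the specific impulse doubles the delta-V for the same propellant load, which is why nuclear thermal and electric propulsion are attractive despite their added system mass (a factor this simple equation ignores).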

Heat

Heat is needed to keep things warm and to drive chemical processes – these functions can be performed via electricity, but they are done more efficiently without it (because heat generally creates the electricity in the first place).  In space, heating is needed for systems ranging from the smallest spacecraft to large human outposts.  Where adequate sunlight is not available, NASA has routinely used the Radioisotope Heater Unit (RHU) to keep components warm in space, while waste heat from a large thermodynamic power system could keep entire spacecraft or human outposts warm at essentially no “cost” to the infrastructure.  Process heat is very important for creating practical and sustainable human outposts elsewhere in the solar system.  Many technologies have been envisioned, and in some cases developed, to transform in-situ materials into usable consumables (air, water, rocket fuel) and/or construction materials (ceramics and metals).  Again, if a large-scale power plant is used to create electricity, the process should reject enough “free” heat to meet most of these needs.

In the very long term, we would need to use in-situ resources to power a sustained space-faring existence. These in-situ resources could simply be the materials to fuel our previously developed technologies, or an in-situ resource might provide a unique energy alternative (using site-specific chemical or geothermal resources). In all cases, an Earth-based energy technology that can provide abundant power is needed to 1) get us to our destination and 2) establish a semi-autonomous outpost/colony that can eventually evolve into a self-sufficient settlement.  For more information on the need for, and possible uses of, abundant power to enable our expansion into space, I recommend the writings of Robert Zubrin.

Monday, March 5, 2012

Space Fission Power Post #3:

Fission Power is the Best Option to Produce Abundant Power in Space.

Abundant power is needed to significantly explore and expand into space. A robust power source is needed that can provide high energy and power density while being independent of location, environment and application.  The primary energy alternatives that are available in the near-term are discussed below.

Solar Power

Using the sun as an energy source in space is often the preferred option, but solar power is limited in power density and is location specific; if you’re ever in the sun’s shadow or in a dusty/cloudy environment, or as you move deeper into the solar system, the sun becomes a poor energy source.  Even on Earth, the sun is not practical as a source of baseload electricity.  Solar power on the Moon is significantly hampered by the ~28-day day/night cycle (i.e. ~14 days of darkness).  The technology to efficiently and reliably store energy through the deep cold of the lunar night presents substantial challenges, and the storage system would likely be more complex and heavier than the solar power system itself.  On Mars, the diminished sunlight decreases the value of solar power, but it is the dust, and more importantly dust storms, that make solar power unattractive.  The possibility of a protracted dust storm requires more baseload power and a longer-term storage system, presenting the same challenges as a lunar system (in addition to the decreased solar insolation during the day). Finally, the lower power density of sunlight beyond the asteroid belt would make the required size of a solar-powered system excessive for a high-power mission.
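A back-of-the-envelope calculation shows why lunar night storage dominates the mass budget. The load power (10 kWe) and battery specific energy (~200 Wh/kg, an optimistic pack-level lithium-ion figure) are illustrative assumptions of mine, not requirements from any study:

```python
# Battery mass needed to ride out the ~14-day lunar night.
# All input values are illustrative assumptions.
load_kw = 10.0          # continuous electrical load during the night, kWe
night_hours = 14 * 24   # ~336 hours of darkness
specific_energy = 0.2   # usable kWh per kg of battery (optimistic Li-ion)

energy_kwh = load_kw * night_hours             # 3360 kWh must be stored
battery_mass_kg = energy_kwh / specific_energy
print(battery_mass_kg)  # ~16,800 kg of batteries for a modest 10 kWe load
```

Nearly 17 metric tons of batteries to carry a modest 10 kWe load through a single night – before accounting for the thermal management needed to keep those batteries from freezing, or the oversized daytime array needed to recharge them.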

Chemical Power

Since the dawn of the human race, our favorite energy alternative to the sun has been to burn things (primarily wood until the industrial revolution). On Earth we are surrounded by materials laden with hydrogen and carbon, fortuitously surrounded by a gas that contains oxygen (air).  When we move about, we generally worry about our fuel supply but take the oxidizer for granted.  In space, we have to bring both fuel and oxidizer with us, or have a means of generating either in-situ (which requires power). Space does not change fundamental chemistry (except for gravitational effects on combustion), but using chemical power in space is extremely inefficient because of all the power needed to transport or create the constituents in-situ. In addition, storing these constituents at the extreme temperatures of space presents problems in logistics and reliability.

Energy Storage

Energy-storage systems (e.g. batteries) have a role in almost any space mission, but generally only in the very early stages. The advantage of batteries is that we can use low-cost Earth-based technologies to generate the energy, and then extract it from the storage technology in space. There is considerable ongoing research in fuel cells and alternative means of storing energy, which could be much more mass efficient than traditional batteries; however, there is no chance that a chemical energy-storage system would ever be practical for powering ambitious space exploration. Some nuclear sources, such as antimatter and even Pu-238, might fit within the definition of an energy-storage system, but they are better classified as nuclear power sources.

Nuclear Power

Nuclear power can be used to provide electricity, propulsion, and/or heat in space.  The major benefit of nuclear technology is that power can be provided in a robust form that is independent of location or application.  There is a great deal of experience with space radioisotope power systems, and limited experience with space fission power systems. On Earth, the physics of fission is well understood, and engineered systems are well established. Other nuclear options, such as fusion, antimatter, or triggered isotopes, are not well developed for either terrestrial or space application.
“Nuclear power in space” encompasses a wide range of sources, technologies, and applications.  The physical sources of nuclear energy that can be utilized are radioactive decay, fission, fusion, and antimatter.  The technologies to harness these forms of energy are almost limitless, but the list of practical near-term technologies is rather small.  The applications of nuclear technology include electricity, propulsion, and/or heat (for warmth or to drive chemical processes).  There are numerous combinations of nuclear power sources, technologies, and applications that can be used to enable our exploration and expansion into space.  In the end, the attractiveness of nuclear energy in space is the ability to provide robust, long-lived, high power density (kW/kg) systems at any location.

The current forms of nuclear power that we know of and can reproduce on some level are radioactive decay, fission, fusion and anti-matter. 

Energy Source        | Energy Density        | Max Power Density
LH2-LOx combustion   | 13 MJ/kg              | limited only by engineering
Pu-238 decay         | 2,100,000 MJ/kg       | 0.54 kW/kg
U-235 fission        | 82,000,000 MJ/kg      | limited only by engineering
D-He3 fusion         | 354,000,000 MJ/kg     | limited only by engineering
Antimatter           | 90,000,000,000 MJ/kg  | limited only by engineering

The two key points of the above table are that 1) nuclear power sources offer enormously higher energy densities than chemical systems, and 2) power density is limited for Pu-238.  Fission, fusion, and antimatter can all provide energy and power densities beyond what we could feasibly utilize in the foreseeable future.  However, discussions of fusion and antimatter are essentially moot for decades or perhaps centuries to come, because even if we engineer the technologies required to make these power sources practical, we will not have the technologies to engineer the high power density systems needed for space application. Restated, fission has the ability to provide energy densities so high that it will take many generations of technology advancement (materials and fabrication) until we could take advantage of a higher energy density power source (even if that source itself were off the shelf). Therefore, since fission technology is established and well understood, it is the obvious technology to focus on for space nuclear technology development.
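The table's entries can be sanity-checked from first principles: ~200 MeV per U-235 fission, ~5.6 MeV per Pu-238 alpha decay with an 87.7-year half-life, and E = mc² for annihilation. A sketch using standard physical constants (the decay energies and the 200 MeV figure are textbook approximations):

```python
import math

MEV_TO_J = 1.602e-13    # joules per MeV
AMU_TO_KG = 1.6605e-27  # kilograms per atomic mass unit
C = 2.998e8             # speed of light, m/s
YEAR_S = 3.156e7        # seconds per year

# U-235: ~200 MeV released per fission; one atom masses ~235 amu.
u235_mj_per_kg = 200 * MEV_TO_J / (235 * AMU_TO_KG) / 1e6
print(f"U-235 fission: {u235_mj_per_kg:.2e} MJ/kg")  # ~8.2e7 (82,000,000)

# Antimatter: the full fuel mass converts to energy, E = m*c^2.
anti_mj_per_kg = C**2 / 1e6
print(f"Antimatter:    {anti_mj_per_kg:.2e} MJ/kg")  # ~9.0e10

# Pu-238 power density: decay constant * atoms per kg * energy per decay.
decay_const = math.log(2) / (87.7 * YEAR_S)   # 1/s, from 87.7 y half-life
atoms_per_kg = 1 / (238 * AMU_TO_KG)
pu_w_per_kg = decay_const * atoms_per_kg * 5.59 * MEV_TO_J
print(f"Pu-238 decay:  {pu_w_per_kg:.0f} W/kg")  # ~570 W/kg for pure Pu-238
```

Pure Pu-238 works out to roughly 0.57 kW/kg; the table's 0.54 kW/kg reflects real fuel, which is not isotopically pure. Unlike fission, this power density cannot be raised by engineering – it is fixed by the half-life.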

It is possible that once we establish large-scale settlements on Mars or elsewhere, we could shift away from nuclear power to an in-situ source (geothermal power, solar power with an in-situ energy-storage mechanism, wind power where there is an atmosphere, or perhaps nuclear if we could mine thorium, uranium, deuterium, He-3, etc.). Until then, we need a technology that can enable the energy-intensive missions and operations that could get us to that end goal, and that power source is fission power.