In the previous installment of this series, Jonathan Tennenbaum explained how the nuclear fusion reaction between hydrogen and boron could provide a basis for highly efficient, radioactivity-free generation of electricity, with virtually unlimited reserves of fuel.


Unfortunately, hydrogen-boron reactions occur in significant numbers only under very extreme physical conditions – even much harder to realize than those of the deuterium-tritium (D-T) reaction, which has until now been the focus of fusion research. Meanwhile, after well over half a century of worldwide efforts, and investments running into the tens of billions of dollars, it is still not possible to predict with any degree of certainty when power stations based on the D-T reaction might actually come online. New approaches are evidently needed – in fact, a new paradigm, which one might call the “non-thermal paradigm.”

Plasma physics: paradise or nightmare?

In the fusion business, until now, one speaks mainly of thermonuclear fusion: fusion reactions induced by raising the fuel to million-degree temperatures. This is also the origin of the term thermonuclear weapon to denote what is more popularly known as a “hydrogen bomb.”

Much more than heating is involved, though. Already at much lower temperatures, the fuel turns into a plasma: the electrons (or most of them) are no longer bound to the nuclei, but swarm around more or less freely, as do the nuclei, albeit still pushed and pulled around by the forces of attraction and repulsion among them. Their motion gives rise to electric currents and magnetic fields, which in turn act upon the whole plasma. One speaks of “magneto-hydrodynamics.” Plasma behavior is by its very nature highly nonlinear. Plasmas exhibit an enormous variety of different types of waves and oscillations; they emit electromagnetic radiation; they display collective, self-organizing properties. There are collisional and quantum effects, and so on.

It’s all there. For the physicist a paradise, or a nightmare, depending on how you look at it.  

Predicting and controlling the behavior of plasmas at high energies is a formidable task, even with the help of the fastest supercomputers.

Magnetic versus inertial fusion

Million-degree temperatures generate astronomically high pressures. Without mechanisms to confine it, the heated fuel will expand explosively and quickly lose the density required for significant numbers of reactions to take place. The effort to solve this problem has led to two very different strategies.
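
To get a feeling for the numbers, here is a rough back-of-the-envelope sketch in Python (the density and temperature values are illustrative assumptions of mine, not figures from any particular experiment): the thermal pressure of a fully ionized deuterium-tritium plasma at roughly solid density and 100 million degrees, from the ideal-gas relation P = n·k·T.

```python
# Rough illustration: thermal pressure of a D-T plasma at about solid density
# heated to ~100 million K, using the ideal-gas law P = n*k*T.
# The density value is an assumed, typical figure for solid D-T fuel.

k_B = 1.380649e-23        # Boltzmann constant, J/K
n_ions = 4.5e28           # assumed ion density of solid D-T, per m^3
T = 1.0e8                 # temperature, K (~10 keV)

# Electrons and ions each contribute n*k*T; fully ionized D-T has one
# electron per ion, so the total particle density is about 2*n_ions.
pressure_Pa = 2 * n_ions * k_B * T
pressure_atm = pressure_Pa / 1.013e5

print(f"Pressure ~ {pressure_Pa:.1e} Pa ~ {pressure_atm:.1e} atmospheres")
# ~1e14 Pa, i.e. on the order of a billion atmospheres
```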

The first strategy is to confine the hot plasma in a “magnetic bottle,” i.e. using magnetic fields to counteract its enormous force of expansion. Today the scene is dominated by the gigantic International Thermonuclear Experimental Reactor (ITER) project, now under construction in Cadarache, France. In my opinion, the ITER is valuable primarily as a platform for plasma research, technology development and as a means of supporting an ecosystem of scientists and engineers working in relevant areas. From the standpoint of actually realizing fusion as a commercial power source, though, the ITER looks very much like a dead end.

Construction of the International Thermonuclear Experimental Reactor (ITER) (left). Model of the ITER reaction chamber. Photo/illustration: Wikimedia

Much more promising are much smaller devices using highly nonequilibrium, pulsed regimes, such as the so-called dense plasma focus (DPF). The DPF exploits self-organizing processes in the plasma to achieve extremely high energy densities.

The second main approach, which I shall concentrate on for the rest of this article, is called inertial confinement fusion (ICF). In ICF we do not try to limit the plasma’s expansion; instead, before the process starts, we compress the fuel to such high densities that large numbers of reactions occur in the first moments, before the fuel has time to expand. During that tiny instant, the energy released by each reaction heats the mixture even more; the combustion process becomes self-sustaining – ignition. We get a miniature thermonuclear explosion. A future ICF reactor would operate in a pulsed regime, with tiny fuel pellets dropped one after another into an explosion chamber and ignited by laser pulses.
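
How long are those “first moments” in practice? A crude estimate is the time a sound wave needs to cross the compressed pellet – the inertial confinement time. The sketch below uses illustrative, order-of-magnitude values of my own choosing for the pellet radius and temperature, not the parameters of any actual ICF experiment.

```python
import math

# Illustrative estimate of the "inertial confinement time": roughly the time
# a sound wave needs to cross the compressed fuel, tau ~ R / c_s.
# All numbers below are assumed, order-of-magnitude values.

k_B = 1.380649e-23        # Boltzmann constant, J/K
m_ion = 2.5 * 1.6726e-27  # average D-T ion mass, kg
T = 1.0e8                 # temperature, K (~10 keV)
R = 50e-6                 # assumed radius of the compressed fuel, m

# Order-of-magnitude ion sound speed, counting the electron pressure as well
c_s = math.sqrt(2 * k_B * T / m_ion)
tau = R / c_s

print(f"sound speed      ~ {c_s:.1e} m/s")
print(f"confinement time ~ {tau:.1e} s  (several tens of picoseconds)")
```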

NIF explosion chamber (left). NIF laser bay, generating 192 beams. Photos: Wikimedia

Needless to say, the basic physics of ICF was developed in the context of nuclear weapons and still overlaps significantly with the domain of classified military research. 

There would be a lot to say about the politics of magnetic and inertial fusion, but that is not my subject here.

From the “Super” to radiative implosion

So far the only technology available to generate large amounts of excess energy from fusion reactions is the hydrogen bomb, otherwise known as the thermonuclear bomb. This technology was first successfully tested on October 31, 1952.  

At the time of the US Manhattan Project to build an atomic (fission) bomb, the physicist Edward Teller conceived of a potentially far more destructive weapon, based not on the fission of uranium, but on the fusion of hydrogen isotopes. It was referred to as the “Super.”

Since it was clear that chemical explosives could not generate the temperatures of tens of millions of degrees required to ignite fusion reactions, the only option was to use a fission bomb.

In 1946 the physicists John von Neumann and Klaus Fuchs submitted a US patent application for a modified approach to the Super. The title of the invention was “Improvement in the Methods and Means for Utilizing Nuclear Energy.” Needless to say, the device was not intended for civilian use!

The contents of the von Neumann-Fuchs patent are still officially a US government secret, but they can be found in a fascinating series of volumes published in Russia in 2008 (Атомный Проект СССР – Документы и Материалы, “The USSR Atomic Project – Documents and Materials”), containing declassified Soviet documents. There one finds a detailed text with calculations and diagrams, in English and in Russian translation, as well as commentaries on it by leading Soviet researchers from the year 1948. How is that possible? Klaus Fuchs later admitted he had been a Soviet agent!

The von Neumann-Fuchs design already incorporated what became the basic operating principle of the hydrogen bomb: “radiation implosion.” Instead of wrapping the fusion fuel around the fission bomb, as originally conceived for the Super, the fuel is placed in a separate container, and the intense pulse of radiation generated by a fission explosion is exploited to heat, compress and ignite it.

The device finally used in the successful 1952 test relied on this radiation-implosion approach in a more advanced form, developed by Edward Teller and Stanislaw Ulam. This is the famous two-stage “Teller-Ulam configuration” illustrated in the accompanying diagram. It became a kind of model for the later development of laser-driven fusion.

The Teller-Ulam configuration (left). First hydrogen bomb test, “Ivy Mike.” Illustration/photo: Wikimedia

Getting rid of the fission trigger

Given the success of the hydrogen bomb in releasing large amounts of fusion energy, it is natural to ask to what extent thermonuclear explosions might be scaled down to the point where they could be used for commercial power generation.

The fusion process itself poses no intrinsic barrier to miniaturization: there is no lower limit to the amount of fuel that might be used to power a fusion “microexplosion.” By contrast, the first stage of the hydrogen bomb cannot be arbitrarily scaled down, at least not in any straightforward way. A self-sustaining fission reaction requires a certain minimal critical mass, resulting in an unpleasantly large explosion. Even if we could make fission microexplosions, they would generate significant radioactivity, whose avoidance is a major motivation for pursuing fusion in the first place.

Accordingly, insofar as we choose the hydrogen bomb as the starting-point for developing fusion reactors – including the hard-won physical knowledge behind the bomb – it is imperative to find a replacement for the fission trigger.

Enter the laser

One of the most useful properties of lasers lies in the fact that a laser beam can be focused down to a tiny spot, comparable in dimension to the wavelength of the light. Concentrating the beam energy in this way makes it possible to reach very high intensities. Commercially available laser systems can instantly vaporize any known material. What is the limit of this capability? Can one reach the temperatures in the range of 100 million degrees needed to cause fusion reactions? The answer is yes.
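
A simple estimate shows why focusing is so powerful. The pulse energy, duration and spot size below are hypothetical, illustrative values, not the parameters of any particular laser system.

```python
import math

# Illustrative estimate: intensity reached by focusing a laser pulse onto a
# small spot. The pulse energy, duration and spot size are assumed values.

energy_J = 1.0            # assumed pulse energy, joules
duration_s = 1.0e-9       # assumed pulse duration, 1 nanosecond
spot_diameter_m = 10e-6   # assumed focal spot diameter, ~10 micrometers

power_W = energy_J / duration_s                   # peak power, watts
area_m2 = math.pi * (spot_diameter_m / 2) ** 2    # focal spot area
intensity_W_cm2 = power_W / area_m2 / 1e4         # watts per square centimeter

print(f"peak power     ~ {power_W:.1e} W")
print(f"peak intensity ~ {intensity_W_cm2:.1e} W/cm^2")
# ~1e15 W/cm^2 from a mere 1 J pulse -- enough to turn any material into plasma
```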

Already in 1968 – a mere eight years after the invention of the first laser – the group of Nikolai Basov at the USSR’s Lebedev Institute of Physics reported the first observation of fusion reactions triggered by laser irradiation of a lithium hydride target. The Soviet results were quickly repeated at laboratories in France and the United States. In the US, John Nuckolls was thinking along somewhat parallel lines, about how to miniaturize thermonuclear explosions to the point that they could be triggered without the use of an atomic bomb as “driver.”

Nikolai Gennadievich Basov (left). John Nuckolls (right) with John Emmett, pioneers of the US laser fusion effort. Photos: Wikimedia; US Govt

The basic approach to laser fusion that has emerged since the original proposals of Basov and Nuckolls employs fundamentally the same principle of radiative implosion utilized in the Teller-Ulam hydrogen bomb. We bombard a spherical fuel pellet from all sides with simultaneous laser pulses. The laser energy is initially absorbed by the outer layer of the pellet, causing it to expand explosively. By the principle of action and reaction (the “rocket effect”), the adjacent layers of the pellet are pushed inward with enormous force, generating shock waves that compress the core to super-high density and generate the hundred-million-degree temperatures required to trigger the fusion process.
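
Why the compression has to be so extreme can be seen from a standard textbook-style estimate: the fraction of the fuel that burns before the pellet flies apart grows with the “areal density” ρR of the compressed fuel, often approximated as f ≈ ρR/(ρR + 6 g/cm²). The fuel mass, compressed density and yield figures in the sketch below are illustrative assumptions, not data from any actual target design.

```python
import math

# Illustrative ICF burn-up estimate for a small D-T fuel mass, using the
# common textbook approximation  f ~ rho*R / (rho*R + 6 g/cm^2).
# Fuel mass, compressed density and specific yield are assumed values.

fuel_mass_g = 1.0e-3        # assumed fuel mass: 1 milligram of D-T
rho = 250.0                 # assumed compressed density, g/cm^3 (~1000x solid)
yield_per_gram_J = 3.4e11   # energy released by complete D-T burn, J/g

volume_cm3 = fuel_mass_g / rho
radius_cm = (3 * volume_cm3 / (4 * math.pi)) ** (1.0 / 3.0)
rho_R = rho * radius_cm                      # areal density, g/cm^2
burn_fraction = rho_R / (rho_R + 6.0)        # crude burn-up formula
fusion_energy_J = burn_fraction * fuel_mass_g * yield_per_gram_J

print(f"compressed radius ~ {radius_cm*1e4:.0f} micrometers")
print(f"areal density rho*R ~ {rho_R:.1f} g/cm^2, burn fraction ~ {burn_fraction:.0%}")
print(f"fusion yield ~ {fusion_energy_J/1e6:.0f} MJ (roughly tens of kg of TNT)")
```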

This is the strategy that has been pursued in countless experiments with high-power lasers, carried out in the US, Europe, Japan and the former Soviet Union, culminating in the giant 192-beam National Ignition Facility (NIF) at the US Lawrence Livermore National Laboratory – the world’s biggest laser. In its decisive experiments, the NIF used a modified set-up in which the laser energy is first converted to X-rays through the interaction of the laser light with a metallic enclosure (a so-called hohlraum) around the fuel pellet. It was hoped that this would promote a much more efficient implosion of the fuel target.

There are immense difficulties, however: the radiative implosion process is plagued by hydrodynamic instabilities which prevent effective compression, the efficiency with which the laser energy is coupled into the target is low, there are radiative losses, and so on.

The most successful experiments carried out so far on the NIF were able to produce sizeable amounts of fusion energy, but still much less than required to “pay back” the energy consumed in producing the laser pulse. Disappointingly, the National Ignition Facility has also failed to reach its promised goal of ignition. Without ignition we get a “fizzle” instead of a full-fledged microexplosion.
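
The bookkeeping behind “paying back” the laser energy is worth spelling out. The usual figure of merit is the target gain – fusion energy released divided by laser energy delivered to the target – while a power plant must also cover the low efficiency of converting electricity into laser light. The numbers in this minimal sketch are purely hypothetical, chosen only to illustrate the definitions, not results from the NIF or any other facility.

```python
# Minimal sketch of the "gain" bookkeeping in laser fusion.
# All numbers are hypothetical, chosen only to illustrate the definitions.

laser_energy_on_target_MJ = 1.8   # assumed energy delivered by the laser pulse
fusion_yield_MJ = 0.05            # assumed fusion energy released by the target
wall_plug_efficiency = 0.01       # assumed electricity-to-laser-light efficiency

target_gain = fusion_yield_MJ / laser_energy_on_target_MJ
engineering_gain = fusion_yield_MJ / (laser_energy_on_target_MJ / wall_plug_efficiency)

print(f"target gain      ~ {target_gain:.3f}")
print(f"engineering gain ~ {engineering_gain:.5f}")
# A power plant needs the engineering gain comfortably above 1,
# which is why ignition and high target gain are indispensable.
```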

Casting off the thermal paradigm

Today new strategies have emerged, which promise to sweep aside technological as well as conceptual stumbling-blocks that have hampered the progress of fusion energy since the very beginning. 

One of them, pursued by Prof. Heinrich Hora and his collaborators, departs from the standard recipe, “heat and compress the fuel.” See: Hydrogen-boron fusion could be a dream come true.

Naturally, putting forward a new paradigm means little unless one has the technological means to realize it in practice. In this case the essential means have been provided by the invention of “chirped pulse amplification” of laser pulses – which I shall explain in the next article.