April 8, 2010

Blue Skies and Existential Risks

[This is a revised version of an article previously published on the Institute for Ethics and Emerging Technologies website.]

Basic research is what I'm doing when I don't know what I'm doing.
– Wernher von Braun

CERN's Large Hadron Collider (LHC), a product of the biggest Big Science project in human history, has recently been in the news for having “smashed beams of protons together at energies that are 3.5 times higher than previously achieved.” This achievement stirred, once again, my ambivalence towards the LHC. The feeling arises from a conflict between (a) my “epistemophilia,” or love of knowledge, and (b) specific moral considerations concerning what sorts of pursuits ought to have priority given the particular world we happen to inhabit. In explaining this conflict, I would like to suggest two ways the LHC's funds could have been better spent, as well as respond to a few defenses of the LHC.

Moral and Practical Considerations

In 2008, the former UK chief scientist Sir David King criticized the LHC for being a “blue skies” project[1], arguing that “the challenges of the 21st Century are qualitatively different from anything that we've had to face up to before,” and that “this requires a re-think of priorities in science and technology.” In other words, couldn't the more than $6 billion that funded the LHC have been better spent on other endeavors, projects or programs?

I am inclined to answer this question positively: YES, the money could have been better spent. Why? For at least two reasons[2]:

(1) Morally speaking, there is an expanding manifold of “sub-existential risk” scenarios that have been and are being actualized around the globe – scenarios that deserve immediate moral attention and urgent financial assistance. Thus, one wonders about the moral justifiability of “unnecessary” research projects in the affluent “First World” when nearly 16,000 children die of avoidable hunger-related illnesses every day; when water pollution kills more humans than all violence worldwide; when unregulated pharmaceuticals pollute public drinking water; when the Great Pacific Garbage Patch, Superfund sites and eutrophication threaten the very livability of our lonely planet in space.

Ascending from the “personal/local/global” to the “transgenerational” level of analysis, there exists a growing mass of increasingly ominous existential risks that demand serious scientific and philosophical study. Such risks are the most conspicuous reason why, as King observes above, the present moment in human history is “qualitatively different” from any prior epoch. Just 65 years ago, for example, there were only one or two “natural” existential risks. Looking at the present moment and into the near future, experts now count roughly 23 mostly anthropogenic types of existential risks (to say nothing of their tokens). Yet, as Nick Bostrom laments, “it is sad that humanity as a whole has not invested even a few million dollars to improve its thinking about how it may best ensure its own survival.” If any projects deserve $6 billion, in my opinion, it is those located within the still-incipient field of “secular eschatology.” More on this below.

(2) Practically speaking, one could argue that the LHC's money could have been better spent developing “enhancement” technologies. Consider the fact that, if “Strategies for Engineered Negligible Senescence” (SENS) were perfected, the physicists now working on the LHC could have significantly more (life)time to pursue their various research projects. The development of such techno-strategies would thus be in the personal interest of anyone who, for example, wishes to see the protracted research projects on which they're working come to fruition. (As one author notes, the LHC extends beyond a single professional career[3].) Furthermore, healthspan-extending technologies promise to alleviate human suffering from a host of age-related pathologies, thus providing a more altruistic public good as well.

A similar argument could apply to the research domain of cognitive enhancements, such as nootropics, tissue grafts and neural implants. Again, in terms of the benefits for science, “a 'superficial' contribution that facilitates work across a wide range of domains can be worth much more than a relatively 'profound' contribution limited to one narrow field, just as a lake can contain a lot more water than a well, even if the well is deeper.”[4] Cognition-enhancing technologies would thus provide an appreciable boost not just to research on fundamental physics issues – the first billionth of a second after the Big Bang, the existence of the Higgs boson particle, etc. – but to the scientific enterprise as a whole.

Moreover, there may exist theories needed to understand observable phenomena that are in principle beyond our epistemic reach – that is, theories to which we are forever “cognitively closed.” The so-called “theory of everything,” or a theory elucidating the nature of conscious experience, might fall within this category. And the only plausible route out of this labyrinth, I believe, is to redefine the boundary between “problems” and “mysteries” via some techno-intervention on the brain. Otherwise, we may be trapped in a state of perennial ignorance with respect to those phenomena – floundering like a chimpanzee trying to conjugate a verb or calculate the GDP of China. Yet another reason to divert more funds towards “applied” enhancement research.

Furthermore, upgrading our mental software would augment our ability to evaluate the risks involved in LHC-like experiments. Physicists are, of course, overwhelmingly confident that the LHC is safe and thus will not produce a strangelet, vacuum bubble or microscopic black hole. (See the LSAG report.) But it is easy – especially for those who don't study the history and philosophy of science – to forget about the intrinsic fallibility of scientific research. When properly contextualized, then, such confidence appears considerably less impressive than one might initially think.

Consider, for example, Max Planck's oft-quoted comment that “a new scientific truth does not triumph by convincing its opponents and making them see the light, but rather because its opponents eventually die, and a new generation grows up that is familiar with it.” Thus, for all we know at present, the next generation of physicists, working within a modified framework of more advanced theory, will regard the LHC's risks as significant – just as the lobotomy, for which Egas Moniz won science's most prestigious award, the Nobel Prize, in 1949, is now rejected as an ignominious violation of human autonomy. This point becomes even more incisive when one hears scientists describe the LHC as “certainly, by far, the biggest jump into the unknown” that research has ever made. (Or, recall Arthur C. Clarke's famous quip: "When a distinguished but elderly scientist states that something is possible, he is almost certainly right. When he states that something is impossible, he is very probably wrong.")

Critics of Critics

In response to such criticism, many scientists have vehemently defended the LHC. Brian Cox, for example, ripostes “with an emphatic NO” to contrarians who suggest that “we [can] do something more useful with that kind of money.” But Cox's thesis, in my opinion, is not compelling. Consider the culmination of Cox's argument: “Most importantly, though, the world would be truly impoverished without all the fundamental knowledge we've gained” from projects like the LHC[5]. Now, at first glance, this claim seems quite reasonable. Without the “fundamental knowledge” provided by Darwinian theory, for example, it would be difficult (as Dawkins contends) to be an “intellectually fulfilled atheist.” This is an instance of science – by virtue of its pushing back the “envelope of ignorance” – significantly enriching the naturalistic worldview.

But Cox's assertion could also be construed as rather offensive. Why? Because the fact is that much of the world is quite literally impoverished. Thus, from the perspective of millions of people who struggle daily to satisfy the most basic needs of Maslow's hierarchy – people who aren't fortunate enough to live lives marked by “the leisure of the theory class,” with its “conspicuous consumption” of information – Cox's poverty argument for blue skies research is, at worst, an argument from intellectual vanity. It considers the costs of postponing physics research from a rather solipsistic perspective; and considering issues from the perspectives of others is, of course, the heart of ethics[6]. Surely if the roles were reversed and advocates of the LHC suddenly found themselves destitute in an “undeveloped” country, they would agree that the material needs of the needy (themselves) should take precedence over the intellectual needs of the privileged.

Furthermore, consider Sir Martin Rees' defense. “It is mistaken to claim,” Rees argues, “that global problems will be solved more quickly if only researchers would abandon their quest to understand the universe and knuckle down to work on an agenda of public or political concerns. These are not 'either/or' options – indeed, there is a positive symbiosis between them.”

But, in my view, the existence of such symbioses is immaterial. The issue instead concerns how resources, including money and scientists, are best put to use. A retort to Rees could thus go as follows: there is a crucial difference between making every effort, and making merely some effort, to exorcise the specter of existential and other related risks that haunts the present millennium. Thus, if one is seriously concerned about the future of humanity, then one should give research directly aimed at solving these historically unique conundra of eschatological proportions strong priority over any project that could, at best, only “indirectly” or “fortuitously” improve the prospects of life on Earth.

Rees' defense of the LHC is perplexing because he is (at least ostensibly) quite concerned with secular eschatology. In his portentous book Our Final Hour, Rees argues that “the odds are no better than fifty-fifty that our present civilisation on Earth will survive to the end of the present century.” Similar figures have been suggested by futurologists like Bostrom, John Leslie and Richard Posner[7]. Thus, given such dismal probability estimates of human self-annihilation, efforts to justify the allocation of limited resources for blue skies projects appear otiose. As Bertrand Russell, a notable champion of science, once stated in a 1924 article inveighing against the space program, “we should address terrestrial problems first.”

In conclusion, it should be clear that the main thrust of my criticism here concerns the moral issue of existential risks. The present situation is, I believe, sufficiently dire to warrant the postponement of any endeavor, project or program that does not have a high probability of yielding results that could help contain the “qualitatively different” problems of the 21st century. This means that the LHC should be (temporarily) shut down. But even if one remains unmoved by such apocalyptic concerns, there are still good practical reasons for opposing, at the present moment, blue skies research: money could be better spent – or so the argument goes – developing effective enhancement technologies. Such artifacts might not only accelerate scientific “progress” but help alleviate human suffering too. Finally, I have suggested that some counterarguments put forth in defense of the LHC do not hold much water.

If we want to survive the present millennium, then we must, I believe, show that we are serious about solving the plethora of historically unique problems now confronting us.

[1] And as Brian Cox states, “there can be no better symbol of that pure curiosity-driven research than the Large Hadron Collider.”
[2] These are two distinct reasons for opposing the LHC – reasons that may or may not be compatible. For example, one might attempt to use the moral argument against the enhancement argument: spend money helping people now, rather than creating “enhancements” with dangerous new technology. Or, one might, as Mark Walker does, argue that the development of enhanced posthumans actually offers the best way of mitigating the risks mentioned in the moral argument.
[3] See this article, page 6.
[4] From this article by Bostrom.
[5] Space prevents me from considering Cox's additional arguments that “the world would be a far less comfortable place because of the loss to medicine alone, and a poorer place for the loss to commerce.” I would also controvert, to some extent, these assertions as well.
[6] This is the so-called “moral point of view.”
[7] Although Posner does not give an actual number.