March 25, 2009

An Existential Risk Singularity?

The gravest existential risks facing us in the coming decades will be of our own making. --Transhumanist FAQ, Section 3.3

In the lexicon of futures studies, the term 'singularity' has several distinct meanings. For present purposes, one can think of the singularity as the point at which the rate of technological change exceeds the capacity of any human to rationally comprehend it. (There are further questions about how this will occur, e.g., through the merging of biology and technology, the emergence of Strong AI, and so on.) The postulation of this future event, which Ray Kurzweil expects to occur circa 2045, is based on a range of historical trends in technological development that evince an ostensibly exponential rate of change. Moore's Law (formulated by Gordon Moore, co-founder of Intel Corporation) is probably the best-known "nomological generalization" of such an exponential trend (Figure 1).

Figure 1: Graph of Moore's Law, from 1971 to 2008.
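
As a concrete illustration of this kind of trend, here is a minimal sketch of the doubling arithmetic behind Moore's Law. The 1971 starting point (the Intel 4004's roughly 2,300 transistors) and the fixed two-year doubling period are commonly cited figures, used here purely for illustration rather than taken from the graph above.

```python
# Minimal sketch of Moore's Law as a fixed-period doubling process.
# The starting point and doubling period are illustrative assumptions.

START_YEAR = 1971
START_TRANSISTORS = 2_300      # Intel 4004, 1971
DOUBLING_PERIOD_YEARS = 2.0    # commonly cited doubling time

def projected_transistors(year: int) -> float:
    """Project the transistor count for a given year under a constant doubling period."""
    doublings = (year - START_YEAR) / DOUBLING_PERIOD_YEARS
    return START_TRANSISTORS * 2 ** doublings

for year in (1971, 1981, 1991, 2001, 2008):
    print(f"{year}: ~{projected_transistors(year):,.0f} transistors")
```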

In The Singularity is Near, Kurzweil plots a number of "key milestones of both biological evolution and human technological development" on a logarithmic scale, and discovers an unequivocal pattern of "continual acceleration" (e.g., "two billion years from the origin of life to cells; fourteen years from the PC to the World Wide Web"; etc.) (Figure 2). (See also Theodore Modis.) And from this trend, Kurzweil and other futurists extrapolate a future singular event at which the world as we now know it will undergo a radical transmogrification. (Indeed, the singularity can be thought of as an "event horizon," beyond which current humans cannot "see.")

Figure 2: Kurzweil's "Countdown to the Singularity" graph.
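
The shape of that countdown can be sketched numerically: list some milestones by how many years before the present they occurred and look at the gaps between them. The dates below are coarse, illustrative approximations (only the origin-of-life-to-cells and PC-to-Web intervals come from the examples above), not Kurzweil's actual data set.

```python
# Illustrative sketch of "continual acceleration": the gap between successive
# milestones keeps shrinking. Dates are rough approximations, not Kurzweil's data.

milestones = [
    ("origin of life",      4_000_000_000),  # years before present (approx.)
    ("first cells",         2_000_000_000),
    ("Cambrian explosion",    540_000_000),
    ("first hominids",         15_000_000),
    ("Homo sapiens",              200_000),
    ("agriculture",                10_000),
    ("printing press",                550),
    ("personal computer",              35),
    ("World Wide Web",                 20),
]

for (earlier, t_earlier), (later, t_later) in zip(milestones, milestones[1:]):
    gap_years = t_earlier - t_later
    print(f"{earlier} -> {later}: gap of ~{gap_years:,} years")
```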

What I am interested in here is the possibility of an "existential risk singularity," or a future point at which the rate of existential risk creation exceeds our human capacity for rational comprehension--as well as for mitigation and control (yet another reason for developing posthumans). Consider, for example, Nick Bostrom's observation that existential risks (which instantiate the 'X' in Figure 3) "are a recent phenomenon." That is to say, nearly all of the risks that threaten to either (ex)terminate or significantly compromise the potential of earth-originating intelligent life stem directly from "dual-use" technologies of neoteric origin. In a word, such risks are "technogenic."

Figure 3: Bostrom's typology of risks, ranging from the personal (scale) and endurable (intensity) to the global (scale) and terminal (intensity). The latter (global-terminal) risks are "existential."

The most obvious example, and the one probably most vivid in the public mind, is nuclear warfare. But futurists widely expect technologies of the (already commenced) genetics, nanotechnology, and robotics (GNR) revolution to bring with them a constellation of historically unprecedented risks. As Bostrom discusses in his 2002 paper, prior to 1945, intelligent life was vulnerable to only a few, extremely low-probability events of catastrophic proportions. Today, Bostrom identifies ~23 risks to (post-)human existence, including disasters from nanotechnology, genetic engineering, unfriendly AI systems, and possible events falling within various "catch-all" categories (e.g., unforeseen consequences of unanticipated technological breakthroughs). Since many of these risks are expected to arise within the next three decades, it follows that within only 100 years--from 1945 to 2045--the number of recognized existential risks will have increased roughly 12-fold (Figure 4).

Figure 4: Rapid increase in the number of existential risks, from pre-1945 to 2045.
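
As a back-of-the-envelope check on that figure: if one reads Bostrom's 2002 paper as recognizing roughly two existential risks before 1945 (e.g., asteroid or comet impact, supervolcanic eruption) and roughly 23 by 2045, the implied growth factor comes out near twelve. Both counts are assumptions used only for illustration.

```python
# Back-of-the-envelope check of the "roughly 12-fold" claim.
# Both risk counts below are illustrative assumptions, not Bostrom's exact tallies.

risks_pre_1945 = 2     # e.g., asteroid/comet impact, supervolcanic eruption
risks_by_2045 = 23     # the ~23 risks attributed to Bostrom in the text

years = 2045 - 1945
growth_factor = risks_by_2045 / risks_pre_1945
implied_annual_growth = growth_factor ** (1 / years) - 1

print(f"Growth factor over {years} years: ~{growth_factor:.1f}x")
print(f"Implied average annual growth in the risk count: ~{implied_annual_growth:.1%}")
```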

But what about the probability? This issue is much more difficult to graph, of course. Nonetheless, we have three basic (although not entirely commensurable) data points, which at least gesture at a global trend: (i) the probability of a comet or asteroid impact per century is extremely low; (ii) John F. Kennedy once estimated the likelihood of nuclear war during the Cuban Missile Crisis to be "somewhere between one out of three and even"; and (iii) experts today put the subjective probability that Homo sapiens (the self-described "wise man") will self-immolate within the next century at between 25% (Nick Bostrom) and 50% (Sir Martin Rees). In other words, our phylogenetic ancestors in the Pleistocene were virtually carefree in terms of existential risks; mid-to-late-twentieth-century humans had to worry about a sudden and significant increase in the likelihood of annihilation through nuclear war; and future (post-)humans will, at least ostensibly, have to worry about a massive rise in both the number and probability of an existential catastrophe, through error or terror, use or abuse (Figure 5).

Figure 5: A graph sketching out the approximate increase in the probability of an existential disaster from 1945 to 2045. (The Cold War period may be an exception to the curve shown, which may or may not be exponential; see below.)
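
For what it is worth, the three data points above can be written down as rough probability estimates. The comparison is loose (Kennedy's estimate, for instance, concerned a crisis lasting weeks, not a century), and the Pleistocene figure is a placeholder; the sketch only makes the upward trend explicit.

```python
# Rough probability estimates of existential catastrophe for the three data
# points discussed above. Time horizons differ; all values are loose placeholders.

estimates = [
    ("Pleistocene (natural risks only)",          0.0001),  # "extremely low"; placeholder value
    ("Cold War era (Kennedy: 1-in-3 to even)",    0.33),    # crisis-level estimate, not per-century
    ("Coming century (Bostrom ~25%, Rees ~50%)",  0.25),    # lower bound of the expert range
]

for label, probability in estimates:
    print(f"{label}: roughly {probability * 100:g}% chance of catastrophe")
```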

Thus, given these historical trends, it seems reasonable to postulate an existential risk singularity. This makes sense, of course, given that (a) nearly all of these risks are technogenic, and (b) as Kurzweil and others argue, the development of numerous technologies is occurring at an exponential (even double-exponential) rate. One is therefore led to pose the question: Is the existential risk singularity near?
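
For readers unfamiliar with the term, "double-exponential" growth means the growth rate itself grows exponentially. The sketch below compares an ordinary exponential (doubling every ten years) with a double exponential whose exponent doubles every twenty years; the parameters are arbitrary and chosen only to show how quickly the two curves diverge.

```python
# Illustrative comparison of simple exponential vs. double-exponential growth.
# All parameters are arbitrary choices for demonstration purposes.

def simple_exponential(t_years: float, doubling_time: float = 10.0) -> float:
    """Quantity that doubles every `doubling_time` years."""
    return 2 ** (t_years / doubling_time)

def double_exponential(t_years: float, exponent_doubling_time: float = 20.0) -> float:
    """Quantity whose exponent itself doubles every `exponent_doubling_time` years."""
    return 2 ** (2 ** (t_years / exponent_doubling_time))

for t in (0, 20, 40, 60, 80, 100):
    print(f"t = {t:3d} years   simple: {simple_exponential(t):>12.4g}   double: {double_exponential(t):>12.4g}")
```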

2 comments:

  1. interesting article ~ Goolam Dawood, South Africa

    I think you're looking at the human race as one mass. It's conceivable that, in pockets of society and small nations, the number of irreconcilable existential threats may already have reached the level of singularity.

  2. Nothing more to add but...love the article, man. I read your paper on this too.
