March 28, 2009

Cyborgs and Metaphorology: Mapping Technology onto Biology

(Words: ~2802)
How is human cognition structured? One intriguing answer comes from the work of George Lakoff and Mark Johnson. In their two co-authored books--Metaphors We Live By (1980) and Philosophy in the Flesh (1999)--Lakoff/Johnson argue that human cognition is metaphorical in structure--that is, most (but not all) thinking involves mapping concepts from more familiar domains of experience to less familiar ones. Such conceptual mappings are what Lakoff/Johnson call "conceptual metaphors," itself a metaphorical term. (Indeed, as Steven Pinker notes, the theory rests on a "metaphor metaphor.") Thus, the Lakoff/Johnson conception of metaphor contrasts with traditional accounts, which identify language as the locus of metaphor. On this "old" view, metaphors are false propositions, although they may prove fecund for the imagination, stimulating one to think about concepts in new and original ways. Rather than focusing on language, though, Lakoff/Johnson argue that metaphors in language are merely external manifestations of underlying cognitive phenomena. In other words, we speak metaphorically because we think metaphorically.

In fact, when one examines human speech--both colloquial and technical, in all languages around the world--one finds it saturated with metaphor, though not all of the same type. Indeed, Lakoff/Johnson distinguish between several different kinds of metaphor, such as: (i) metaphors that map an orientation onto a target domain, (ii) metaphors that confer entityhood on objects in a domain, and (iii) metaphors that map structure from one domain to another. These are, respectively, orientational metaphors, ontological metaphors, and structural metaphors. Now, consider the following quotidian statements, the metaphoricity of which few would normally notice:

--"They greeted me warmly" (based on AFFECTION IS WARMTH)
--"Tomorrow is a big day" (based on IMPORTANT IS BIG)
--"I'm feeling up today" (based on HAPPY IS UP)
--"We've been close for years, but we're beginning to drift apart" (based on INTIMACY IS CLOSENESS)
--"This movie stinks" (based on BAD IS STINKY)
--"She's weighed down by responsibilities" (based on DIFFICULTIES ARE BURDENS)
--"Prices are high" (based on MORE IS UP)
--"Are tomatoes in the fruit or vegetable category?" (based on CATEGORIES ARE CONTAINERS)
--"These colors aren't quite the same, but they're close" (based on SIMILARITY IS CLOSENESS)
--"John's intelligence goes way beyond Bill's" (based on LINEAR SCALES ARE PATHS)
--"How do the pieces of this theory fit together?" (based on ORGANIZATION IS PHYSICAL STRUCTURE)
--"Support your local charities" (based on HELP IS SUPPORT)
... and so on.
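To make the notion of a cross-domain mapping a bit more concrete, here is a minimal sketch in Python (the source/target labels are my own illustrative glosses of the examples above, not Lakoff/Johnson's own formalism):

```python
# A conceptual metaphor, on the Lakoff/Johnson view, maps a more familiar
# source domain onto a less familiar target domain. A toy representation:
conceptual_metaphors = {
    "AFFECTION IS WARMTH": {
        "source": "temperature", "target": "affection",
        "evidence": "They greeted me warmly",
    },
    "MORE IS UP": {
        "source": "verticality", "target": "quantity",
        "evidence": "Prices are high",
    },
    "CATEGORIES ARE CONTAINERS": {
        "source": "containment", "target": "categorization",
        "evidence": "Are tomatoes in the fruit or vegetable category?",
    },
}

for name, m in conceptual_metaphors.items():
    print(f"{name}: {m['source']} -> {m['target']} ({m['evidence']!r})")
```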

With such examples (Lakoff/Johnson give many more), the Necker cube begins to switch--in Kuhnian fashion--toward a new "way of seeing" human language and thought as fundamentally structured by metaphor. But this is just a synchronic look at metaphor and language (we examine language because language is our primary source of empirical evidence for the existence of conceptual metaphors); what about a diachronic perspective? What can history tell us about conceptual metaphor theory? As Lakoff/Johnson point out, a major source of corroborative evidence for their approach comes from distinct patterns of "historical semantic change." Indeed, in her dissertation--written under Lakoff at Berkeley and later published as a book in 1990--Eve Sweetser argues that human languages, stretching across cultural space and time, evince similar or identical etymological patterns. For example, words initially used to denote the activity of physical manipulation consistently acquired (usually through an intermediate stage of polysemy) meanings relating to mental manipulation. Thus, when we "comprehend" a thought, we etymologically grasp it with the mind. The same goes for vision and mentation, the latter of which is often understood as a kind of seeing (thus, we have the words 'elucidate', 'obscure', 'enlighten', 'benighted', 'transparent', 'opaque', etc.). According to Sweetser, such repeated patterns of change in different parts of the world, and at different times throughout history, stand as further evidence that the Lakoff/Johnson theory is robust (to speak metaphorically, of course).

Now, a second diachronic perspective concerns biological evolution. This angle too supports Lakoff/Johnson's thesis that human cognition is metaphorically structured. Consider, for example, the following passage from Richard Dawkins:

The way we see the world, and the reason why we find some things intuitively easy to grasp and others hard, is that our brains are themselves evolved organs: on-board computers, evolved to help us survive in a world--I shall use the name Middle World--where the objects that mattered to our survival were neither very large nor very small; a world where things either stood still or moved slowly compared with the speed of light; and where the very improbable could safely be treated as impossible.

Thus, on the Lakoff/Johnson view, humans evolved cognitive mapping mechanisms that allow(ed) us to understand less familiar, abstract or poorly delineated domains of thought/experience in terms of more familiar, concrete, or better delineated domains. In other words, we evolved to fit our highly circumscribed, mesoscopic "Middle World," and yet we succeed in understanding abstracta at the most microscopic and macroscopic levels of reality. Indeed, as this suggests, metaphor does not just structure our thinking about ordinary, quotidian matters, but the most abstruse, theoretical issues as well. It is of course true that humans use the same brains for both activities. Thus, Theodore Brown argues that Lakoff/Johnson-style metaphors form the conceptual foundations of science. For example, Brown claims that modern chemistry is based (in part) on the metaphor that ATOMS ARE CLOUDS OF NEGATIVE CHARGE SURROUNDING A POSITIVE CENTER. And, similarly, the cognitive metaphorologist Geraldine Van Rijn-van Tongeren argues that modern genetics is based on the metaphor GENOMES ARE TEXTS, given the systematic multiplicity of polysemous textual terms in the lexis of genetics--e.g., 'transcribe', 'translate', 'palindrome', 'reading frame', 'primer', etc.

Now, let's examine the extent to which our modern thinking, both inside and outside of academic biology, is structured by the metaphor ORGANISMS ARE ARTIFACTS. The hypothesis here considered--that the metaphorical mapping from ARTIFACT to ORGANISM lies at the conceptual foundations of modern biology, and even informs our pre-theoretic conception of living matter--constitutes nothing more than incipient theorization. Thus, I do not necessarily accept the conclusions arrived at, and indeed there is much to be ambivalent (and excited) about in Lakoff/Johnson's cognitivist metaphorology. Still, looking at biology from this particular angle, I believe, is a worthwhile intellectual endeavor.

To begin, philosophers and biologists have long noted a persistent and rather common metaphorization of organisms as artifacts in modern evolutionary biology. Tim Lewens, for example, uses the term "artifact analogy" to denote this mapping; but Lewens' account treats the analogy (or metaphor) as a purely linguistic, rather than cognitive, phenomenon. (He explicitly adopts Donald Davidson's conception of metaphor.) Indeed, no philosopher has yet provided a detailed interpretation of this organism/artifact metaphor using Lakoff/Johnson's apparatus, although some, like Michael Ruse, do mention it. This is precisely what I want to do. Now, as alluded to above, there are several distinct phenomena, along both the synchronic and diachronic axes, that one could examine for evidence for/against hypotheses about particular conceptual mappings. In the following paragraphs, I will (i) consider historical semantic change; (ii) examine terminological polysemy and identify other metaphors in biology that systematically cohere with the ORGANISMS ARE ARTIFACTS mapping; and finally (iii) suggest a possible link between this metaphor and other phenomena discussed outside of biology, such as Langdon Winner's notion of "reverse adaptation" and the medicalization of "deviance" and "natural life processes." (Some transhumanists actually advocate "mak[ing] 'healthy' people feel bad about themselves.")

(i) One can hardly find a more central concept in modern biology than that of the organism. Now, the term 'organism' derives from 'organ', which gives rise to a myriad of important terms in the biological sciences, such as 'organelle', 'organic', 'organization', 'superorganism', etc. But what is the etymology of 'organ'? Following Sweetser's lead, the "hidden" semantic history of this term might provide clues about underlying conceptual mappings. Indeed, 'organ' has both Latin (organum) and Greek (organon) etyma, both of which mean something like "mechanical device, tool, instrument." It appears that humans, at some point, began to see biological entities as human-made artifacts, and this conceptualization manifested itself through the semantic change of 'organ' and (eventually) 'organism', which now means "a living being." (Thus, the sentence 'organisms are artifacts' is, from the etymological point of view, almost an analytic truth.)

But when did this occur? Famously, Rene Descartes proposed a mechanistic conception of the cosmos in the seventeenth century, treating animals (which, on his view, lack "mind" substance) as nothing more than machines. Laplace's "clockwork universe" concept is another example of artifactually metaphorizing the world. Later, the natural theologians--most notably William Paley--explicitly understood the universe to be an artifact, namely God's artifact, according to their "Platonic" conception of teleology. But, as Ruse and other philosopher-historians have noted, it was Charles Darwin who pushed the organism/artifact metaphor "further than anyone." That is to say, Darwin understood--in a fundamental way--"nature's parts as machines, as mechanisms, as contrivances" (to quote Ruse again).

Now, the question "When?" is important because its answer may have some bearing on the cogency of Lakoff/Johnson's metaphorology. Consider, for example, the metaphors TIME IS MONEY and TIME IS A RESOURCE. These are not universally held metaphors, by any means. Rather, they are spatiotemporally peculiar--that is, one finds them primarily in the West (space), and they first appeared with the emergence of industrial capitalism (time). And this makes sense, since Lakoff/Johnson claim only that conceptual mappings proceed unidirectionally from more to less familiar domains. Thus, as human familiarity with certain domains increases or decreases, the metaphors we use to understand abstracta will correspondingly change. In the case of ORGANISMS ARE ARTIFACTS, one finds this metaphor becoming foundational to biology right around the time of the English Industrial Revolution. That is to say, the term 'organism' acquired its modern signification circa the early nineteenth century, when the environment in which biologists were theorizing about transmutation and other evolutionary phenomena was becoming increasingly mechanized, industrialized, and cluttered with human-made artifacts. (The term 'organ' appears to have come into use slightly earlier, beginning circa Descartes' time.) Given our cognitive architecture, then, it was only natural to metaphorize organisms (not so familiar domain) as artifacts (increasingly familiar domain).

There are, indeed, many examples in Darwin's work that suggest an external--that is, extra-scientific--influence on his scientific ideas. For example, Darwin talked about the "division of labor" in biology; he borrowed from Thomas Malthus' theory of population growth; and, as historian Peter Bowler observes, his overall conception of nature "was more in tune with the aggressive worldview of industrial capitalism." Thus, as the source domain from which Darwin (and others) extended conceptual metaphors became increasingly "technologized," the terms 'organ' and 'organism' offered themselves as metaphorically coherent designations for biological entities. Indeed, as further evidence of the newness of 'organism' in nineteenth-century biology, Darwin felt compelled to define it in his Glossary (Figure 1).

Figure 1: Darwin's definition of 'organism' from On the Origin of Species.

(ii) A glance through an evolutionary biology textbook reveals numerous terms that are consistent with the ORGANISMS ARE ARTIFACTS metaphor. Consider, for example, the terms 'function' and 'mechanism'. Both of these terms are associated with human-made artifacts, as technical devices have functions (in virtue of some agential intention) and are generally composed of mechanisms (which often work according to "laws" or "invariant generalizations"). But the significance of these terms in biology goes deeper than the merely terminological; indeed, the primary modes of explanation used by biologists are properly termed functional and mechanistic. In a functional explanation, one explains why a particular organismal trait is there--that is, why it exists in the first place. For example, a functional explanation of the heart involves specifying its evolutionary history, i.e., what it was naturally selected (in "modern history") to do. In contrast, in a mechanistic explanation, one explains how an aggregate of appropriately organized entities and activities acts and interacts to produce a phenomenon (the explanandum). For example, the phenomenon of blood circulation is mechanistically explained by the ventricles and atria, their diastolic and systolic activities, etc. (Indeed, the leading theorists of the "new mechanical philosophy" call the phenomena of mechanisms "products," and instead of discussing "causation" they prefer to talk about "productivity.")

Thus, modern biologists apply to biological explananda the exact same modes of explanation used for technological phenomena. And from this we can formulate the following two conceptual metaphors, which follow deductively from the ARTIFACT-to-ORGANISM mapping:

(i) ORGANISMAL PARTS HAVE FUNCTIONS
(ii) ORGANISMS ARE COMPOSED OF MECHANISMS


One finds many more such conceptual mappings, both explicit and tacit, in the biological and philosophical literature. For example, in addition to the two metaphors above, the following metaphors appear to be rather common in biology:

(iii) BIOLOGY IS ENGINEERING
(iv) ORGANISMS ARE REVERSE ENGINEERABLE
(v) MINDS ARE COMPUTERS
(vi) ORGANISMS AND THEIR PARTS ARE DESIGNED
...and so on.

On the present view, then, the terminology and metaphoricity of modern biology are the external, observable manifestations of a deeper underlying conceptual mapping from technology to biology. Incidentally, much of the transhumanist program is based on the notion that organisms (recall here the term's etymology) are, metaphysically, nothing more than complex artifacts, designed and engineered by the "blind watchmaker." As Dennett (who is not a transhumanist) boldly argues, evolutionists ought to accept Paley's premise that nature exhibits design; our naturalism, though, impels us to replace God with an ersatz "designer," such as natural selection. Furthermore, the view that humans can fill themselves with (e.g.) nanobots, such as "respirocytes," to carry oxygen to various organs, or that humans can "upload" their minds to a computer, is crucially based on the ORGANISMS ARE ARTIFACTS metaphor. Strong AI, for example, puts forth the artifactual metaphors that BRAINS ARE COMPUTER HARDWARE and MINDS ARE COMPUTER SOFTWARE. Proponents of Strong AI thus reason that just as computer software is "multiply realizable," so too are minds--the particular physical substrate is irrelevant, as long as it exhibits the proper functional organization. (Note that Jaron Lanier's critique of "cybernetic totalism" ties directly into the present discussion.) This, in short, is one connection between cyborgs and metaphorology.

(iii) But there is also a connection, I believe, between phenomena like "reverse adaptation" and the ORGANISMS ARE ARTIFACTS metaphor. To begin, let's look at what reverse adaptation is. In Langdon Winner's words:

A subtle but comprehensive alteration takes place in the form and substance of [the] thinking and motivation [of modern humans]. Efficiency, speed, precise measurement, rationality, productivity, and technical improvement become ends in themselves applied obsessively to areas of life in which they would previously have been rejected as inappropriate.

Without a doubt, it is precisely these qualities that transhumanists identify as the properties that humans ought to possess; indeed, the entire motivation behind "enhancement" technologies is to overcome innate human limits on efficiency, speed, productivity, etc. For example, Nick Bostrom sees as undesirable "the impossibility for us current humans to visualize an [sic] 200-dimensional hypersphere or to read, with perfect recollection and understanding, every book in the Library of Congress." And the futurist Ray Kurzweil complains about (to compile a rather random list of passages that gesture at the point):

--"the very slow speed of human knowledge-sharing through language"
--our inability "to download skills and knowledge"
--the slow rate of "about one hundred meters per second for the electrochemical signals used in biological mammalian brains"
--our failure to "master all [the knowledge of our human-machine civilization]"
--the "fleeting and unreliable" ability of human beings to maintain intimate interpersonal relations (e.g., love)
--the "slow speed of our interneuronal connections [and our] fixed skull size"
--our "protein-based mechanisms [that lack] in strength and speed"
--the "profoundly limited" plasticity of the brain
...and so on.

In other words, the human organism is conceived as a technological artifact, and as such it ought to behave like one. It is no wonder, then, that behaviors and thought patterns that deviate from (what we might call) a "technological norm" are considered, through the process of medicalization, "pathological." Just as computers are expected to sit on one's desk and perform specific tasks on command, so too is the corporate employee expected to sit at his or her desk and perform specific tasks on command. Psychiatry is not a value-neutral field, and the values applied to humans are, one might argue, often derived from technology.

This is my tentative thesis linking the cyborg and metaphorology. More theoretical work is required, as many of these points can be significantly elaborated. But, after all, I am only human--at least for now.

March 25, 2009

An Existential Risk Singularity?

(Words: ~846)
The gravest existential risks facing us in the coming decades will be of our own making. --Transhumanist FAQ, Section 3.3

In the lexis of futures studies, the term 'singularity' has several distinct meanings. For the present purposes, one can think of the singularity as, basically, the point at which the rate of technological change exceeds the capacity of any human to rationally comprehend it. (There are further questions about why this will occur--candidate answers include the total merging of biology and technology, the emergence of Strong AI, etc.) The postulation of this future event, which Ray Kurzweil expects to occur circa 2045, is based on a manifold of historical trends in technological development that evince an ostensibly exponential rate of change. Moore's Law (formulated by Gordon Moore, co-founder of Intel Corporation) is probably the most well-known "nomological generalization" of such an exponential trend (Figure 1).

Figure 1: Graph of Moore's Law, from 1971 to 2008.
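For concreteness, here is a minimal sketch of the exponential form behind such a trend (the 1971 starting count and the two-year doubling period are rough, commonly cited figures, used here purely for illustration rather than taken from the graph):

```python
# Exponential growth of the Moore's Law type: N(t) = N0 * 2 ** ((t - t0) / T),
# where T is the doubling period. The values below are illustrative assumptions.

def transistor_count(year, n0=2300, t0=1971, doubling_period=2.0):
    """Idealized transistor count under a strict doubling law."""
    return n0 * 2 ** ((year - t0) / doubling_period)

for year in (1971, 1981, 1991, 2001, 2008):
    print(year, round(transistor_count(year)))

# Plotted on a logarithmic axis (as in Figure 1), this curve is a straight line.
```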

In The Singularity is Near, Kurzweil plots a number of "key milestones of both biological evolution and human technological development" on a logarithmic scale, and discovers an unequivocal pattern of "continual acceleration" (e.g., "two billion years from the origin of life to cells; fourteen years from the PC to the World Wide Web"; etc.) (Figure 2). (See also Theodore Modis.) And from this trend, Kurzweil and other futurists extrapolate a future singular event at which the world as we now know it will undergo a radical transmogrification. (Indeed, the singularity can be thought of as an "event horizon," beyond which current humans cannot "see.")

Figure 2: Kurzweil's "Countdown to the Singularity" graph.

What I am interested in here is the possibility of an "existential risk singularity," that is, a future point at which the rate of existential risk creation exceeds our human capacity for rational comprehension--as well as mitigation and control (yet another reason for developing posthumans). Consider, for example, Nick Bostrom's observation that existential risks (which instantiate the 'X' in Figure 3) "are a recent phenomenon." That is to say, nearly all of the risks that threaten to either (ex)terminate or significantly compromise the potential of earth-originating intelligent life stem directly from "dual use" technologies of neoteric origin. In a word, such risks are "technogenic."

Figure 3: Bostrom's typology of risks, ranging from the personal (scale) and endurable (intensity) to the global (scale) and terminal (intensity). The latter, global-terminal risks are "existential."

The most obvious example, and the one probably most vivid in the public mind, is nuclear warfare. But futurists widely expect technologies of the (already commenced) genetics, nanotechnology and robotics (GNR) revolution to bring with them a constellation of brand-new and historically unprecedented risks. As Bostrom discusses in his 2002 paper, prior to 1945, intelligent life was vulnerable to only a few, extremely low-probability events of catastrophic proportions. Today, Bostrom identifies ~23 risks to (post-)human existence, including disasters from nanotechnology, genetic engineering, unfriendly AI systems, and possible events falling within various "catch-all" categories (e.g., unforeseen consequences of unanticipated technological breakthroughs). Thus, since many of these risks are expected to arise within the next three decades, it follows that within only 100 years--from 1945 to 2045--the number of existential risks will have increased roughly 12-fold (Figure 4).

Figure 4: Rapid increase in the number of existential risks, from pre-1945 to 2045.
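As a back-of-the-envelope check on the "roughly 12-fold" figure, consider the following sketch (the pre-1945 count of two risks is my own assumption, standing in for Bostrom's "only a few"; the ~23 figure is the one cited above):

```python
# Rough arithmetic behind the "roughly 12-fold" claim.
pre_1945_risks = 2    # assumption: "a few" pre-1945 risks (e.g., impact events)
risks_by_2045 = 23    # Bostrom's ~23 identified existential risks

fold_increase = risks_by_2045 / pre_1945_risks
print(round(fold_increase, 1))  # 11.5, i.e. roughly 12-fold
```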

But what about the probability? This issue is much more difficult to graph, of course. Nonetheless, we have three basic (although not entirely commensurate) data points, which at least gesture at a global trend: (i) the probability of a comet or asteroid impact per century is extremely low; (ii) John F. Kennedy once estimated the likelihood of nuclear war during the Cuban Missile Crisis to be "somewhere between one out of three and even"; and (iii) experts today put the subjective probability that Homo sapiens (the self-described "wise man") will self-immolate within the next century at between 25% (Nick Bostrom) and 50% (Sir Martin Rees). In other words, our phylogenetic ancestors in the Pleistocene were virtually carefree, in terms of existential risks; mid-to-late-twentieth-century humans had to worry about a sudden and significant increase in the likelihood of annihilation through nuclear war; and future (post-)humans will, at least ostensibly, have to worry about a massive rise in both the number and probability of an existential catastrophe, through error or terror, use or abuse (Figure 5).

Figure 5: A graph sketching out the approximate increase in the probability of an existential disaster from 1945 - 2045. (The Cold War period may be an exception to the curve shown, which may or may not be exponential; see below.)

Thus, given the apparent historical trends, it appears reasonable to postulate an existential risk singularity. This makes sense, of course, given that (a) nearly all of these risks are technogenic, and (b) as Kurzweil and others argue, the development of numerous technologies is occurring at an exponential (even double-exponential) rate. One is therefore led to pose the question: Is the existential risk singularity near?

March 20, 2009

Towards a Theory of Ignorance

(Words: ~2000)
Knowledge is like a sphere; the greater its volume, the larger its contact with the unknown. --Blaise Pascal


"When information doubles," the futurist/economist Robert Theobald once said, "knowledge halves and wisdom quarters." By most contemporary accounts, though, information is not merely doubling; rather, as Kevin Kelly argues, "the fastest growing entity today is information." (Indeed, the very study of information contributes to its rapid expansion.) According to Kelly and Google economist Hal Varian, "world-wide information has been increasing at the rate of 66% per year for many decades." As they show, this information growth is manifest in the number of public websites, inventions patented, and scientific articles published. Similarly, as I show in the following two graphs, the number of international journals (selected because of their high "impact factor") has increased significantly over the last century+, both in Science (Figure 1) and Philosophy (Figure 2). This is no surprise, of course, given the phenomenon of academic specialization (which was once said to produce "people who know more and more about less and less, until they know all about nothing").

Figure 1: The number of notable Science journals from 1860 - 2007.

Figure 2: The number of notable Philosophy journals from 1900 - 2008.
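A quick calculation shows just how fast Kelly and Varian's 66% per year actually is (a minimal sketch; the growth rate is their figure, the rest is ordinary compound-growth arithmetic):

```python
import math

annual_growth = 0.66  # Kelly/Varian: world-wide information grows ~66% per year

# Doubling time under compound growth: solve (1 + r) ** t = 2 for t.
doubling_time_years = math.log(2) / math.log(1 + annual_growth)
print(round(doubling_time_years, 2))  # ~1.37 years

# Over a single decade this compounds to roughly a 160-fold increase.
print(round((1 + annual_growth) ** 10))  # ~159
```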

And along with disciplinary and vocational specialization comes linguistic specialization, or the creation of new vocabularies and terminologies. It follows that the English lexicon is expanding too, as the following graph suggests (Figure 3). Here, I plot the number of entries in various dictionaries, from Dr. Johnson's A Dictionary of the English Language to the OED's Third Edition (anticipated in 2037), against the years they were published. Indeed, the English language (now something of a global lingua franca) arguably has a larger lexicon today than any language has ever had in anthropological history.

Figure 3: The growth of the English language; see the bottom of post for information about individual dictionaries.

If we accept Theobald's assertion, then, it follows that knowledge and wisdom are rapidly shrinking, in inverse proportion to the growth of information. Thus, despite the wonders of modern science, it appears that humans are today becoming increasingly ignorant, rather than knowledgeable and wise. (The humanist psychologist Erich Fromm once wrote with solicitude: "We have the know-how, but we do not have the know-why, nor the know-what-for.") It seems, then, that an adequate "theory of ignorance" (cf. "theory of knowledge") is needed, to make sense of the observed patterns of information growth and human understanding (the explanandum), as well as to examine the implications of these patterns for the transhumanist program.

Maybe, though, the word 'despite' in the paragraph above is misleading; maybe the better locution is 'because of'. On this view, it is because of modern science that humanity finds itself in its epistemic plight of ignorance. This is precisely the position championed by Kelly, who argues that modern science increases both the quantity of answers and the quantity of questions, but it increases the latter faster than the former--exponentially faster, in fact (Figure 4). Thus, if one characterizes ignorance as the difference between questions (known but without answers) and answers (given to known questions), then human ignorance is expanding at an exponential rate.

Figure 4: Kevin Kelly's graphic illustration of the growth of ignorance.

But why, if Kelly is correct, does this phenomenon occur? The answer: Kelly's thesis rests upon the plausible claim that every new answer formulated by scientists introduces at least two novel questions. The result is that science, like technology (which always has "unintended consequences" and "negative externalities"), becomes a self-perpetuating enterprise, built upon a positive-feedback loop in which scientific work creates the possibility for more scientific work. In other words, enlightenment leads to benightedness; science entails nescience. Thus, this ostensibly paradoxical expansion of ignorance is precisely what makes scientific "progress" possible; as Albert Einstein reportedly observed: "We can't solve problems by using the same kind of thinking we used when we created them." Thus, we change our thinking, and in doing so science "progresses."
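The arithmetic of this feedback loop is easy to sketch (a toy model, not Kelly's own: I assume that, at each step, every currently open question gets answered and that each answer raises exactly two new questions, the lower bound mentioned above):

```python
# Toy model of Kelly's claim: every answer raises >= 2 new questions, so the
# stock of known-but-unanswered questions outruns the stock of answers.

total_questions = 1   # questions posed so far ("known" questions)
total_answers = 0     # questions answered so far

for step in range(8):
    open_questions = total_questions - total_answers
    total_answers += open_questions        # answer everything currently open...
    total_questions += 2 * open_questions  # ...each answer spawning two new questions
    ignorance = total_questions - total_answers  # Kelly-style "gap"
    print(step, total_answers, total_questions, ignorance)

# The gap doubles at every step: ignorance grows exponentially even though
# the number of answers is itself growing exponentially.
```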

There are, for a theory of ignorance, many important questions to be asked. To begin, one might inquire about the sources of human ignorance: In what ways can one become ignorant? This question, indeed, has a long and venerable history. For example, in the thirteenth century, Roger Bacon--a forward-thinking Franciscan friar with empiricist leanings--identified "four causes of human ignorance," namely (i) authority, (ii) custom, (iii) popular opinion, and (iv) pride of supposed knowledge. Later, echoing R. Bacon's work, the natural philosopher Francis Bacon delineated a typology of "Idols," including those of the Tribe, the Den, the Marketplace and the Theater. These Idols, F. Bacon argued, are truth-distorting and as such have no place in an epistemologically respectable new empirical science.

Another issue pertains to the distinction between individual and collective ignorance. For example, although I know very little about how the Large Hadron Collider (LHC) actually works, it is nonetheless true that scientists (specifically physicists) know, and in great theoretical detail. Thus, while I am ignorant of the inner workings of the LHC, the collective we--which includes laypersons and experts--is knowledgeable. Such collective knowing by groups composed of individually ignorant scientists has, to be sure, become the norm today: so-called "Big Science," which commenced with the Manhattan Project, involves large numbers of scientists working on particular problems in groups structured according to an "intellectual division of labor." (Recall here Hilary Putnam's notion of a "division of linguistic labor" and Figure 3 above.) The end-result is a solution to a problem that no single scientist fully understands--the whole "knows" but its parts do not.

We must, therefore, define 'ignorance' for the group and the individual differently. On Kelly's view, which focuses on the collective group (rather than the individual), ignorance is the "widening gap" between the group's collectively held questions and its collectively held answers. In contrast, individual ignorance is characterizable (along these lines) as the difference between the group's collectively held questions and the individual's personally held answers. Thus, as humans collectively acquire more questions, the individual finds him or herself increasingly dwarfed by his or her relative ignorance. This is roughly, I believe, what Langdon Winner is getting at when he writes: "If ignorance is measured by the amount of available knowledge that an individual or collective 'knower' does not comprehend, one must admit that ignorance, that is relative ignorance, is growing." (Although here one must interpret "knowledge" as referring inclusively to the unanswered questions that form the upper curve of Kelly's graph.)
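These two definitions are easy to state formally (a minimal sketch; the set names are mine, the definitions are the ones just given, and I assume the individual's answers are among the group's):

```python
# Collective vs. individual ignorance as set differences.
collective_questions = {"q1", "q2", "q3", "q4", "q5"}  # questions the group has posed
collective_answers   = {"q1", "q2", "q3"}              # questions the group has answered
my_answers           = {"q1"}                          # questions I can answer myself

collective_ignorance = collective_questions - collective_answers  # {"q4", "q5"}
individual_ignorance = collective_questions - my_answers           # {"q2", "q3", "q4", "q5"}

print(len(collective_ignorance), len(individual_ignorance))  # 2 4
# Since my_answers is (at best) a subset of collective_answers, individual
# ignorance grows at least as fast as collective ignorance as questions accumulate.
```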

Furthermore, the individual is far more constrained by (what I call) the "breadth/depth trade-off" than the group: given our common "finitary predicament" (which involves constraints imposed by time, memory, etc.), the knowledge-depth of any given individual tends to be inversely related to his or her knowledge-breadth. (Culture, on the other hand, is transgenerationally cumulative--it doesn't have to start over each time a generation dies out.) One can thus imagine a spectrum of knowers ranging from experts on one side to jacks-of-all-trades on the other, where the former have a parochial focus and the latter a sciolistic understanding. This gestures at two sources of individual ignorance, which we might articulate as follows: (i) breadth-ignorance from knowledge-depth, and (ii) depth-ignorance from knowledge-breadth.
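One crude way to model this trade-off (a toy formalization assuming a fixed "epistemic budget"; the constant and the units are entirely illustrative):

```python
# Breadth/depth trade-off: with a fixed budget of study-effort, average depth
# per field falls as the number of fields ("breadth") grows.
EPISTEMIC_BUDGET = 100.0  # assumption: total effort an individual can allocate

def average_depth(breadth: int) -> float:
    """Average depth per field when the budget is spread over `breadth` fields."""
    return EPISTEMIC_BUDGET / breadth

for fields in (1, 2, 5, 10, 50):
    print(fields, average_depth(fields))
# breadth = 1 approximates the narrow expert; breadth = 50 the jack-of-all-trades.
```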

We have, in the above paragraphs, borrowed and elaborated Kelly's definition of 'ignorance'. But the concept of ignorance is, I believe, further analyzable. Consider, for example, Kelly's notion of a question. To state the obvious, there are questions that we know, and questions that have not yet been formulated--that is, questions that we don't know. Kelly considers only the former in characterizing collective ignorance. It is possible, though, that infinitely (or maybe finitely?) many questions exist that we are not yet aware of--e.g., abstruse questions that a future theory X brings to our attention--just as pre-twentieth-century physicists were not yet aware of the (apparent) theoretical incompatibility of quantum mechanics and relativity theory.

The fifteenth-century polymath and "apophatic" theologian Nicholas of Cusa seems to capture this distinction with his "doctrine of learned ignorance." According to Nicholas, ignorance and knowledge are not wholly distinct epistemic phenomena, but combine and overlap in interesting ways. As Nicholas writes: "The more [a wise person] knows that he is unknowing, the more learned he will be." In other words, "learned ignorance is not altogether ignorance," but a kind of knowledge or wisdom. (Socrates had a similar thought with his proclamation that "All I know is that I know nothing.") Applying this to Kelly's graph, then, the upper curve constitutes not ignorance simpliciter, but a sort of quasi-knowledge or learned ignorance revealed by science. As Deborah Best and Margaret Jean Intons-Peterson claim, "it takes knowledge to acknowledge ignorance, and it takes acknowledgment to inquire and face what we do not know. [...] Ignorance is neither a void nor lack. Rather, it is a plenum: full and fertile." Kelly's claim that science leads to more ignorance than knowledge thus may be misleading.

This kind of ignorance, then, involves a sort of "meta-knowledge," or second-order knowing about (not) knowing. Indeed, one can't know what one doesn't know, but one can know that one doesn't know. Witte et al. make these logical possibilities explicit in their tripartite distinction between (i) known ignorance (A knows that he/she doesn't know p); (ii) unknown ignorance (A doesn't know that he/she doesn't know p); and (iii) pseudoknowledge (A thinks he/she knows p but he/she doesn't). Kelly accepts the first and third (what matters here is the objective fact about what one does not know and what he or she does know); I'm not even sure how to represent the second on a graph, since there may be an infinite number of questions--a dotted line of "unknown unknowns" that forever hovers above the exponential curve. On this view, then, science is a process of converting unknown questions to known questions, and then attempting to answer them.
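Witte et al.'s three categories can be laid out schematically as follows (a minimal sketch; the predicate names are mine, and the case where A genuinely knows p returns None since it is not a form of ignorance at all):

```python
from enum import Enum
from typing import Optional

class Ignorance(Enum):
    KNOWN_IGNORANCE   = "A knows that he/she does not know p"
    UNKNOWN_IGNORANCE = "A does not know that he/she does not know p"
    PSEUDOKNOWLEDGE   = "A thinks he/she knows p, but does not"

def classify(knows_p: bool, thinks_knows_p: bool, knows_not_knowing: bool) -> Optional[Ignorance]:
    """Toy classifier for Witte et al.'s tripartite distinction."""
    if knows_p:
        return None                        # genuine knowledge, not ignorance
    if thinks_knows_p:
        return Ignorance.PSEUDOKNOWLEDGE   # (iii)
    if knows_not_knowing:
        return Ignorance.KNOWN_IGNORANCE   # (i)
    return Ignorance.UNKNOWN_IGNORANCE     # (ii)

print(classify(knows_p=False, thinks_knows_p=False, knows_not_knowing=True))
```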

Our discussion thus far yields the following six-part matrix (Figure 5):

Figure 5: Different kinds of ignorance, based on Kelly's distinction between questions and answers and Witte et al.'s distinction explicated above.

Finally, to conclude this post: the discussion so far may constitute a compelling premise in an argument for a "transhumanist future." That is to say, the existence of innate "limits" on human cognition precludes Homo sapiens today from fully comprehending the massively complex system of sociotechnics upon which the cultural superstructure is built. We are, at least from one perspective, becoming increasingly ignorant both individually and collectively, and without knowledge of how the sociotechnical system works we cannot hope to control it, contain it, or use it for good. Cognitively "superior" posthuman creatures who can fathom advanced technology, therefore, may be required to reduce the probability of self-annihilation and global collapse.

Thus, ignorance theory has implications for transhumanism as a normative thesis about whether certain kinds of technological projects (robotics, specifically Strong AI) should be pursued. On my view, given the discussion above, they probably should be. But, of course, I may be ignorant.

"Knowledge is not happiness, and science but an exchange of ignorance for that which is another kind of ignorance
."

Key to Figure 3:
1755: A Dictionary of the English Language (Dr. Johnson)
1828: An American Dictionary of the English Language (Webster)
1860: An American Dictionary of the English Language (Webster-Mahn Edition)
1884: OED, First Edition
1890: Webster's International Dictionary of the English Language (Porter)
1900: Webster's International Dictionary of the English Language (republished with supplemental words)
1909: Webster's New International Dictionary (Harris & Allen)
1934: Webster's New International Dictionary, Second Edition (Neilson & Knott; contains many nonce words; is thus currently the largest English dictionary)
1961: Webster's Third New International Dictionary of the English Language, Unabridged
1989: OED, Second Edition
2005: OED (with added words)
2009: OED (with added words)
2037: OED, Third Edition

March 18, 2009

Four Kinds of Philosophical Fallibilism

(Words: ~694)
There are several ways a lexicon can grow. The most obvious is for novel words--neologisms, portmanteaus, etc.--to be added to later editions of dictionaries or to new dictionaries (see Figure 3 in "Towards a Theory of Ignorance"). Another way is for words already in the lexicon to acquire additional definitions, thereby becoming polysemous. This occurs, for example, with so-called "catachrestic" metaphors, which involve borrowing (often in a highly systematic manner) words from one domain and applying them to another. For example, the terminology of genetics is replete with words mapped into it from the domain of texts, as in: 'transcribe', 'translate', 'palindrome', 'primer', 'reading frame', 'library', etc. The point here is simply that 'fallibilism' is a highly polysemous term, whose meaning has arborized into a bushy semantic tree. The concept therefore requires disambiguation, which I attempt below.

To begin, in his "Transhumanist Values," Nick Bostrom defines (although not explicitly) the term 'philosophical fallibilism' as the "willingness to reexamine assumptions as we go along." This seems like a good "first-pass" definition, and it captures the spirit in which this blog is written. Indeed, the views here articulated are, with respect to popular transhumanism, often iconoclastic. For example, while transhumanists are generally the first to acknowledge the risks and dangers of anticipated future technologies (esp. those of the impending genetics, nanotechnology and robotics [GNR] revolution), nearly all accept the reality and goodness of "technological progress." In my view, the historical-anthropological facts simply do not support the techno-progressivism thesis.

As I argue in "Not the Brightest Species: The Blunders of Humans and the Need for Posthumans" (link forthcoming), the empirical data seem to substantiate the opposite hypothesis, which sees civilization as "regressive" (in the sense of "moving backwards" with respect to human well-being, health and felicity) in important respects. Nonetheless, I argue, one still can (and ought to) advocate the development of a technologically "enhanced" species of posthumans, who will be, by design, cognitively better able to understand, mitigate, and control the increasingly profound existential risks confronting intelligent life on earth. (One must not forget, of course, that most of these problems stem from "dual-use" technologies themselves of neoteric origin--that is, these problems are "technogenic.")

Although Bostrom's characterization is a good start to the lexicographic task of defining 'fallibilism', the concept is further analyzable. On the one hand, we may distinguish between "first-person" and "third-person" interpretations, where first-person fallibilism focuses on the subject him or herself and third-person fallibilism focuses on others. Cutting across this division, then, is a second distinction between "weak" and "strong" versions. The former asserts that it is always possible for one's beliefs to be wrong--that is, any given belief held by an individual might turn out false. The latter asserts, in contrast, that it is very probable that one's beliefs are wrong--that is, any given belief held by an individual is very likely false.

These two distinctions lead, in combination, to the following typology of fallibilism (Figure 1):

Figure 1: Four types of fallibilism, namely weak first-person; weak third-person; strong first-person; and strong third-person fallibilism.
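For clarity, here is the two-by-two typology spelled out (a minimal sketch; the one-line glosses are my own paraphrases of the weak/strong and first-/third-person distinctions drawn above):

```python
# The four types of fallibilism, as combinations of two distinctions.
fallibilism_types = {
    ("weak",   "first-person"): "any of my beliefs might turn out false",
    ("weak",   "third-person"): "any of another person's beliefs might turn out false",
    ("strong", "first-person"): "any of my beliefs is very likely false",
    ("strong", "third-person"): "any of another person's beliefs is very likely false",
}

for (strength, person), gloss in fallibilism_types.items():
    print(f"{strength} {person} fallibilism: {gloss}")
```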

An example of weak fallibilism comes from David Hume's so-called "problem of induction." According to Hume, inductive reasoning cannot yield epistemic certitude: no matter how many earth-like planets astronomers find to be lifeless, it could always be the case that the next earth-like planet observed will have life on it. Thus, no matter how many trillions of lifeless earth-like planets have previously been observed, it is in principle always possible that the generalization 'Earth-like planets within the observable universe are lifeless' will turn out false.

On the other hand, an example of strong fallibilism comes from Larry Laudan's so-called "pessimistic meta-induction" thesis. This argument extrapolates from the historical fact that virtually all scientific theories once accepted as true by the scientific community--some having considerable predictive power--have turned out false. Thus, Laudan concludes that our current theories--from quantum theory to quantal theory, from Darwin to Dawkins--are almost certainly false. They are destined to join phlogiston theory, caloric theory, impetus theory, and other opprobria of scientific theorization in the sprawling "graveyard" of abandoned theories.

I myself tend towards a strong first-person interpretation of fallibilism, while maintaining (although tentatively) a realist attitude towards science. In future posts, I will be elaborating on this position.