There are several ways a lexicon can grow. The most obvious is for novel words--neologisms, portmanteaus, etc.--to be added to later editions of dictionaries, or to new dictionaries altogether (see Figure 3 in "Towards a Theory of Ignorance"). Another way is for words already in the lexicon to acquire additional definitions, thereby becoming polysemous. This occurs, for example, with so-called "catachrestic" metaphors, which borrow words (often in a highly systematic manner) from one domain and apply them to another. The terminology of genetics, for instance, is replete with words mapped into it from the domain of texts: 'transcribe', 'translate', 'palindrome', 'primer', 'reading frame', 'library', and so on. The point here is simply that 'fallibilism' is a highly polysemous term, whose meaning has arborized into a bushy semantic tree. The concept therefore requires disambiguation, which I attempt below.
To begin, in his "Transhumanist Values," Nick Bostrom characterizes (though without an explicit definition) 'philosophical fallibilism' as the "willingness to reexamine assumptions as we go along." This seems like a good first-pass definition, and it captures the spirit in which this blog is written. Indeed, the views articulated here are often iconoclastic with respect to popular transhumanism. For example, while transhumanists are generally the first to acknowledge the risks and dangers of anticipated future technologies (especially those of the impending genetics, nanotechnology and robotics [GNR] revolution), nearly all accept the reality and goodness of "technological progress." In my view, the historical-anthropological facts simply do not support this techno-progressivist thesis.
As I argue in "Not the Brightest Species: The Blunders of Humans and the Need for Posthumans" (link forthcoming), the empirical data seem to substantiate the opposite hypothesis, which sees civilization as "regressive" (in the sense of "moving backwards" with respect to human well-being, health and felicity) in important respects. Nonetheless, I argue, one still can (and ought to) advocate the development of a technologically "enhanced" species of posthumans who will be, by design, cognitively better able to solve, mitigate, and control the increasingly profound existential risks confronting intelligent life on Earth. (One must not forget, of course, that most of these problems stem from "dual-use" technologies themselves of neoteric origin--that is, these problems are "technogenic.")
Although Bostrom's characterization is a good start to the lexicographic task of defining 'fallibilism', the concept can be analyzed further. On the one hand, we may distinguish between "first-person" and "third-person" interpretations, where first-person fallibilism concerns the subject's own beliefs and third-person fallibilism concerns the beliefs of others. Cutting across this division is a second distinction between "weak" and "strong" versions. The former asserts that it is always possible for one's beliefs to be wrong--that is, any given belief held by an individual might turn out false. The latter asserts, in contrast, that it is very probable that one's beliefs are wrong--that is, any given belief held by an individual is very likely false.
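The weak/strong contrast can be made explicit with a rough formalization (the notation is my own gloss, not Bostrom's and not standard in the fallibilist literature). Let B be the set of an agent's beliefs, let the diamond express epistemic possibility, and let Pr be the agent's rational credence:

\[ \text{Weak fallibilism:} \quad \forall b \in B, \; \Diamond \neg b \]

\[ \text{Strong fallibilism:} \quad \forall b \in B, \; \Pr(\neg b) \gg 1/2 \]

The first is merely a modal claim--falsehood is never ruled out--while the second is the much bolder probabilistic claim that falsehood is the likely outcome.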
These two distinctions lead, in combination, to the following typology of fallibilism (Figure 1):
Figure 1: Four types of fallibilism: weak first-person, weak third-person, strong first-person, and strong third-person.
An example of weak fallibilism comes from David Hume's so-called "problem of induction." According to Hume, inductive reasoning cannot yield epistemic certitude: no matter how many Earth-like planets astronomers find to be lifeless, for example, it could always be the case that the next Earth-like planet observed will have life on it. Thus, it is in principle always possible that the generalization 'Earth-like planets within the observable universe are lifeless' might turn out false, no matter how many trillions of lifeless Earth-like planets have previously been observed.
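In schematic form (a standard textbook rendering rather than Hume's own notation), the point is that no finite stock of positive instances deductively entails the corresponding universal generalization:

\[ \{F(a_1), F(a_2), \ldots, F(a_n)\} \nvdash \forall x \, F(x), \quad \text{for any finite } n \]

where F(x) may be read as 'x is a lifeless Earth-like planet' and a_1, ..., a_n are the planets observed so far.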
On the other hand, an example of strong fallibilism comes from Larry Laudan's so-called "pessimistic meta-induction" thesis. This argument extrapolates from the historical fact that virtually all scientific theories once accepted as true by the scientific community--some having considerable predictive power--have turned out false. Thus, Laudan concludes that our current theories--from quantum theory to quantal theory, from Darwin to Dawkins--are almost certainly false. They are destined to join phlogiston theory, caloric theory, impetus theory, and other opprobria of scientific theorization in the sprawling "graveyard" of abandoned theories.
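Laudan's inference has the shape of a statistical syllogism (my schematic reconstruction, not his own formulation). Let S be the class of past theories once accepted as true by the scientific community, and let T' be any current theory, assumed to be a typical member of the same class:

\[ \frac{\left| \{ T \in S : T \text{ turned out false} \} \right|}{|S|} \approx 1 \quad \Longrightarrow \quad \Pr(T' \text{ is false}) \approx 1 \]

The strength of the conclusion depends, of course, on the typicality assumption--that our current theories are not epistemically better situated than their predecessors.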
I myself tend towards a strong first-person interpretation of fallibilism, while maintaining (albeit tentatively) a realist attitude towards science. I will elaborate on this position in future posts.
March 18, 2009