Philosophical Fallibilism

February 4, 2011

Home page

See my homepage at www.philosophicalfallibilism.com!!

July 17, 2010

Notes to How Certain Should We Be?

[1] Another important thesis of Gubrud's post is that the Kurzweil/Moravec theory that the self is constituted by an abstract pattern that endures over time and the theory that the self is an immaterial soul are really one and the same theory. In the comments on the original article, Gubrud writes the following:

"Philippe - Christopher Hitchens is a pissant. You write:

Holding that the self is a pattern is definitely *NOT* the same as holding that the self is a soul.

So who's "underlining" here? My entire argument is that it is the same. Where's your "critique" of that? It's the same because "pattern" is presumed to be a thing that exists apart from the substance of the body and separable from it, transferable to another body, another substance. This is essentially a mythical, magical image of soul transfer. Well, I guess I'm just "underlining" my argument again. And I guess you've proven (somehow, ad hominem I suspect) that it just isn't worth your even bothering to critique it."

[2] From the lexicon: “A pretentious attitude of scholarship; superficial knowledgeability.”

[3] Recall, for example, that John Searle maintains that computers could not possibly be conscious, and Susan Schneider holds that the sort of non-destructive uploading described in the book Mindscan would fail to transfer the self. But their reasons are quite different from, and far more sophisticated than, Gubrud's.

[4] Note that the theory of evolution – of how evolutionary change actually happens – is distinct from the fact that it did occur. One could accept that evolution has occurred yet reject the Darwinian mechanism of natural selection in favor of a Lamarckian, or God-directed, one.

[5] Extrapolating from the history of science, one might even hold that most of our current theories are wrong, where “wrong” could mean either being incorrect (in which case the theory ought to be discarded) or merely being incomplete (in which case the theory need only be revised). But, of course, the possibility of a theory being wrong has no bearing on whether it is rational to accept it as true, given the evidence available at a given moment.

[6] If transhumanism is construed as below – that is, as a bipartite thesis about what will and what should be the case – then transhumanism would fail if the world turned out to be other than the way its descriptive component says it is. In other words, since ought implies can, if we can't effectively enhance the human organism, then there's no point in saying that we should.

One definition, from this paper of mine: "Transhumanism is a recent philosophical and cultural movement that has both descriptive and normative components: (1) the descriptive claim is that current and anticipated future technologies will make it possible to radically alter both our world and persons, not just by “enhancing” the capacities that we already have but also by adding entirely new capacities not previously had. (2) The normative claim is that we ought to do what we can to foment and accelerate the creation of such “enhancement” technologies, thereby converting the possibility of a “posthuman” future into an actuality."

[7] See my “Risk Mysterianism and Cognitive Boosters,” forthcoming in the Journal of Future Studies, for an argument to this effect.

June 20, 2010

Notes to Why "Why Transhumanism Won't Work" Won't Work

The article is now up on the IEET website, HERE. (A PDF version can be found here.)

Footnotes:


[1] Cognitive enhancements have the potential to significantly augment the cognitive capacities of the individual, thus (possibly, to some extent) closing the epistemic gap between what the collective whole knows and what the individual knows. At some point in the future, then, it may be that each individual knows as much as the entire group.

[2] See Jaynes, E.T. 2003. Probability Theory: The Logic of Science. Edited by G.L. Bretthorst. Cambridge: Cambridge University Press.

[3] Gubrud writes: “Since these multiple criteria, not all clearly defined, may sometimes conflict, or may tend to different formulations of ‘identity’ and its rules, philosophers have here a rich field in which to play. ‘Progress’ in this field will then consist of an endless proliferation of terms, distinctions, cases, arguments, counterarguments, papers, books, conferences and chairs, until all tenured positions and library shelves are filled, the pages yellowed, new issues come into vogue, and the cycle starts over. I come as a visitor to this field, and while I must admire the intricate Antonio Gaudi architecture of castles that have been raised from its sands, twisting skyward and tumbling over one another, my impulse is bulldoze [sic] it (see first sentence of this essay), flatten it out and start from scratch, laying a simple structure with thick walls no more than ankle-high above the ground, as follows.”

[4] Consider the opening sentence of Gubrud’s “Balloon” paper: “Physical objects exist, consisting of matter and energy, or any other physical substance that may exist, but please note that ‘information’ is not one; neither is ‘identity’.” Where to begin? I cannot think of a single philosopher – or, for that matter, scientist – who would argue that information doesn’t exist or is non-physical in nature. (Indeed, information theory is a part of physics.) Most philosophers today are ardent physicalists who see information as perfectly compatible with their metaphysical monism (which asserts that “everything is physical”).

Gubrud’s reflex here is, no doubt, to think: “Yeah, well, that doesn’t make sense to me. How could information really be physical? I mean, you can’t reach out and touch information…” I would encourage Gubrud not to leap to any conclusions; first try to understand why philosophers today take information to be physical (a crucial first step that Gubrud repeatedly fails to make). Then you can proceed to critique the thesis, if you’d like, once you know what that thesis is. (Perusing this article on physicalism would be a good start – but only a start.)

[5] Gubrud writes: “For transhumanism itself is uploading writ large.”

[6] Take note that philosophers typically distinguish between the qualitative and non-qualitative aspects of mentality; in Ned Block’s phraseology, the former is “phenomenal” consciousness and the latter “access” consciousness. Chalmers (1996) also emphasizes an exactly parallel distinction between "psychological" (or "functional") and "phenomenal" conceptions of the mind.

[7] Note that this computation may be of numerous different kinds; again, see this article.

[8] To be clear, functionalism takes mental states to be “ontologically neutral.” That is, while purely physical (e.g., neural) systems could indeed instantiate a given mental state, so could, in principle, an immaterial substance of some sort. All that’s relevant, according to the functionalist view, is the substrate's causal-functional properties.

[9] As Howard Robinson writes: “Predicate dualism is the theory that psychological or mentalistic predicates are (a) essential for a full description of the world and (b) are not reducible to physicalistic predicates. For a mental predicate to be reducible, there would be bridging laws connecting types of psychological states to types of physical ones in such a way that the use of the mental predicate carried no information that could not be expressed without it. An example of what we believe to be a true type reduction outside psychology is the case of water, where water is always H2O: something is water if and only if it is H2O. If one were to replace the word ‘water’ by ‘H2O’, it is plausible to say that one could convey all the same information. But the terms in many of the special sciences (that is, any science except physics itself) are not reducible in this way. Not every hurricane or every infectious disease, let alone every devaluation of the currency or every coup d'etat has the same constitutive structure. These states are defined more by what they do than by their composition or structure. Their names are classified as functional terms rather than natural kind terms. It goes with this that such kinds of state are multiply realizable; that is, they may be constituted by different kinds of physical structures under different circumstances. Because of this, unlike in the case of water and H2O, one could not replace these terms by some more basic physical description and still convey the same information. There is no particular description, using the language of physics or chemistry, that would do the work of the word ‘hurricane’, in the way that ‘H2O’ would do the work of ‘water’. It is widely agreed that many, if not all, psychological states are similarly irreducible, and so psychological predicates are not reducible to physical descriptions and one has predicate [or descriptive] dualism.”

[10] More generally, Gubrud seems especially susceptible to confusing terms with the entities signified by those terms. That is, Gubrud reasons that since there are two (or three, etc.) different terms in the discussion, then there must be two (or three, etc.) different referents. Consider, for example, the following passage from his Futurisms article:

"Thus Moravec advances a theory of
pattern-identity ... [which] defines the essence of a person, say myself, as the pattern and the process going on in my head and body, not the machinery supporting that process. If the process is preserved, I am preserved. The rest is mere jelly.
Not only has Moravec introduced 'pattern' as a stand-in for 'soul', but in order to define it he has referred to another stand-in, 'the essence of a person'. But he seems aware of the inadequacy of 'pattern', and tries to cover it up with another word, 'process'. So now we have a pattern and a process, separable from the 'mere jelly'. Is this some kind of trinity?"

See the first "rule for avoiding sciolism" mentioned in my article.

(Additional note: ontological dualism seems to imply descriptive dualism, but descriptive dualism does not necessarily imply ontological dualism.)

[11] Chalmers’ view is called “property dualism.” It holds that certain particulars have both physical and non-physical properties. In contrast, Cartesian substance dualism posits that those particulars themselves are non-physical in nature. My own tentative view is that property dualism is probably wrong, but that we (unenhanced humans) are simply "cognitively closed" to the correct answer. (This is McGinn's "transcendental naturalism.")

[12] As Georges Rey puts it, "consider some theory, H, about houses (which might state generalizations about the kinds of houses to be found in different places). The ontology of this theory is presumably a subset of the ontology of a complete physical theory, P: every house, after all, is some or other physical thing. But the sets of physical things picked out by the ideology of H -- for example, by the predicate "x is a house" -- may not be a set picked out by any of the usual predicates in the ideology of P. After all, different houses may be made out of arbitrarily different physical substances (straw, wood, bricks, ice, ...), obeying different physical laws. Houses, that is, are multiply realizable. To appreciate the generalizations of theory H it will be essential to think of those sundry physical things as captured by the ideology of H, not P. But, of course, one can do this without denying that houses are, indeed, just physical things."

[13] The answer to the question Why? here, on Chalmers' view, is that consciousness is simply a brute fact about the world in which we live. Psychophysical laws connecting matter and conscious states are fundamental laws, just like the laws of thermodynamics or motion. They are, as it were, the ultimate "unexplained explainers."

[14] As Schneider points out, patternism is thus a computationalist version of the "psychological continuity theory" of personal identity.

[15] This is, in my opinion, a rather interesting thought: the uploaded mind would indeed be psychologically continuous with me. Mind clones seem, I suppose, more intimately related than genetic clones (such as identical twins).

Back to the article -->

April 8, 2010

Blue Skies and Existential Risks

[This is a revised version of an article previously published on the website of the Institute for Ethics and Emerging Technologies (IEET).]

Basic research is what I'm doing when I don't know what I'm doing.
– Wernher von Braun


CERN's Large Hadron Collider (LHC), a product of the biggest Big Science project in human history, has recently been in the news for having “smashed beams of protons together at energies that are 3.5 times higher than previously achieved.” This achievement prompted me, once again, to reflect on my ambivalence towards the LHC. The feeling arises from a conflict between (a) my “epistemophilia,” or love of knowledge, and (b) specific moral considerations concerning what sorts of pursuits ought to have priority given the particular world we happen to inhabit. In explaining this conflict, I would like to suggest two ways the LHC's funds could have been better spent, as well as respond to a few defenses of the LHC.

Moral and Practical Considerations

In 2008, the former UK chief scientist Sir David King criticized the LHC for being a “blue skies” project[1], arguing that “the challenges of the 21st Century are qualitatively different from anything that we've had to face up to before,” and that “this requires a re-think of priorities in science and technology.” In other words, couldn't the >$6 billion that funded the LHC have been better spent on other endeavors, projects or programs?

I am inclined to answer this question positively: YES, the money could have been better spent. Why? For at least two reasons[2]:

(1) Morally speaking, there is an expanding manifold of “sub-existential risk” scenarios that have been and are being actualized around the globe – scenarios that deserve immediate moral attention and urgent financial assistance. Thus, one wonders about the moral justifiability of “unnecessary” research projects in the affluent “First World” when nearly 16,000 children die of avoidable hunger-related illnesses every day; when water pollution kills more humans than all violence worldwide; when unregulated pharmaceuticals pollute public drinking water; when the Great Pacific Garbage Patch, superfund sites and eutrophication threaten the very livability of our lonely planet in space.

Ascending from the “personal/local/global” to the “transgenerational” level of analysis, there exists a growing mass of increasingly ominous existential risks that demand serious scientific and philosophical study. Such risks are the most conspicuous reason why, as King observes above, the present moment in human history is “qualitatively different” from any prior epoch. Just 65 years ago, for example, there were only one or two “natural” existential risks. Looking at the present moment and into the near future, experts now count roughly 23 mostly anthropogenic types of existential risks (to say nothing of their tokens). Yet, as Nick Bostrom laments, “it is sad that humanity as a whole has not invested even a few million dollars to improve its thinking about how it may best ensure its own survival.” If any projects deserve $6 billion, in my opinion, it is those located within the still-incipient field of “secular eschatology.” More on this below.

(2) Practically speaking, one could argue that LHC's money could have been better spent developing “enhancement” technologies. Consider the fact that, if “strategies for engineered negligible senescence” (SENS) were perfected, the physicists now working on the LHC could have significantly more (life)time to pursue their various research projects. The development of such techno-strategies would thus be in the personal interest of anyone who, for example, wishes to see the protracted research projects on which they're working come to fruition. (As one author notes, the LHC extends beyond a single professional career[3].) Furthermore, healthspan-extending technologies promise to alleviate human suffering from a host of age-related pathologies, thus providing a more altruistic public good as well.

A similar argument could apply to the research domain of cognitive enhancements, such as nootropics, tissue grafts and neural implants. Again, in terms of the benefits for science, “a 'superficial' contribution that facilitates work across a wide range of domains can be worth much more than a relatively 'profound' contribution limited to one narrow field, just as a lake can contain a lot more water than a well, even if the well is deeper.”[4] Cognition-enhancing technologies would thus provide an appreciable boost not just to research on fundamental physics issues – the first billionth of a second after the Big Bang, the existence of the Higgs boson particle, etc. – but to the scientific enterprise as a whole.

Second, there may exist theories needed to understand observable phenomena that are in principle beyond our epistemic reach – that is, theories to which we are forever “cognitively closed.” The so-called “theory of everything,” or a theory elucidating the nature of conscious experience, might fall within this category. And the only plausible route out of this labyrinth, I believe, is to redefine the boundary between “problems” and “mysteries” via some techno-intervention on the brain. Otherwise, we may be trapped in a state of perennial ignorance with respect to those phenomena – floundering like a chimpanzee trying to conjugate a verb or calculate the GDP of China. Yet another reason to divert more funds towards “applied” enhancement research.

Furthermore, upgrading our mental software would augment our ability to evaluate the risks involved in LHC-like experiments. Physicists are, of course, overwhelmingly confident that the LHC is safe and thus will not produce a strangelet, vacuum bubble or microscopic black hole. (See the LSAG report.) But it is easy – especially for those who don't study the history and philosophy of science – to forget about the intrinsic fallibility of scientific research. When properly contextualized, then, such confidence appears consternatingly less impressive than one might initially think.

Consider, for example, Max Planck's oft-quoted comment that “a new scientific truth does not triumph by convincing its opponents and making them see the light, but rather because its opponents eventually die, and a new generation grows up that is familiar with it.” Thus, for all we know at present, the next generation of physicists, working within a modified framework of more advanced theory, will regard the LHC's risks as significant – just as the lobotomy, for which Egas Moniz won science's most prestigious award, the Nobel Prize, in 1949, is now rejected as an ignominious violation of human autonomy. This point becomes even more incisive when one hears scientists describe the LHC as “certainly, by far, the biggest jump into the unknown” that research has ever made. (Or, recall Arthur C. Clarke's famous quip: "When a distinguished but elderly scientist states that something is possible, he is almost certainly right. When he states that something is impossible, he is very probably wrong.")

Critics of Critics

In response to such criticism, many scientists have vehemently defended the LHC. Brian Cox, for example, ripostes “with an emphatic NO” to contrarians who suggest that “we [can] do something more useful with that kind of money.” But Cox's thesis, in my opinion, is not compelling. Consider the culmination of Cox's argument: “Most importantly, though, the world would be truly impoverished without all the fundamental knowledge we've gained” from projects like the LHC[5]. Now, at first glance, this claim seems quite reasonable. Without the “fundamental knowledge” provided by Darwinian theory, for example, it would be difficult (as Dawkins contends) to be an “intellectually fulfilled atheist.” This is an instance of science – by virtue of its pushing back the “envelope of ignorance” – significantly enriching the naturalistic worldview.

But Cox's assertion could also be construed as rather offensive. Why? Because the fact is that much of the world is quite literally impoverished. Thus, from the perspective of millions of people who struggle daily to satisfy the most basic needs of Maslow's hierarchy – people who aren't fortunate enough to live lives marked by “the leisure of the theory class,” with its “conspicuous consumption” of information – Cox's poverty argument for blue skies research is, at worst, an argument from intellectual vanity. It considers the costs of postponing physics research from a rather solipsistic perspective; and considering issues from the perspectives of others is, of course, the heart of ethics[6]. Surely if the roles were reversed and advocates of the LHC suddenly found themselves destitute in an “undeveloped” country, they would agree that the material needs of the needy (themselves) should take precedence over the intellectual needs of the privileged.

Furthermore, consider Sir Martin Rees' defense. “It is mistaken to claim,” Rees argues, “that global problems will be solved more quickly if only researchers would abandon their quest to understand the universe and knuckle down to work on an agenda of public or political concerns. These are not 'either/or' options – indeed, there is a positive symbiosis between them.”

But, in my view, the existence of such symbioses is immaterial. The issue instead concerns how resources, including money and scientists, are best put to use. A retort to Rees could thus go as follows: there is a crucial difference between making every effort, and making merely some effort, to exorcise the specter of existential and other related risks that haunts the present millennium. Thus, if one is seriously concerned about the future of humanity, then one should give research directly aimed at solving these historically unique conundra of eschatological proportions strong priority over any project that could, at best, only “indirectly” or “fortuitously” improve the prospects of life on Earth.

Rees' defense of the LHC is perplexing because he is (at least ostensibly) quite concerned with secular eschatology. In his portentous book Our Final Hour, Rees argues that “the odds are no better than fifty-fifty that our present civilisation on Earth will survive to the end of the present century.” Similar figures have been suggested by futurologists like Bostrom, John Leslie and Richard Posner[7]. Thus, given such dismal probability estimates of human self-annihilation, efforts to justify the allocation of limited resources for blue skies projects appear otiose. As the notable champion of science, Bertrand Russell, once stated in a 1924 article inveighing against the space program, “we should address terrestrial problems first.”

In conclusion, it should be clear that the main thrust of my criticism here concerns the moral issue of existential risks. The present situation is, I believe, sufficiently dire to warrant the postponement of any endeavor, project or program that does not have a high probability of yielding results that could help contain the “qualitatively different” problems of the 21st century. This means that the LHC should be (temporarily) shut down. But even if one remains unmoved by such apocalyptic concerns, there are still good practical reasons for opposing, at the present moment, blue skies research: money could be better spent – or so the argument goes – developing effective enhancement technologies. Such artifacts might not only accelerate scientific “progress” but help alleviate human suffering too. Finally, I have suggested that some counterarguments put forth in defense of the LHC do not hold much water.

If we want to survive the present millennium, then we must, I believe, show that we are serious about solving the plethora of historically unique problems now confronting us.

[1] And as Brian Cox states, “there can be no better symbol of that pure curiosity-driven research than the Large Hadron Collider.”
[2] These are two distinct reasons for opposing the LHC – reasons that may or may not be compatible. For example, one might attempt to use the moral argument against the enhancement argument: spend money helping people now, rather than creating “enhancements” with dangerous new technology. Or, one might, as Mark Walker does, argue that the development of enhanced posthumans actually offers the best way of mitigating the risks mentioned in the moral argument.
[3] See this article, page 6.
[4] From this article by Bostrom.
[5] Space prevents me from considering Cox's additional arguments that “the world would be a far less comfortable place because of the loss to medicine alone, and a poorer place for the loss to commerce.” I would, to some extent, controvert these assertions as well.
[6] This is the so-called “moral point of view.”
[7] Although Posner does not give an actual number.

March 28, 2010

Two kinds of posthumans

Posthumans are future beings whose basic capacities – with respect to healthspan, cognition and emotion – greatly exceed those of present-day humans. Furthermore, on this conception of the term, posthumans may or may not be "phylogenetically" related to humans: they may be completely synthetic beings, such as strong AI systems, rather than biotechnological hybrids (via cyborgization).

At the risk of silliness, one might then distinguish between *post*-humans and post-*humans*: the former would refer to a posthuman entity that is non-human in nature, while the latter to a posthuman entity that has (or had in its past) biological components. An android would count as a *post*-human, while an advanced cyborg would count as a post-*human*.

For more on the relation between cyborgs and posthumans, see my most recent article on the IEET website.

March 22, 2010

Intelligence and Progress

New post up on the IEET website entitled "If Only We Were Smarter!" Has received far more hits than expected. Maybe the counter is malfunctioning?


January 6, 2010

Niche Construction Revisited

Technology and Human Evolution

Why technology in the first place? The answer, anthropologically and philosophically, revolves around humans relating to their environment. –Don Ihde

We have modified our environment so radically that we must now modify ourselves in order to exist in this new environment. –Norbert Wiener

Many theorists have metaphorized the development of technology as a kind of evolution; thus, one talks about “the evolution of technology.” As far as I know, Karl Marx was the first to suggest a Darwinian reading of the history of technology (in Das Kapital [1867]), but one finds the idea in work by contemporary techno-theorists too, such as Kevin Kelly in his TED talk. While such analyses can be, at times, intriguing, I am much more interested in how technology has influenced the evolution of humans in the past 2.6 million years (dating back at least to Homo habilis). In other words, I would like to understand technology along the diachronic axis not as a separate phenomenon – one that may or may not undergo a process analogous to Darwinian selection – but rather as a phenomenon constitutive of human evolution itself.

For example, anthropologists hypothesize that the creation of early lithic technologies had an amplificatory effect on human intelligence: as our ancestors came to rely on such technologies for survival, those with greater cognitive powers (to fashion such lithics) were naturally selected for. This established a positive feedback loop such that intelligence begat intelligence. Thus, in this way, human-built artifacts actually mediated the evolutionary process of natural selection to bring about more “encephalized” (bigger-brained) phenotypes.

In the literature on evolution, a new school of thought has recently emerged that rejects the standard Darwinian (or neo-Darwinian) model – a model in which organisms are always molded to fit their environments, in which causation extends unidirectionally from the environment to the organism. In contrast to this understanding of adaptation, “niche constructionists” argue that organisms actually make the environments in which they live and to which they are adapted. At the most passive end of the constructionist spectrum, simply being a “negentropic” organism far from thermodynamic equilibrium changes various factors in the environment, while at the most active end one finds Homo sapiens, a unique species that has profoundly altered the environment in which it (and most other Holocene organisms) exist (or once did).

Thus, niche construction theory explicitly brings into its theoretical view the human creation of technology – specifically, those artifacts that have in some way helped “construct” the niches that we occupy. While this is a good theoretical start (although not all biologists, including Dawkins, have jumped on the niche constructionist bandwagon), niche construction theory seems to neglect a crucial phenomenon relating to technology – a phenomenon that might be called “cyborgization” or, more prosaically, “organism construction” (on the model of “niche construction”).

To motivate this point, let me back up for a moment. First, note that explanation in biology is paradigmatically causal (rather than non-causal, as in nomological explanations citing the second law of thermodynamics). Thus, since the standard model of Darwinian evolution sees causation as unidirectional, from the environment to the organism, it follows that explanations of organismal adaptation entail specifying an environmental factor that has, over transgenerational time, brought about a change in the corresponding organismal feature. Some philosophers have typologized this kind of explanation as “externalist,” since it is the selective environment external to the organism that accounts for the organism’s adaptedness to that environment.

But niche constructionists think that there is another type of explanation for organismal adaptation – a “constructive” explanation. According to this view, organismal features could complement or match the relevant environmental factors not because of natural selection, but because the organism itself modified those factors. While in many cases this modification is inadvertent (see the example of the Kwa-speaking yam farmers), humans are unique in the radical extent to which we have intentionally modified the environment. Back to this in a moment.

So, the picture sketched thus far looks like this. Fact: organisms are generally well-adapted to their environments (the explanandum). But why? According to niche constructionists, and in contrast to traditional neo-Darwinians, either of the following two phenomena might have occurred (these are not mutually exclusive): (i) natural selection might have intervened to bring about an adaptive change in an organismal feature to match an environmental factor, or (ii) the organism might have “constructed” its niche to make the relevant environmental factors complement its own features. Since causation here is bidirectional, causal explanation of adaptation therefore swings both ways – from the environment to the organism (externalist) and from the organism to the environment (constructive).

But what seems to be missing from this picture, at least when focusing on Homo sapiens, is the use of technology to artificially extend, substitute and enhance features of the human organism itself, for the purpose of increasing our complementarity to the increasingly artificial milieu in which we live. That is to say, niche constructionists only explicitly recognize natural selection as bringing about changes in organismal features. On reflection, though, it seems transparently clear that we humans have largely usurped the role of natural selection by technologically modifying our own behaviors, morphology and physiology – i.e., our phenotypes. The pervasive artifactual metaphors of function, mechanism, design, etc. as well as the agential metaphor of natural selection, are all being gradually replaced by literal functions, by literal mechanisms, by a literal engineer.

While some examples of “organism construction” are highly intuitive, such as neural implants and prosthetic limbs, I would like to intrepidly venture beyond our pre-theoretical intuitions and suggest that entities like the automobile might, under certain conditions, actually count as part of the (technologically-modified) human organism itself. For example, I see the automobile as a case in which engineers intervened to “construct” the human organism for the purpose of adaptively modifying it to complement a very specific selective environment, namely the road. One might therefore say that the human-automobile system is adapted to the road rather like the earthworm is adapted to its environment, which also turns out to be thoroughly constructed.

If one thinks this is a giant conceptual leap to an implausible picture of human evolution, consider the following: since the late nineteenth century, theorists have repeatedly characterized technologies as “extensions of man” (in Marshall McLuhan’s words); in his 1877 book Grundlinien einer Philosophie der Technik, the first philosopher of technology, Ernst Kapp, termed this phenomenon “organ projection.” More recently, some philosophers of mind (most notably Andy Clark) have argued that the boundary of the mind – and indeed the self too – is not demarcated by “skin and skull,” as our pre-theoretical intuitions might suggest. Rather, these philosophers claim that when specific criteria relating to (e.g.) function and reliability are satisfied, technological entities like notepads and computers literally become part of the individual’s cognitive system – that is, they become components internal to the individual’s mind and self. In a similar spirit, the physiologist J. Scott Turner has defended the conceptual-metaphysical thesis that organisms are fuzzily bounded, and still other theorists have considered the possibility of “boundary shifting,” as in the peculiar case of water crickets.

This being said, a common objection to understanding artifactual entities like automobiles, clothes, glasses, and so on, as instances of “organism construction” – that is, as extended adaptations of a sort – is that many technological modifications involve transient and reversible changes to human behavior, morphology and physiology. Unlike the evolutionary acquisition of a bigger brain, for example, the “automobilic phenotype” is expressed only temporarily. Rather than take this as a problematic datum, though, I see it as suggesting a novel interpretation of what biologists have called phenotypic plasticity, or the ability of an organism to manifest particular phenotypic features in response to specific environmental factors on an ontogenetic timescale. As Darwin once wrote: “I speculated whether a species very liable to repeated and great changes of conditions might not assume a fluctuating condition ready to be adapted to either condition.”

This is, in fact, precisely what one finds in our highly composite, artificialized world – that is, modernity is a complex mosaic of interlocking and disparate environmental conditions, each of which contains its own peculiar factors that complement oftentimes very different features of the (technologized) organism. The point here is twofold: (i) it seems undeniable that our contemporary environment is not homogeneous but highly heterogeneous in nature, and (ii) it also seems obvious that no single set of organismal features – whether technologically modified or not – is sufficiently adapted to all of these disparate conditions. Thus, being liable to repeated and great changes of conditions, the modern human assumes a fluctuating condition through the use of technology, and therefore becomes ready to be adapted to all of the many conditions that he or she may encounter.

In sum, we humans have increasingly become adapted to our environments through active human intervention – that is, through technological modifications targeting both ourselves and our surroundings. While niche construction theory explicitly recognizes the latter category of techno-modification, it seems to problematically neglect the former. This is not a trivial lacuna, in my opinion, especially with all the talk in bioethics and biopolitics today about the creation of “enhancement” technologies, i.e., technologies that aim to augment some feature of the human organism or add entirely new features or capacities to its phenotypic repertoire. Thus, for these reasons, it seems that the niche constructionist framework ought to be expanded into a dual constructionist account of human evolution, even if this requires us to rethink inveterate concepts like phenotypic plasticity.

November 16, 2009

Conservapedia, "The Trustworthy Encyclopedia"

Here is my experience with Conservapedia.com. The page on "atheism and beliefs" states the following:

--------------------------

Religion and God
  • Comprehensible God: extending from an arrogance in their own reason, atheists believe that if God exists, He and His works must be comprehensible. Therefore, they argue that God does not exist from paradoxes and such arguments as the Problem of Evil, which God transcends.[9]
  • Disbelief from silence: atheists believe that God does not exist, merely because there is no absolute proof that he does exist, a logical fallacy. Atheists believe that science disproves God[10] but have no actual evidence that this is the case.
  • Moral superiority: atheists believe that religion causes strife and that atheists are inherently morally superior to theists. This flies in the face of actual evidence, which shows that atheists are markedly less generous than theists.[11]
  • Moral relativism: atheists believe that no absolute morals exist, God-inspired or otherwise, and thus rely on vague, transient, and corrupting notions of morality.[12]
  • Satan: Atheists deny the existence of Satan, while simultaneously doing his work.
  • Superior intellect: atheists believe that atheists are inherently more intelligent than theists, and some even consider this an "inevitable product of ongoing social selection."[13]
--------------------------

What really caught my eye was the last feature attributed to atheists, namely superior intellect. As a matter of fact, there exists some good empirical data on this issue (which is, indeed, an empirical issue). So, in the interest of intellectual honesty and Truth, I thought I'd add a few citations to this misleading bullet point, to show that the idea of a negative correlation between intelligence and religiosity needn't be a mere belief held by atheists. Instead, it is a hypothesis confirmed, albeit tentatively, by the empirical data. The resulting emended bullet point read as follows:
  • Superior intellect: atheists believe that atheists are inherently more intelligent than theists, and some even consider this an "inevitable product of ongoing social selection."Gordon Stein, ''An Anthology of Atheism and Rationalism,'' 164. In fact, a recent peer-reviewed study (which confirms a number of prior peer-reviewed studies), found a strong negative correlation between IQ and religious belief.[http://dailycow.org/system/files/article_0.pdf] Similarly, it was reported in ''Nature'' that only 7% of "great" scientists today profess "personal belief" in a supernatural deity.[http://www.stephenjaygould.org/ctrl/news/file002.html] The connection between intelligence and atheism, therefore, appears not only to be statistically strong but to be getting stronger, at least according to the available data published in high-profile academic journals like ''Nature''.
I also added a comment on the "Talk page" explaining why I added these few extra sentences. I was not at all unreasonable, and indeed I stated that, in my opinion, the bullet point as it stood suggested that "superior intellect" is a mere belief of atheists, arrogant and dogmatic as they are, when in fact a number of respectable academic studies have confirmed that atheists are generally more intelligent, educated, and so on.

What was the response? Incredibly, not only (i) were my comments on the "atheism and beliefs" page deleted (of course), but (ii) my comment on the "Talk" page was also expunged, (iii) my user account was permanently deleted (one must create an account to edit pages), and (iv) my IP address was permanently blocked! (See screen shot below.) In other words, from my computer at home I can never create another account to edit Conservapedia.com.

To be fair, all I did was add relevant data with citations -- indeed, citations of papers, one of which was published in arguably the most prestigious science journal in the world (i.e., Nature). What an extreme response to my edits, one that seems -- at least in my opinion -- to be wholly incommensurate with the "crime." Very frightening, if you ask me.

Finally, I should add that the editor who blocked me is Andy Schlafly, the proud Christian conservative founder of Conservapedia.com. His username is Aschlafly.

A screen shot of the "Talk page" after my comments were deleted and my IP address permanently blocked.

November 15, 2009

Technological Neutralism in Action

"I am making a nuclear weapon. I really don't see a problem with this because, like most other members of the NRA, I hold a neutralist view of technology: technology is a mere tool, neither good nor bad in itself. The morality of an artifact depends on how we use it. Thus, my maxim is this: Nuclear weapons don't kill people, people kill people. As far as I am concerned, everyone should have the right to own their own nuclear weapon. That should be a constitutional amendment. Nuclear weapons are neither good nor bad -- what matters is how they are used, either for good or bad. Let people have their guns! Let people have their weapons! Technology doesn't cause harm, humans do!!"

November 1, 2009

Intelligent Design and the Atheist's Nightmare

I was thinking about Intelligent Design the other day when I was suddenly interrupted by the hiccups. I then got a cramp in my leg, my foot twitched, and my ears began to ring (as they sometimes do). After taking some Sudafed to alleviate my allergies (especially to pollen), I spent the evening trimming my unibrow and clipping my fingernails. I also took a shower, lest I start to stink!! Fortunately, though, God made bananas just right for eating.




October 12, 2009

A Difference Between Computers and Humans

"We [humans] are qualitative geniuses, but quantitative imbeciles. Computers are just the opposite." -- paraphrasing Christopher Cherniak, philosopher and neuroscientist at the University of Maryland. Or, in the words of Andy Clark, humans are "good at frisbee, bad at logic."

October 9, 2009

Jesus the Atheist

Part 1. Many Christians believe that Jesus was both fully human and fully divine. Is this a coherent doctrine, though? Consider this argument:

P1: An essential feature of the human predicament is not knowing with absolute certitude whether or not God exists. Indeed, this feature is not only not knowing for sure whether a God exists, but also not knowing which one exists if He does (viz., is the existent Deity that of Islam, or of Christianity, etc.?). (Note that while neither the theist nor the atheist can answer the question of God's existence with absolute certitude, one can still assign a probability value to the corresponding proposition, given the available evidence. As far as I can tell, the probability of any God existing seems rather low, at least at the moment.)

P1.2: In order to be fully human, therefore, it seems that the individual in question must be able to legitimately doubt the existence of God. In fact, one finds such doubt not just among atheists, but among the most pious advocates of Christianity as well (e.g., Mother Teresa).

P2: As a fully divine being (Hebrews 4:15, John 1:14, 1 John 4:1-3, 2 John 7-11, etc.), Jesus could not possibly have doubted the existence of God. Indeed, he knew with absolute certitude not only that God exists, but that this existent God was/is that of the Bible (although couching it like this is a bit anachronistic, given that the Bible was assembled after Jesus' death).

C: It follows, therefore, that Jesus could not have been fully human since, as a fully divine being, doubting the existence of God for him would have been metaphysically impossible.
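(For those who like to see the skeleton of an argument laid bare, here is a rough schematic of the above -- just my own shorthand, not anything canonical. Let j name Jesus, let g be the proposition that God exists, and let the diamond mark metaphysical possibility:

\[
\begin{array}{ll}
\text{P1/P1.2:} & \forall x\,\big(\mathrm{FullyHuman}(x) \rightarrow \Diamond\,\mathrm{Doubts}(x,g)\big) \\
\text{P2:} & \mathrm{FullyDivine}(j) \rightarrow \neg\Diamond\,\mathrm{Doubts}(j,g) \\
\text{Assumption:} & \mathrm{FullyDivine}(j) \\
\text{C:} & \therefore\ \neg\mathrm{FullyHuman}(j)
\end{array}
\]

If the premises hold, the conclusion follows by modus tollens.)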

Part 2. This also puts Jesus' passion in dubious light, I believe. There are, roughly speaking, two reasons that people fear death: (1) one might fear the pain associated with the act or process of dying; and (2) one might fear the possibility of spending eternity in hell (this is presumably what leads to death-bed conversions). If Jesus were fully divine and couldn't have doubted the existence of God -- not to mention the eternal fate of His soul -- then Jesus wouldn't have been preoccupied by the second source of death-related anxiety just specified. What a relief for Jesus Christ! A couple days of extreme human pain ending in -- He knew for sure -- eternal life in heaven. That's not too bad an epistemic situation to be in, if you ask me.

But what about the first source? Interestingly, psychological studies have shown that pain perception can be modulated by one's situation, or understanding of the situation. For example, soldiers in WWI dealt with the pain of losing a limb amazingly well, as Melzack and Wall (1984) report. Putting myself in Jesus' place, then, I find it rather difficult to believe that I would really be suffering all that much during my crucifixion. How could I, after all, knowing with absolute certitude that beyond this momentary episode of human pain lies eternal life in indescribable bliss? And indeed, if Jesus couldn't have been fully human, in what sense could He have really experienced human pain?

Ted Johnston writes HERE: "That Jesus is both God and human is a mystery beyond our limited experience." Falling back on the "mystery" of Christian doctrine is, of course, one of the most intellectually dishonest and facile moves one could make. It is tantamount to giving up on making clear a muddy issue. Saying that "it is a deep, spiritual mystery how the invisible elephant living in my closet manages to be so quiet when others are around" is patent nonsense. Mystery is not an excuse to keep believing some proposition p, but rather a reason to reject belief in p. In contrast, science constitutes a highly sophisticated strategy (as Godfrey-Smith calls it) for converting mystery into understanding -- that is, for transforming nescience (ignorance) into science (knowledge). As fallible as science may be (e.g., study the history of science and then extrapolate to currently accepted paradigms), it nonetheless appears to be by far the best mode of acquiring knowledge about our universe that we have.

Part 3. Finally, a crucial point about atheism. Atheism is not a dogma. It is the conclusion of a logical argument that begins with the available empirical evidence. I don't think the prominent exponents of "New Atheism" emphasize enough that, as intellectually honest individuals, if God were to suddenly make Himself known, or provide some irrefragable bit of evidence for His existence, then these paragons of atheistic thought would promptly abandon their Godless worldviews and adopt a thoroughly theistic posture. Indeed, this would be the scientific thing to do.

Put differently, I -- as an atheist -- would quite happily convert to theism if only there were sufficient evidence to support the theistic hypothesis. As far as I can tell, though -- maintaining a scientific stance that puts truth before what I want or desire to be the case -- no such evidence has been adduced, or even seems adducible. But I am always open to such theism-supporting evidence. The flexibility that intellectual honesty entails thus constitutes a significant difference between the rational atheist and the dogmatic zealot.

October 3, 2009

Last words: "God is love"!!

September 28, 2009

New Link to an Old Song

New link to an old song HERE. Who can make a music video for it?

September 21, 2009

Balaam, Jonas and Jesus

Forthcoming in American Atheist magazine.
The Christian Cave of Shadows

September 7, 2009

X is too complicated to explain

Saying that some theory or idea X is complicated -- too complicated to explain at the moment, for example -- can mean one of two things: first, X may be convoluted in the sense that it involves (or is composed of) many different parts, and thus requires significant time to explain, without X itself being abstruse. Or second, X may be abstruse, or difficult to understand, without being composed of many different parts. For example, Lakoff/Johnson's conceptual metaphor theory is, in my opinion, not particularly difficult to understand, but it is nonetheless difficult to explain to someone, say, who's never heard of it before, simply because it is composed of manifold theses and subtheses; one must explain traditional philosophical views of objectivity, the Cartesian notion of a disembodied mind, the pleonastic "metaphor metaphor" at the center of their cognitivist framework, and so on. Yet another good example -- at least in my view -- is Darwin's theory of evolution by natural selection. On the other hand, "Russell's paradox" -- which brought the entire formidable edifice of Frege's logicism down -- is not an especially complicated idea, although most people do find it especially difficult to grasp. And then, of course, there are theories that are both convoluted and recondite, like string theory or Chomskyan linguistics. Alternatively, some points -- like the one made in this post -- are neither convoluted nor recondite.
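(For what it's worth, Russell's paradox itself fits in a single line; this is just the standard set-theoretic statement, included to show how little machinery it involves:

\[
R = \{x : x \notin x\} \;\Longrightarrow\; R \in R \leftrightarrow R \notin R.
\]

One definition and one biconditional -- nothing convoluted about it, even if wrapping one's head around it takes some effort.)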

August 1, 2009

Conceptual Metaphor and Embodied Truth

I recently sent Prof. Mark Johnson an email query about his (and George Lakoff's) view of "embodied truth." The trouble, it seemed to me -- and others, such as Steven Pinker in The Stuff of Thought -- is that if the embodied truth thesis is true and (as it claims) truth is always relative to some particular, metaphorically-structured understanding of "the situation" (as Lakoff/Johnson put it), then the truth of the embodied truth thesis itself must also be relative to some such understanding. In contrast, it seems as though Lakoff/Johnson are arguing that embodied truth is absolutely true, and conversely that the "absolutist" or "objectivist" conception of truth (they single out the correspondence theory) is objectively false. Below is the resultant email exchange. Please note that I did not ask Prof. Johnson's permission to post these emails; I have simply assumed that he would not object. Nonetheless, it behooves the reader to read his -- and my -- comments in a charitable manner. Finally, please feel free to add comments, or send me a helpful email if you'd like -- I must admit, I am still not convinced, as much as I'd like to be!

My Email: Prof. Johnson: I am currently reading your Philosophy in the Flesh with great interest, although -- to be frank, if I may -- I am having difficulty seeing how your theory of truth is coherent. A quick question, if you have a moment:

You (and Lakoff) write that "what we take to be true in a situation depends on our embodied understanding of the situation which is in turn shaped by all these factors [i.e., sensory organs, culture, etc.]" (p102). Assuming that this is true, it must be true according to your particular embodied understanding of the situation; and, furthermore, it being shaped by "all these factors" must also be true according to your particular embodied understanding. Why can't I rejoin that according to my particular embodied understanding, the proposition (for example) that "objects have properties objectively" is true -- that is, true in the very same sense as the proposition, stated on the same page as above, that "truth is not simply a relation between words and the world" (p102). I just cannot see how this is a tenable position (obviously -- and this is why I'm writing -- that may be because of my own intellectual shortcomings!). What is your response to this criticism?

Relatedly, I am unsure how your theory handles statements like "atoms contain electrons, protons and neutrons," which is surely not -- it seems to me -- metaphorical. That is just true, and true because it corresponds to an empirical fact. Indeed, the claim that "minds are computers" (or whatever the more sophisticated version would be) is supposed to be on a par with the assertion about atoms above -- even if it began as a conceptual mapping from computers to minds, a mapping with "heuristic" value. Might that statement (about minds and computers) turn out to be true in the same sense that "atoms contain electrons, [etc.]" is true, or that "the earth revolves around the sun" is true?

I apologize for a verbose email -- I am just really eager to know what you think about the "self-defeating" objection, etc. If only I could take a course with you! Thanks so much. Sincerely, Phil

Prof. Johnson's Reply: Phil, Notice the sentence you quoted: "what we take to be true in a situation depends on our embodied understanding . . ." Our point is that "truth" is just another concept like any other human concept, and so it is understood by structures that underlie our conceptual system, and those are grounded in our bodies and their interactions with their environments. An absolutist (objectivist) notion of truth, like the one you are pushing when you speak of scientific truths about electrons, says that truth is independent of our ways of understanding and making sense of things--that it is just a relation between propositions and mind-independent states of affairs.

But the history of the philosophy of science over the past thirty years (since Kuhn's Structure of Scientific Revolutions) has been one of coming to realize that science is a human endeavor for making sense of, and interacting in certain specified ways with, our environments, given our values and interests. What makes a scientific view "objective" (a word we shouldn't probably put any serious weight on) is that there is a history of methods of inquiry that articulate phenomena and give explanations according to shared assumptions, and these methods have proved very useful for our shared purposes. So, we think we've got the line on absolute truth. However, the history of science simply shows that this is not the case. People had methods for doing science in ancient Greece that worked in some ways, and not in others, but they got along well enough. We, today, are in a different place, with different conceptions of inquiry, method, and values (such as prediction, simplicity, generalization, elegance, coherence, and so forth--there is a vast literature on such values in science).

Moreover, there is a growing, and very large, body of literature showing that our most fundamental concepts in science (and in virtually every field and discipline) are defined metaphorically. We have thirty years of detailed analyses of the metaphorical structure of our key scientific and mathematical concepts. This is not a problem, but just an insight about how the human mind, at this stage of evolutionary development, makes fundamental use of metaphor. The literature is vast, but in Philosophy in the Flesh we give references in the topical bibliography at the end. I've also given references in my books The Body in the Mind (there is a chapter dealing with truth) and in The Meaning of the Body. For mathematics and metaphor, see Lakoff and Nunez, Where Mathematics Comes From. For psychology, see Raymond Gibbs, Embodiment and Cognitive Science. Then there's Turner and Fauconnier, The Way We Think. For science see Magnani and Nersessian (eds.), Model-Based Reasoning. There are literally scores of articles on the metaphorical structure of basic scientific concepts.

Mark

My Reply: Prof. Johnson: Thanks for your response a couple of weeks ago, and thanks for the suggested reading. I have perused a number of the books/papers you list, although I’ve not yet read your The Body in the Mind (it’s at the top of my reading list!). Thus, at the risk of asking a question that your book will clearly answer, my fundamental concern is this:

You and Lakoff seem to be arguing that the statement “the ‘absolutist’ conception of truth is false” is absolutely true. On your theory, though, this statement can only be true relative to your particular understanding of “the situation.” Thus, it cannot be absolutely true that the absolutist conception of truth is false (to put it in a slightly circumlocutory way). I definitely understand that, according to your embodied truth thesis, the statement that (e.g.) “the fog is in front of the mountain” is true only relative to some metaphorically structured understanding of the relevant state of affairs. But what about the statement “the embodied truth thesis is true”? Again, it seems that you and Lakoff are arguing that embodied truth is absolutely true, and thus that there really is no absolute truth – an absolutist claim!

Similarly, you state (below) that “the history of science simply shows that this is not the case.” Given embodied truth, I am trying to figure out exactly in what sense this statement is true. Presumably, it is true because it corresponds to the historical facts; but that can’t be right, since, on your view, the correspondence theory is false. Maybe the notion of the “stability” of truth comes in here – but I can’t find any detailed elucidation of “stable truth” in Philosophy in the Flesh (again, I look forward to reading The Body in the Mind). Indeed, I'm not sure I have any decent grasp of what exactly stability is.

The primary difficulty for me is the (no doubt objective) truth that your kind of relativism – i.e., that truth is always relative to some conceptual system (to quote from Metaphors We Live By) – is unavoidably self-defeating. There must be at least some absolute truth for your embodied truth thesis to be correct, right? And since that thesis denies that there is any absolute truth, its correctness would entail its own falsity.

Am I missing something here? Have I properly understood your views? Aren't you and Lakoff actually making absolutist claims about what is and isn't true? Phil

Prof. Johnson's Reply: Philippe, You will not find in anything George and I have written together any claim to absolute truth (or absolute anything, for that matter). When we say "the absolutist conception of truth is false", that is simply a summary statement for the arguments we have previously given to undermine any absolutist conception. Similarly, when we say "history of science simply shows that . . . ", this is a conclusion based on previous arguments we've given. In both cases, those arguments rested on assumptions we tried to make explicit. However, there is nothing absolute about any of those statements or assumptions. If, for example, you reject the conception of science that we spelled out in Philosophy in the Flesh, then you won't find our arguments compelling, because you won't accept our explanations of the phenomena, as we have articulated those phenomena. Just as Quine argued, nearly fifty years ago now, there is no part of any web of belief that is absolutely unshakeable or unrevisable, given certain conditions that might arise.

Teachers often challenge their relativistic-minded young students, students who boldly assert "Everything is relative", by pointing out that, if that is true, then their statement "everything is relative" is likewise relative, and so not absolutely true. This is the same form of argument you've raised regarding our reliance on certain assumptions and our claims about how certain bodies of scientific research are incompatible with certain philosophical views and claims. But, as I've just said, ANY argument I can frame will necessarily depend on certain assumptions, some of which might indeed be challenged under certain conditions. So, we are not making self-contradictory claims about the truth of what we say, but it would be burdensome to append to every sentence in which we make a strong claim, that that claim is predicated on assumptions X, Y, Z, . . . and a certain conception of science and various methods of the different sciences.

Mark

June 5, 2009

Review of Minds and Computers

I'm currently working on a book review for Techne: Research in Philosophy and Technology. The book, written by Professor Matt Carter, is entitled Minds and Computers: An Introduction to the Philosophy of Artificial Intelligence. So far -- I'm on the 7th chapter -- the book is very good; an interview with Prof. Carter about the book can be found here.

May 26, 2009

Personal Identity and Cognitive Enhancement

A very interesting talk given by Susan Schneider, professor of philosophy at the University of Pennsylvania. Her paper can be found here.

April 6, 2009

Interview with Nick Bostrom



An excellent interview with Nick Bostrom, Director of the Future of Humanity Institute at Oxford.

April 5, 2009

Appendix to "Towards a Theory of Ignorance"

(Words: ~1169)
In Towards a Theory of Ignorance, I adumbrated a theoretical account of human ignorance. I argued that a theory of ignorance is important, especially for a forward-looking movement like transhumanism, because of such phenomena as: the extraordinary growth of science since its Baconian origin in the seventeenth century; the fractal-like phenomenon of disciplinary and vocational specialization; the "breadth-depth trade-off" that constrains individual human knowledge; etc. Together, these phenomena might lead one to posit a kind of Malthusian principle concerning the epistemic relationship between the collective group and the individual person. Such a principle might be: the knowledge possessed by the individual grows at an arithmetical rate, while the knowledge possessed by the collective grows at a geometric rate. The result is an exponential divergence between the group's knowledge and the person's knowledge. As Langdon Winner writes: "If ignorance is measured by the amount of available knowledge that an individual or collective 'knower' does not comprehend, one must admit that ignorance, that is relative ignorance, is growing." Finally, I suggested (following Mark Walker) that these phenomena together constitute a good premise for arguing that we ought to develop a species of cognitively "enhanced" posthumans, who would be better "mentally equipped" to understand, mitigate, and control the negative externalities--most notably the existential risks--that result from our technological progeny.
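
To make the arithmetic concrete, here is a minimal sketch (in Python) of how Winner's measure of relative ignorance behaves under such a Malthusian principle; the growth rates and starting values are invented for illustration, not empirical estimates.

```python
# Illustrative only: the growth rates and starting values below are invented,
# not empirical estimates of anyone's actual knowledge.

individual = 100.0        # knowledge possessed by one person (arbitrary units)
collective = 100.0        # knowledge available to the group as a whole

linear_increment = 10.0   # arithmetical growth: fixed gain per generation
geometric_factor = 1.5    # geometric growth: fixed multiplier per generation

for generation in range(1, 11):
    individual += linear_increment
    collective *= geometric_factor
    # Winner's measure: relative ignorance is the available knowledge
    # that the individual "knower" does not comprehend.
    relative_ignorance = collective - individual
    print(f"gen {generation:2d}: individual={individual:8.1f}  "
          f"collective={collective:10.1f}  ignorance={relative_ignorance:10.1f}")
```

Even on these toy numbers the gap widens every generation; however generous one makes the arithmetical increment, the geometric term eventually dominates.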

There are at least two additional issues that are relevant to a theory of ignorance, but which I did not mention. I discuss these briefly below:

(1) In his The Mystery of Being, Gabriel Marcel distinguishes between a problem and a mystery. As the Princeton theologian Daniel Migliore puts it: "While a problem can be solved, a mystery is inexhaustible. A problem can be held at arm's length; a mystery encompasses us and will not let us keep a safe distance." This, of course, ties into our prior discussion of Nicholas of Cusa and "apophatic" theology: God is an incomprehensible mystery, definable only through negation--that is, by what He's not. Furthermore, the more one understands his or her deep and ineradicable ignorance about God, the more "learned" he or she becomes. This is Cusa's "doctrine of learned ignorance." Thus, the boundary between problems and mysteries marks the absolute limits of human knowledge: what lies before this boundary is in principle solvable, even if not yet solved; and what lies beyond it is in principle unsolvable, or completely inscrutable to begin with.

But the distinction between problems and mysteries is not found only in theology. Indeed, the linguist and polymath Noam Chomsky has championed a view of human mental limitations called "cognitive closure." (Note: one finds the same basic position in other works, such as Jerry Fodor's 2000 book, under the name "epistemic boundedness.") On this account, humans are in principle "cognitively closed" to mysteries, while problems are in principle epistemically accessible (indeed, that is how 'mystery' and 'problem' are defined). For example, the conundra of free will and consciousness are, according to Chomsky, both mysteries. Along these lines, a group of philosophers of mind have espoused a position called New Mysterianism, which states that humans will never fully understand the subjective or phenomenal aspect of consciousness (what Ned Block calls P-consciousness, as opposed to A-consciousness). This feature of conscious thought is often called qualia. Put differently, the connection between, or identity of, mind and matter stands to us roughly as the relation between mass and energy stood before 1905 (as the hypothesis would have sounded if "uttered by a pre-Socratic philosopher")--except that, on the New Mysterianist view, the breakthrough paper connecting the two will never be published.

Furthermore, as Daniel Dennett writes, Chomsky apparently sees the language organ as "not an adaptation, but... a mystery, or a hopeful monster." Thus, on this view, Darwinian natural selection has nothing to say about the evolutionary emergence of human natural languages. Dennett adds that the cognitive closure "argument is presented as a biological, naturalistic argument, reminding us of our kinship with the other beasts, and warning us not to fall into the ancient trap of thinking 'how like an angel' we human 'souls' are with our 'infinite' minds." Thus, the philosopher Colin McGinn writes that "what is closed to the mind of a rat may be open to the mind of a monkey, and what is open to us may be closed to the monkey." Interestingly, this seems to gesture at the evolution-based cognitive metaphorology of George Lakoff and Mark Johnson, who argue that humans have evolved conceptual mapping mechanisms for understanding more abstract domains of thought and experience in terms of more concrete ones. In other words, human cognition is highly limited: our only way to make sense of, for example, the emotion of love is in terms of more familiar activities like journeys. Thus, LOVE IS A JOURNEY, which yields linguistic expressions like "Look how far we've come," "It's been a long, bumpy road," "We're at a crossroads," etc.
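
As a rough illustration of what such a cross-domain mapping looks like when written out schematically, here is a toy Python rendering of LOVE IS A JOURNEY; the particular correspondences are my own paraphrase of the standard analysis, not Lakoff and Johnson's exact formulation.

```python
# A toy rendering of the LOVE IS A JOURNEY mapping: elements of the concrete
# source domain (journeys) are put in correspondence with elements of the
# abstract target domain (love). The pairings below are a paraphrase of the
# standard analysis, not an official or exhaustive list.

love_is_a_journey = {
    "the travelers":          "the lovers",
    "the vehicle":            "the relationship itself",
    "distance covered":       "progress made in the relationship",
    "obstacles on the road":  "difficulties the couple faces",
    "a crossroads":           "a decision about where the relationship goes",
}

expressions = [
    "Look how far we've come.",       # distance covered -> progress
    "It's been a long, bumpy road.",  # obstacles -> difficulties
    "We're at a crossroads.",         # crossroads -> decision point
]

for source, target in love_is_a_journey.items():
    print(f"{source:24s} -> {target}")
```

The point of the mapping, on Lakoff and Johnson's account, is that inferences licensed in the source domain (e.g., that travelers stuck at a crossroads must choose a direction) carry over to the target domain.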

Now, the problem-mystery distinction is of interest to transhumanism because the creation of superintelligent beings--either machines that can think or technologically "enhanced" human beings--would almost certainly redefine the boundaries between problems and mysteries, between those questions that are in principle answerable and those questions that we cannot even ask. Thus, not only would the development of a posthuman species have practical benefits (presumably in terms of reducing the probability of an existential disaster, for example), but it would also likely lead to the discovery and elucidation of arcana by which modern Homo sapiens cannot even be baffled, due to our ineluctable epistemic boundedness. Along these lines, Nick Bostrom has even suggested (although the citation eludes me at the moment) that his academic focus is primarily on futurological rather than philosophical matters because, once we create superintelligent machines, many of the persistent puzzles of philosophy will be quickly solved. (See this paper for more.)

(2) The second issue worth mentioning is sometimes called "the theory of rational ignorance," or simply rational ignorance. The idea here is that, given the increasingly complex informational environment enveloping the modern individual, it is sometimes rational to be ignorant about an issue X. That is to say, if the payoff of knowing about X is not worth the investment of time and effort required to learn about X, then it might be rational to be X-ignorant. (This can be understood, I believe, as either a normative theory--we ought to remain ignorant about certain things, given our "finitary predicament"--or a descriptive one--"people often rationally choose to remain ignorant of a topic because the perceived utility value of the knowledge is low or even negative.") As I understand it, rational ignorance is discussed in economics--specifically in public choice theory. Sadly (and indeed ironically), I am not qualified to discuss this theory in detail. Thus, the second point must end here--it's a point worth noting, but one not well understood by the author.
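
The underlying decision rule can at least be sketched, even if the public-choice details are beyond me. In the minimal Python illustration below, the topics and their cost/benefit figures are entirely invented.

```python
# Rational ignorance in toy form: it is rational to remain ignorant of a topic
# when the expected payoff of knowing about it does not cover the cost of
# learning about it. All figures below are invented for illustration.

def rational_to_learn(expected_benefit: float, learning_cost: float) -> bool:
    """Return True if acquiring the knowledge is worth its cost."""
    return expected_benefit > learning_cost

topics = {
    # topic: (expected benefit of knowing, cost of learning), arbitrary units
    "how my local ballot initiative works": (5.0, 2.0),
    "the entire federal tax code":          (3.0, 500.0),
    "public choice theory in detail":       (1.0, 40.0),
}

for topic, (benefit, cost) in topics.items():
    verdict = "worth learning" if rational_to_learn(benefit, cost) else "rationally remain ignorant"
    print(f"{topic}: {verdict}")
```

On this toy accounting, the last two topics are ones about which it is rational to stay ignorant--which is, of course, just the predicament the original post describes.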

In sum, then, a comprehensive theory of ignorance would account not only for the explananda discussed in the original post (some of which are listed above), but also for (1) the relation of both humans and posthumans to the problem-mystery distinction championed by luminaries like Chomsky, and (2) the rationality of remaining ignorant about specific issues, especially given the Malthusian principle of epistemic growth explicated in the first paragraph.