November 16, 2009

Conservapedia, "The Trustworthy Encyclopedia"

Here is my experience with Conservapedia.com. The page on "atheism and beliefs" states the following:

--------------------------

Religion and God
  • Comprehensible God: extending from an arrogance in their own reason, atheists believe that if God exists, He and His works must be comprehensible. Therefore, they argue that God does not exist from paradoxes and such arguments as the Problem of Evil, which God transcends.[9]
  • Disbelief from silence: atheists believe that God does not exist, merely because there is no absolute proof that he does exist, a logical fallacy. Atheists believe that science disproves God[10] but have no actual evidence that this is the case.
  • Moral superiority: atheists believe that religion causes strife and that atheists are inherently morally superior to theists. This flies in the face of actual evidence, which shows that atheists are markedly less generous than theists.[11]
  • Moral relativism: atheists believe that no absolute morals exist, God-inspired or otherwise, and thus rely on vague, transient, and corrupting notions of morality.[12]
  • Satan: Atheists deny the existence of Satan, while simultaneously doing his work.
  • Superior intellect: atheists believe that atheists are inherently more intelligent than theists, and some even consider this an "inevitable product of ongoing social selection."[13]
--------------------------

What really caught my eye was the last feature attributed to atheists, namely superior intellect. As a matter of fact, there exists some good empirical data on this issue (which is, indeed, an empirical issue). So, in the interest of intellectual honesty and Truth, I thought I'd add to this misleading bullet point a few citations, to show that the idea of a negative correlation between intelligence and religiosity needn't be a mere belief had by atheists. Instead, it is a hypothesis confirmed, albeit tentatively, by the empirical data. The resulting emended bullet point read as follows:
  • Superior intellect: atheists believe that atheists are inherently more intelligent than theists, and some even consider this an "inevitable product of ongoing social selection." [Gordon Stein, An Anthology of Atheism and Rationalism, 164.] In fact, a recent peer-reviewed study (which confirms a number of prior peer-reviewed studies) found a strong negative correlation between IQ and religious belief [http://dailycow.org/system/files/article_0.pdf]. Similarly, it was reported in Nature that only 7% of "great" scientists today profess "personal belief" in a supernatural deity [http://www.stephenjaygould.org/ctrl/news/file002.html]. The connection between intelligence and atheism, therefore, appears not only to be statistically strong but to be getting stronger, at least according to the available data published in high-profile academic journals like Nature.
I also added a comment on the "Talk page" explaining why I added these few extra sentences. I was not at all unreasonable, and indeed I stated that, in my opinion, the bullet point as it stood suggested that "superior intellect" is a mere belief of atheists, arrogant and dogmatic as they are, when in fact a number of respectable academic studies have confirmed that atheists are generally more intelligent, educated, and so on.

What was the response? Incredibly, not only (i) were my comments on the "atheism and beliefs" page deleted (of course), but (ii) my comment on the "Talk" page was also expunged, (iii) my user account was permanently deleted (one must create an account to edit pages), and (iv) my IP address was permanently blocked! (See screen shot below.) In other words, from my computer at home I can never create another account to edit Conservapedia.com.

To be fair, all I did was add relevant data with citations -- indeed, citations of papers, one of which was published in arguably the most prestigious science journal in the world (i.e., Nature). What an extreme response to my edits, one that seems -- at least in my opinion -- to be wholly incommensurate with the "crime." Very frightening, if you ask me.

Finally, I should add that the editor who blocked me is Andy Schlafly, the proud Christian conservative founder of Conservapedia.com. His username is Aschlafly.

A screen shot of the "Talk page" after my comments were deleted and my IP address permanently blocked.

November 15, 2009

Technological Neutralism in Action

"I am making a nuclear weapon. I really don't see a problem with this because, like most other members of the NRA, I hold a neutralist view of technology: technology is a mere tool, neither good nor bad in itself. The morality of an artifact depends on how we use it. Thus, my maxim is this: Nuclear weapons don't kill people, people kill people. As far as I am concerned, everyone should have the right to own their own nuclear weapon. That should be a constitutional amendment. Nuclear weapons are neither good nor bad -- what matters is how they are used, either for good or bad. Let people have their guns! Let people have their weapons! Technology doesn't cause harm, humans do!!"

November 1, 2009

Intelligent Design and the Atheist's Nightmare

I was thinking about Intelligent Design the other day when I was suddenly interrupted by the hiccups. I then got a cramp in my leg, my foot twitched, and my ears began to ring (as they sometimes do). After taking some Sudafed to alleviate my allergies (especially to pollen), I spent the evening trimming my unibrow and clipping my fingernails. I also took a shower, lest I start to stink!! Fortunately, though, God made bananas just right for eating.

October 30, 2009

October 12, 2009

A Difference Between Computers and Humans

"We [humans] are qualitative geniuses, but quantitative imbeciles. Computers are just the opposite." -- paraphrasing Christopher Cherniak, philosopher and neuroscientist at the University of Maryland. Or, in the words of Andy Clark, humans are "good at frisbee, bad at logic."

October 9, 2009

Jesus the Atheist

Part 1. Many Christians believe that Jesus was both fully human and fully divine. Is this a coherent doctrine, though? Consider this argument:

P1: An essential feature of the human predicament is not knowing with absolute certitude whether or not God exists. Indeed, this feature is not only not knowing for sure whether a God exists, but which one exists if He does (viz., is the existent Deity that of Islam, or Christianity, etc.?). (Note that while neither the theist nor the atheist can answer the question of God's existence with absolute certitude, one can still assign a probability value to the corresponding proposition, given the available evidence; a toy sketch of such an assignment follows the argument below. As far as I can tell, the probability of any God existing seems rather low, at least at the moment.)

P1.2: In order to be fully human, therefore, it seems that the individual in question must be able to legitimately doubt the existence of God. In fact, one finds such doubt not just among atheists, but among the most pious advocates of Christianity as well (e.g., Mother Teresa).

P2: As a fully divine being (Hebrews 4:15, John 1:14, 1 John 4:1-3, 2 John 7-11, etc.), Jesus could not possibly have doubted the existence of God. Indeed, he knew with absolute certitude not only that God exists, but that this existent God was/is that of the Bible (although couching it like this is a bit anachronistic, given that the Bible was assembled after Jesus' death).

C: It follows, therefore, that Jesus could not have been fully human since, as a fully divine being, doubting the existence of God for him would have been metaphysically impossible.
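
A quick technical aside on the parenthetical note in P1: here is a minimal sketch, in Python, of what "assigning a probability value given the available evidence" amounts to. All of the numbers below are invented purely for illustration; only Bayes' theorem itself is doing any work.

--------------------------
# A toy Bayesian update. The prior and likelihoods are invented
# purely for illustration; they are not estimates of anything.
prior = 0.5              # P(G): prior probability that God exists
p_e_given_g = 0.1        # P(E|G): probability of some evidence E, given G
p_e_given_not_g = 0.6    # P(E|~G): probability of E, given not-G

# Bayes' theorem: P(G|E) = P(E|G)P(G) / [P(E|G)P(G) + P(E|~G)P(~G)]
posterior = (p_e_given_g * prior) / (
    p_e_given_g * prior + p_e_given_not_g * (1 - prior)
)
print(f"P(G|E) = {posterior:.3f}")  # 0.143: with these numbers, E lowers P(G)
--------------------------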

Part 2. This also puts Jesus' passion in a dubious light, I believe. There are, roughly speaking, two reasons that people fear death: (1) one might fear the pain associated with the act or process of dying; and (2) one might fear the possibility of spending eternity in hell (this is presumably what leads to death-bed conversions). If Jesus were fully divine and couldn't have doubted the existence of God -- not to mention the eternal fate of His soul -- then Jesus wouldn't have been preoccupied by the second source of death-related anxiety just specified. What a relief for Jesus Christ! A couple of days of extreme human pain ending in -- He knew for sure -- eternal life in heaven. That's not too bad an epistemic situation to be in, if you ask me.

But what about the first source? Interestingly, psychological studies have shown that pain perception can be modulated by one's situation, or understanding of the situation. For example, soldiers in WWI dealt with the pain of losing a limb amazingly well, as Melzack and Wall (1984) report. Putting myself in Jesus' place, then, I find it rather difficult to believe that I would really be suffering all that much during my crucifixion. How could I, after all, knowing with absolute certitude that beyond this momentary episode of human pain lies eternal life in indescribable bliss? And indeed, if Jesus couldn't have been fully human, in what sense could He have really experienced human pain?

Ted Johnston writes HERE: "That Jesus is both God and human is a mystery beyond our limited experience." Falling back on the "mystery" of Christian doctrine is, of course, one of the most intellectually dishonest and facile moves one could make. It is tantamount to giving up on making clear a muddy issue. Saying that "it is a deep, spiritual mystery how the invisible elephant living in my closet manages to be so quiet when others are around" is patent nonsense. Mystery is not an excuse to keep believing some proposition p, but rather is a reason to reject belief in p. In contrast, science constitutes a highly sophisticated strategy (as Godfrey-Smith calls it) for converting mystery into understanding -- that is, for transforming nescience (ignorance) into science (knowledge). As fallible as science may be (e.g., study the history of science and then extrapolate to currently accepted paradigms), it nonetheless appears to be by far the best mode of acquiring knowledge about our universe that we have.

Part 3. Finally, a crucial point about atheism. Atheism is not a dogma. It is the conclusion of a logical argument that begins with the available empirical evidence. I don't think the prominent exponents of "New Atheism" emphasize enough that, as intellectually honest individuals, if God were to suddenly make Himself known, or provide some irrefragable bit of evidence for His existence, then these paragons of atheistic thought would promptly abandon their Godless worldviews and adopt a thoroughly theistic posture. Indeed, this would be the scientific thing to do.

Put differently, I -- as an atheist -- would quite happily convert to theism if only there were sufficient evidence to support the theistic hypothesis. As far as I can tell, though, maintaining a scientific stance that puts truth before what I want or desire to be the case, no such evidence has been adduced, nor even seems adducible. But I am always open to such theism-supporting evidence. The flexibility that intellectual honesty entails thus constitutes a significant difference between the rational atheist and the dogmatic zealot.

October 3, 2009

Last words: "God is love"!!

September 28, 2009

New Link to an Old Song

New link to an old song HERE. Who can make a music video for it?

September 22, 2009

September 21, 2009

Balaam, Jonas and Jesus

Forthcoming in American Atheist magazine.
The Christian Cave of Shadows

September 7, 2009

X is too complicated to explain

Saying that some theory or idea X is complicated -- too complicated to explain at the moment, for example -- can mean two different things. First, X may be convoluted in the sense that it involves (or is composed of) many different parts, and thus requires significant time to explain, without X itself being abstruse. Or second, X may be abstruse, or difficult to understand, without being composed of many different parts. For example, Lakoff/Johnson's conceptual metaphor theory is, in my opinion, not particularly difficult to understand, but it is nonetheless difficult to explain to someone, say, who's never heard of it before, simply because it is composed of manifold theses and subtheses; one must explain traditional philosophical views of objectivity, the Cartesian notion of a disembodied mind, the pleonastic "metaphor metaphor" at the center of their cognitivist framework, and so on. Yet another good example of this first type -- at least in my view -- is Darwin's theory of evolution by natural selection. "Russell's paradox," on the other hand -- which brought the entire formidable edifice of Frege's logicism down -- is not an especially convoluted idea, although most people do find it especially difficult to grasp. And then, of course, there are theories that are both convoluted and recondite, like string theory or Chomskian linguistics. Finally, some points -- like the one made in this post -- are neither convoluted nor recondite.
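
For the curious reader, here is a minimal sketch of Russell's paradox in its popular "barber" form -- a toy Python rendering of my own, not a formalization of Frege's system:

--------------------------
# Russell's paradox in its "barber" form: the barber shaves all and only
# those who do not shave themselves. Neither truth value for "the barber
# shaves himself" satisfies that rule.
for shaves_self in (True, False):
    # The rule requires: shaves_self == (not shaves_self)
    consistent = shaves_self == (not shaves_self)
    print(f"barber shaves himself = {shaves_self}: consistent? {consistent}")
# Both lines print False: no such barber can exist.
--------------------------

The set-theoretic version is exactly parallel: the set of all sets that do not contain themselves both must and must not contain itself.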

August 1, 2009

Conceptual Metaphor and Embodied Truth

I recently sent Prof. Mark Johnson an email query about his (and George Lakoff's) view of "embodied truth." The trouble, it seemed to me -- and to others, such as Steven Pinker in The Stuff of Thought -- is that if the embodied truth thesis is true and (as it claims) truth is always relative to some particular, metaphorically structured understanding of "the situation" (as Lakoff/Johnson put it), then the truth of the embodied truth thesis itself must also be relative to some such understanding. In contrast, it seems as though Lakoff/Johnson are arguing that embodied truth is absolutely true, and conversely that the "absolutist" or "objectivist" conception of truth (they single out the correspondence theory) is objectively false. Below is the resultant email exchange. Please note that I did not ask Prof. Johnson's permission to post these emails; I have simply assumed that he would not object. Nonetheless, it behooves the reader to read his -- and my -- comments in a charitable manner. Finally, please feel free to add comments, or send me a helpful email if you'd like -- I must admit, I am still not convinced, as much as I'd like to be!

My Email: Prof. Johnson: I am currently reading your Philosophy in the Flesh with great interest, although -- to be frank, if I may -- I am having difficulty seeing how your theory of truth is coherent. A quick question, if you have a moment:

You (and Lakoff) write that "what we take to be true in a situation depends on our embodied understanding of the situation which is in turn shaped by all these factors [i.e., sensory organs, culture, etc.]" (p102). Assuming that this is true, it must be true according to your particular embodied understanding of the situation; and, furthermore, its being shaped by "all these factors" must also be true according to your particular embodied understanding. Why can't I rejoin that, according to my particular embodied understanding, the proposition (for example) that "objects have properties objectively" is true -- that is, true in the very same sense as the proposition, stated on the same page as above, that "truth is not simply a relation between words and the world" (p102)? I just cannot see how this is a tenable position (obviously -- and this is why I'm writing -- that may be because of my own intellectual shortcomings!). What is your response to this criticism?

Relatedly, I am unsure how your theory handles statements like "atoms contain electrons, protons and neutrons," which is surely not -- it seems to me -- metaphorical. That is just true, and true because it corresponds to an empirical fact. Indeed, the claim that "minds are computers" (or whatever the more sophisticated version would be) is supposed to be on a par with the assertion about atoms above -- even if it began as a conceptual mapping from computers to minds, a mapping with "heuristic" value. Might that statement (about minds and computers) turn out to be true in the same sense that "atoms contain electrons, [etc.]" is true, or that "the earth revolves around the sun" is true?

I apologize for a verbose email -- I am just really eager to know what you think about the "self-defeating" objection, etc. If only I could take a course with you! Thanks so much. Sincerely, Phil

Prof. Johnson's Reply: Phil, Notice the sentence you quoted: "what we take to be true in a situation depends on our embodied understanding . . ." Our point is that "truth" is just another concept like any other human concept, and so it is understood by structures that underlie our conceptual system, and those are grounded in our bodies and their interactions with their environments. An absolutist (objectivist) notion of truth, like the one you are pushing when you speak of scientific truths about electrons, says that truth is independent of our ways of understanding and making sense of things--that it is just a relation between propositions and mind-independent states of affairs. But the history of the philosophy of science over the past thirty years (since Kuhn's Structure of Scientific Revolutions) has been one of coming to realize that science is a human endeavor for making sense of, and interacting in certain specified ways with, our environments, given our values and interests. What makes a scientific view "objective" (a word we probably shouldn't put any serious weight on) is that there is a history of methods of inquiry that articulate phenomena and give explanations according to shared assumptions, and these methods have proved very useful for our shared purposes. So, we think we've got the line on absolute truth. However, the history of science simply shows that this is not the case.

People had methods for doing science in ancient Greece that worked in some ways, and not in others, but they got along well enough. We, today, are in a different place, with different conceptions of inquiry, method, and values (such as prediction, simplicity, generalization, elegance, coherence, and so forth--there is a vast literature on such values in science). Moreover, there is a growing, and very large, body of literature showing that our most fundamental concepts in science (and in virtually every field and discipline) are defined metaphorically. We have thirty years of detailed analyses of the metaphorical structure of our key scientific and mathematical concepts. This is not a problem, but just an insight about how the human mind, at this stage of evolutionary development, makes fundamental use of metaphor.

The literature is vast, but in Philosophy in the Flesh we give references in the topical bibliography at the end. I've also given references in my books The Body in the Mind (there is a chapter dealing with truth) and in The Meaning of the Body. For mathematics and metaphor, see Lakoff and Nunez, Where Mathematics Comes From. For psychology, see Raymond Gibbs, Embodiment and Cognitive Science. Then there's Turner and Fauconnier, The Way We Think. For science see Magnani and Nersessian (eds.), Model-Based Reasoning. There are literally scores of articles on the metaphorical structure of basic scientific concepts.

Mark

My Reply: Prof. Johnson: Thanks for your response a couple of weeks ago, and thanks for the suggested reading. I have perused a number of the books/papers you list, although I’ve not yet read your The Body in the Mind (it’s at the top of my reading list!). Thus, at the risk of asking a question that your book will clearly answer, my fundamental concern is this:

You and Lakoff seem to be arguing that the statement “the ‘absolutist’ conception of truth is false” is absolutely true. On your theory, though, this statement can only be true relative to your particular understanding of “the situation.” Thus, it cannot be absolutely true that the absolutist conception of truth is false (to put it in a slightly circumlocutory way). I definitely understand that, according to your embodied truth thesis, the statement that (e.g.) “the fog is in front of the mountain” is true only relative to some metaphorically structured understanding of the relevant state of affairs. But what about the statement “the embodied truth thesis is true”? Again, it seems that you and Lakoff are arguing that embodied truth is absolutely true, and thus that there really is no absolute truth – an absolutist claim!

Similarly, you state (below) that “the history of science simply shows that this is not the case.” Given embodied truth, I am trying to figure out exactly in what sense this statement is true. Presumably, it is true because it corresponds to the historical facts; but that can’t be right, since the correspondence theory is false. Maybe the notion of the “stability” of truth comes in here – but I can’t find any detailed elucidation of “stable truth” in Philosophy in the Flesh (again, I look forward to reading The Body in the Mind). Indeed, I'm not sure I have any decent grasp of what exactly stability is.

The primary difficulty for me is the (no doubt objective) truth that your kind of relativism – i.e., that truth is always relative to some conceptual system (to quote from Metaphors We Live By) – is unavoidably self-defeating. There must be at least some absolute truth for your embodied truth thesis to be correct, right? And that would mean that it's false.

Am I missing something here? Have I properly understood your views? Aren't you and Lakoff actually making absolutist claims about what is and isn't true? Phil

Prof. Johnson's Reply: Philippe, You will not find in anything George and I have written together any claim to absolute truth (or absolute anything, for that matter). When we say "the absolutist conception of truth is false", that is simply a summary statement for the arguments we have previously given to undermine any absolutist conception. Similarly, when we say "history of science simply shows that . . . ", this is a conclusion based on previous arguments we've given. In both cases, those arguments rested on assumptions we tried to make explicit. However, there is nothing absolute about any of those statements or assumptions. If, for example, you reject the conception of science that we spelled out in Philosophy in the Flesh, then you won't find our arguments compelling, because you won't accept our explanations of the phenomena, as we have articulated those phenomena. Just as Quine argued, nearly fifty years ago now, there is no part of any web of belief that is absolutely unshakeable or unrevisable, given certain conditions that might arise.

Teachers often challenge their relativistic-minded young students, students who boldly assert "Everything is relative", by pointing out that, if that is true, then their statement "everything is relative" is likewise relative, and so not absolutely true. This is the same form of argument you've raised regarding our reliance on certain assumptions and our claims about how certain bodies of scientific research are incompatible with certain philosophical views and claims. But, as I've just said, ANY argument I can frame will necessarily depend on certain assumptions, some of which might indeed be challenged under certain conditions. So, we are not making self-contradictory claims about the truth of what we say, but it would be burdensome to append to every sentence in which we make a strong claim, that that claim is predicated on assumptions X, Y, Z, . . . and a certain conception of science and various methods of the different sciences.

Mark

June 5, 2009

Review of Minds and Computers

I'm currently working on a book review for Techne: Research in Philosophy and Technology. The book, written by Professor Matt Carter, is entitled Minds and Computers: An Introduction to the Philosophy of Artificial Intelligence. So far -- I'm on the 7th chapter -- the book is very good; an interview with Prof. Carter about the book can be found here.

May 26, 2009

Personal Identity and Cognitive Enhancement

A very interesting talk given by Susan Schneider, professor of philosophy at the University of Pennsylvania. Her paper can be found here.

April 6, 2009

Interview with Nick Bostrom

An excellent interview with Nick Bostrom, Director of the Future of Humanity Institute at Oxford.

April 5, 2009

Appendix to "Towards a Theory of Ignorance"

(Words: ~1169)
In Towards a Theory of Ignorance, I adumbrated a theoretical account of human ignorance. I argued that a theory of ignorance is important, especially for a forward-looking movement like transhumanism, because of such phenomena as: the extraordinary growth of science since its Baconian origin in the seventeenth century; the fractal-like phenomenon of disciplinary and vocational specialization; the "breadth-depth trade-off" that constrains individual human knowledge; etc. Together, these phenomena might lead one to posit a kind of Malthusian principle concerning the epistemic relationship between the collective group and individual person. Such a principle might be: The knowledge had by the individual grows at an arithmetical rate, while the knowledge had by the collective grows at a geometric rate. The result is an exponential divergence between the group's knowledge and the person's knowledge. As Langdon Winner writes: "If ignorance is measured by the amount of available knowledge that an individual or collective 'knower' does not comprehend, one must admit that ignorance, that is relative ignorance, is growing." Finally, I suggested (following Mark Walker) that such phenomena together constitute a good premise for arguing that we ought to develop a species of cognitively "enhanced" posthumans, who would thus be more "mentally equipped" to understand, mitigate and control the negative externalities--most notably the existential risks--that result from our technological progeny.
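
To make the alleged divergence vivid, here is a minimal sketch under invented parameters--the starting values and growth rates are arbitrary; only the arithmetic-versus-geometric shape of the curves matters:

--------------------------
# Toy model: individual knowledge grows arithmetically, collective
# knowledge geometrically; "relative ignorance" (in Winner's sense) is
# the fraction of the collective's knowledge the individual lacks.
k0, c = 100.0, 10.0   # individual: initial knowledge, constant increment
K0, r = 100.0, 1.5    # collective: initial knowledge, growth ratio

for t in range(0, 25, 4):
    individual = k0 + c * t        # arithmetic growth
    collective = K0 * r ** t       # geometric growth
    ignorance = 1 - individual / collective
    print(f"t={t:2d}  individual={individual:6.0f}  "
          f"collective={collective:12.0f}  relative ignorance={ignorance:.3f}")
# Relative ignorance climbs toward 1 as the two curves diverge.
--------------------------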

There are at least two additional issues that are relevant to a theory of ignorance, but which I did not mention. I discuss these briefly below:

(1) In his The Mystery of Being, Gabriel Marcel distinguishes between a problem and a mystery. As the Princeton theologian Daniel Migliore puts it: "While a problem can be solved, a mystery is inexhaustible. A problem can be held at arm's length; a mystery encompasses us and will not let us keep a safe distance." This, of course, ties into our prior discussion of Nicholas of Cusa and "apophatic" theology: God is an incomprehensible mystery, definable only through negation--that is, by what He's not. Furthermore, the more one understands his or her deep and ineradicable ignorance about God, the more "learned" he or she becomes. This is Cusa's "doctrine of learned ignorance." Thus, the boundary between problems and mysteries marks the absolute limits of human knowledge: what lies before this boundary is in principle solvable, even if not yet solved; and what lies beyond it is in principle unsolvable, or completely inscrutable to begin with.

But the distinction between problems and mysteries is not found only in theology. Indeed, the linguist and polymath Noam Chomsky has championed a view of human mental limitations called "cognitive closure." (Note: one finds the same basic position in other works, such as Jerry Fodor's 2000 book, under the name "epistemic boundedness.") On this account, humans are in principle "cognitively closed" to mysteries, while problems are in principle epistemically accessible (that is, 'mystery' and 'problem' are defined as such). For example, the conundra of free will and consciousness are, according to Chomsky, both mysteries. Along these lines, a group of philosophers of mind have espoused a position called New Mysterianism, which states that humans will never fully understand the subjective or phenomenal aspect of consciousness (what Ned Block calls P-consciousness, as opposed to A-consciousness). This feature of conscious thought is often called qualia. Put differently, the connection between, or identity of, mind and matter is like that of mass and energy before 1905--as if "uttered by a pre-Socratic philosopher"--except that, New Mysterianists claim, the breakthrough paper connecting the two will never be published.

Furthermore, as Daniel Dennett writes, Chomsky apparently sees the language organ as "not an adaptation, but... a mystery, or a hopeful monster." Thus, Darwin has nothing to say about the evolutionary emergence of human natural languages. Dennett adds that the cognitive closure "argument is presented as a biological, naturalistic argument, reminding us of our kinship with the other beasts, and warning us not to fall into the ancient trap of thinking 'how like an angel' we human 'souls' are with our 'infinite' minds." Thus, the philosopher Colin McGinn writes that "what is closed to the mind of a rat may be open to the mind of a monkey, and what is open to us may be closed to the monkey." Interestingly, this seems to gesture at the evolution-based cognitive metaphorology of George Lakoff and Mark Johnson, who argue that humans have evolved conceptual mapping mechanisms for understanding more abstract domains of thought/experience in terms of more concrete ones. In other words, human cognition is highly limited--our only way to make sense of, for example, the emotion of love is in terms of more familiar activities like journeys. Thus, LOVE IS A JOURNEY, which yields linguistic expressions like "Look how far we've come," "It's been a long, bumpy road," "We're at a crossroads," etc.

Now, the problem-mystery distinction is of interest to transhumanism because the creation of superintelligent beings--either machines that can think or technologically "enhanced" human beings--would almost certainly redefine the boundaries between problems and mysteries, between those questions that are in principle answerable and those questions that we cannot even ask. Thus, not only would the development of a posthuman species have practical benefits (presumably in terms of reducing the probability of an existential disaster, for example), but it would also likely lead to the discovery and elucidation of arcana by which modern Homo sapiens cannot even be baffled, due to our ineluctable epistemic boundedness. Along these lines, Nick Bostrom has even suggested (although the citation eludes me at the moment) that his academic focus is primarily on futurological rather than philosophical matters because, once we create superintelligent machines, many of the persistent puzzles of philosophy will be quickly solved. (See this paper for more.)

(2) The second issue worth mentioning is sometimes called "the theory of rational ignorance," or simply rational ignorance. The idea here is that, given the increasingly complex informational environment enveloping the modern individual, it is sometimes rational to be ignorant about an issue X. That is to say, if the payoff of knowing about X is not worth the commitments required to learn about X, then it might be rational to be X-ignorant. (This can be understood, I believe, as either a normative theory--we ought to be ignorant about certain things, given our "finitary predicament"--or a descriptive one: "people often rationally choose to remain ignorant of a topic because the perceived utility value of the knowledge is low or even negative.") As I understand it, rational ignorance is discussed in economics--specifically in public choice theory. Sadly (and indeed ironically) I am not qualified to discuss this theory in detail. Thus, the second point must end here--it's a point worth noting, but one not well understood by the author; a toy sketch of the basic decision rule follows.
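
For concreteness, the core idea can be put as a one-line expected-utility rule. The sketch below uses invented numbers, and the voting case is the classic public-choice illustration; nothing here goes beyond the bare rule itself:

--------------------------
# Toy decision rule for rational ignorance: learn about X only if the
# expected benefit of the knowledge exceeds the cost of acquiring it.
# All numbers are invented for illustration.
def rational_to_learn(p_useful: float, benefit: float, cost: float) -> bool:
    """True iff the expected payoff of learning exceeds its cost."""
    return p_useful * benefit > cost

# Studying a ballot initiative one's single vote will almost surely not sway:
print(rational_to_learn(p_useful=1e-6, benefit=10_000.0, cost=50.0))  # False
# Learning the safety rules for a machine one operates daily:
print(rational_to_learn(p_useful=0.9, benefit=1_000.0, cost=50.0))    # True
--------------------------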

In sum, then, a comprehensive theory of ignorance would account not only for the explananda discussed in the original post (some of which are listed above), but also for (1) the relation of both humans and posthumans to the problem-mystery distinction championed by luminaries like Chomsky, and (2) the rationality of remaining ignorant about specific issues, especially given the Malthusian principle of epistemic growth explicated in the first paragraph.

March 28, 2009

Cyborgs and Metaphorology: Mapping Technology onto Biology

(Words: ~2802)
How is human cognition structured? One intriguing answer comes from the work of George Lakoff and Mark Johnson. In their two co-authored books, published in 1980 and 1999, Lakoff/Johnson argue that human cognition is metaphorical in structure--that is, most (but not all) thinking involves mapping concepts from more familiar domains of experience to less familiar ones. Such conceptual mappings are what Lakoff/Johnson call "conceptual metaphors," itself a metaphorical term. (Indeed, as Steven Pinker notes, this theory is based on a "metaphor metaphor" pleonasm.) Thus, the Lakoff/Johnson conception of metaphor contrasts with traditional accounts, which universally identify language as the locus of metaphor. On this "old" view, metaphors are false propositions, although they may prove fecund for the imagination, stimulating one to think about concepts in new and original ways. Rather than focusing on language, though, Lakoff/Johnson argue that metaphors in language are merely external manifestations of underlying cognitive phenomena. In other words, we speak metaphorically because we think metaphorically.

In fact, when one examines human speech--both colloquial and technical, in all languages around the world--one finds it saturated with metaphor, although not of the same type. Indeed, Lakoff/Johnson distinguish between a number of different kinds of metaphors, such as: (i) metaphors that map an orientation onto a target domain, (ii) metaphors that confer entityhood to objects in a domain, and (iii) metaphors that map structure from one domain to another. These are, respectively, orientational metaphors, ontological metaphors, and structural metaphors. Now, consider the following quotidian statements, the metaphoricity of which few would normally notice (a toy sketch of one such mapping follows the list):

--"They greeted me warmly" (based on AFFECTION IS WARMTH)
--"Tomorrow is a big day" (based on IMPORTANT IS BIG)
--"I'm feeling up today" (based on HAPPY IS UP)
--"We've been close for years, but we're beginning to drift apart" (based on INTIMACY IS CLOSENESS)
--"This movie stinks" (based on BAD IS STINKY)
--"She's weighed down by responsibilities" (based on DIFFICULTIES ARE BURDENS)
--"Prices are high" (based on MORE IS UP)
--"Are tomatoes in the fruit or vegetable category?" (based on CATEGORIES ARE CONTAINERS)
--"These colors aren't quite the same, but they're close" (based on SIMILARITY IS CLOSENESS)
--"John's intelligence goes way beyond Bill's" (based on LINEAR SCALES ARE PATHS)
--"How do the pieces of this theory fit together?" (based on ORGANIZATION IS PHYSICAL STRUCTURE)
--"Support your local charities" (based on HELP IS SUPPORT)
... and so on.
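
As promised, here is a toy rendering of one such mapping--my own illustrative simplification in Python, not Lakoff/Johnson's formal apparatus:

--------------------------
# A conceptual metaphor modeled, very crudely, as a mapping from a
# concrete source domain (JOURNEY) to an abstract target domain (LOVE).
# The correspondences are standard examples; the encoding is my own.
LOVE_IS_A_JOURNEY = {
    "travelers":        "the lovers",
    "the vehicle":      "the relationship",
    "distance covered": "progress made together",
    "a crossroads":     "a decision point in the relationship",
    "a bumpy road":     "a difficult period",
}

# Such a mapping is what licenses expressions like "Look how far we've come":
for source, target in LOVE_IS_A_JOURNEY.items():
    print(f"{source:16} -> {target}")
--------------------------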

With such examples (Lakoff/Johnson give many more), the Necker cube begins to switch--in Kuhnian fashion--toward a new "way of seeing" human language and thought as fundamentally structured by metaphor. But this is just a synchronic look at metaphor and language (we examine language because language is our primary source of empirical evidence for the existence of conceptual metaphors); what about a diachronic perspective? What can history tell us about conceptual metaphor theory? As Lakoff/Johnson point out, a major source of corroborative evidence for their approach comes from distinct patterns of "historical semantic change." Indeed, in her dissertation--written under Lakoff at Berkeley and later published as a book in 1990--Eve Sweetser argues that human languages, stretching across cultural space and time, evince similar or identical etymological patterns. For example, words initially used to denote the activity of physical manipulation consistently acquired (usually through an intermediate stage of polysemy) meanings relating to mental manipulation. Thus, when we "comprehend" a thought, we etymologically grasp it with the mind. The same goes for vision and mentation, the latter of which is often understood as a kind of seeing (thus we have the words 'elucidate', 'obscure', 'enlighten', 'benighted', 'transparent', 'opaque', etc.). According to Sweetser, such repeated patterns of change in different parts of the world, and at different times throughout history, stand as further evidence that the Lakoff/Johnson theory is robust (to speak metaphorically, of course).

Now, a second diachronic perspective concerns biological evolution. This angle too supports Lakoff/Johnson's thesis that human cognition is metaphorically structured. Consider, for example, the following passage from Richard Dawkins:

The way we see the world, and the reason why we find some things intuitively easy to grasp and others hard, is that our brains are themselves evolved organs: on-board computers, evolved to help us survive in a world--I shall use the name Middle World--where the objects that mattered to our survival were neither very large nor very small; a world where things either stood still or moved slowly compared with the speed of light; and where the very improbable could safely be treated as impossible.

Thus, on the Lakoff/Johnson view, humans evolved cognitive mapping mechanisms that allow(ed) us to understand less familiar, abstract or poorly delineated domains of thought/experience in terms of more familiar, concrete, or better delineated domains. In other words, we evolved to fit our highly circumscribed, mesoscopic "Middle World," and yet we succeed in understanding abstracta at the most micro- and macro-scopic levels of reality. Indeed, as this suggests, metaphor does not just structure our thinking about ordinary, quotidian matters, but the most abstruse, theoretical issues as well. It is of course true that humans use the same brains for both activities. Thus, Theodore Brown, following Lakoff/Johnson, argues that conceptual metaphors form the conceptual foundations of science. For example, Brown claims that modern chemistry is based (in part) on the metaphor that ATOMS ARE CLOUDS OF NEGATIVE CHARGE SURROUNDING A POSITIVE CENTER. And, similarly, the cognitive metaphorologist Geraldine Van Rijn-van Tongeren argues that modern genetics is based on the metaphor GENOMES ARE TEXTS, given the systematic multiplicity of polysemous textual terms in the lexis of genetics--e.g., 'transcribe', 'translate', 'palindrome', 'reading frame', 'primer', etc.

Now, let's examine the extent to which our modern thinking, both inside and outside of academic biology, is structured by the metaphor ORGANISMS ARE ARTIFACTS. The hypothesis here considered--that the metaphorical mapping from ARTIFACT to ORGANISM lies at the conceptual foundations of modern biology, and even informs our pre-theoretic conception of living matter--constitutes nothing more than incipient theorization. Thus, I do not necessarily accept the conclusions arrived at, and indeed there is much to be ambivalent (and excited) about in Lakoff/Johnson's cognitivist metaphorology. Still, looking at biology from this particular angle is, I believe, a worthwhile intellectual endeavor.

To begin, philosophers and biologists have long noted a persistent and rather common metaphorization of organisms as artifacts in modern evolutionary biology. Tim Lewens, for example, uses the term "artifact analogy" to denote this mapping; but Lewens' account treats the analogy (or metaphor) as a purely linguistic, rather than cognitive, phenomenon. (He explicitly adopts Donald Davidson's conception of metaphor.) Indeed, no philosopher has yet provided a detailed interpretation of this organism/artifact metaphor using Lakoff/Johnson's apparatus, although some, like Michael Ruse, do mention it. This is precisely what I want to do. Now, as alluded to above, there are several distinct phenomena, along both the synchronic and diachronic axes, that one could examine for evidence for/against hypotheses about particular conceptual mappings. In the following paragraphs, I will (i) consider historical semantic change; (ii) examine terminological polysemy and identify other metaphors in biology that systematically cohere with the ORGANISMS ARE ARTIFACTS mapping; and finally (iii) suggest a possible link between this metaphor and other phenomena discussed outside of biology, such as Langdon Winner's notion of "reverse adaptation" and the medicalization of "deviance" and "natural life processes." (Some transhumanists actually advocate "mak[ing] 'healthy' people feel bad about themselves.")

(i) One can hardly find a more central concept in modern biology than that of the organism. Now, the term 'organism' derives from 'organ', which gives rise to a myriad of important terms in the biological sciences, such as 'organelle', 'organic', 'organization', 'superorganism', etc. But what is the etymology of 'organ'? Following Sweetser's lead, the "hidden" semantic history of this term might provide clues about underlying conceptual mappings. Indeed, 'organ' has both Latin (organum) and Greek (organon) etyma, both of which mean something like "mechanical device, tool, instrument." It appears that humans, at some point, began to see biological entities as human-made artifacts, and this conceptualization manifested itself through the semantic change of 'organ' and (eventually) 'organism', which now means "a living being." (Thus, the sentence 'organisms are artifacts' is, from the etymological point of view, almost an analytic truth.)

But when did this occur? Obviously, Rene Descartes proposed a mechanistic conception of the cosmos in the seventeenth century, postulating animals (which have no "mind" substance) as nothing more than machines. Laplace's "clockwork universe" concept is another example of artifactually metaphorizing the world. Later, the natural theologians--most notably William Paley--explicitly understood the universe to be an artifact, namely God's artifact, according to their "Platonic" conception of teleology. But, as Ruse and other philosopher-historians have noted, it was Charles Darwin who pushed the organism/artifact metaphor "further than anyone." That is to say, Darwin understood--in a fundamental way--"nature's parts as machines, as mechanisms, as contrivances" (to quote Ruse again).

Now, the question "When?" is important because its answer may have some bearing on the cogency of Lakoff/Johnson's metaphorology. Consider, for example, the metaphors TIME IS MONEY and TIME IS A RESOURCE. These are not universally held metaphors, by any means. Rather, they are spatiotemporally peculiar--that is, one finds them primarily in the West (space), and they first appeared with the emergence of industrial capitalism (time). And this makes sense, since Lakoff/Johnson claim only that conceptual mappings proceed unidirectionally from more to less familiar domains. Thus, as human familiarity with certain domains increases or decreases, the metaphors we use to understand abstracta will correspondingly change. In the case of ORGANISMS ARE ARTIFACTS, one finds this metaphor becoming foundational to biology right around the time of the English Industrial Revolution. That is to say, the term 'organism' acquired its modern signification circa the early nineteenth century, when the environment in which biologists were theorizing about transmutation and other evolutionary phenomena was becoming increasingly mechanized, industrialized, and cluttered with human-made artifacts. (The term 'organ' appears to have come into use slightly earlier, beginning circa Descartes' time.) Given our cognitive architecture, then, it was only natural to metaphorize organisms (not so familiar domain) as artifacts (increasingly familiar domain).

There are, indeed, many examples in Darwin's work that suggest an external--that is, extra-scientific--influence on his scientific ideas. For example, Darwin talked about the "division of labor" in biology; he borrowed from Thomas Malthus' theory of population growth; and, as historian Peter Bowler observes, his overall conception of nature "was more in tune with the aggressive worldview of industrial capitalism." Thus, as the source domain from which Darwin (and others) extended conceptual metaphors became increasingly "technologized," the terms 'organ' and 'organism' offered themselves as metaphorically coherent designations for biological entities. Indeed, as further evidence of the newness of 'organism' in nineteenth-century biology, Darwin felt compelled to actually define it in his Glossary (Figure 1).

Figure 1: Darwin's definition of 'organism' from On the Origin of Species.

(ii) A glance through an evolutionary biology textbook reveals numerous terms that are consistent with the ORGANISMS ARE ARTIFACTS metaphor. Consider, for example, the terms 'function' and 'mechanism'. Both of these terms are associated with human-made artifacts, as technical devices have functions (in virtue of some agential intention) and are generally composed of mechanisms (which often work according to "laws" or "invariant generalizations"). But the significance of these terms in biology goes deeper than the mere terminological; indeed, the primary modes of explanation used by biologists are properly termed functional and mechanistic. In a functional explanation, one explains why a particular organismal trait is there--that is, why it exists in the first place. For example, a functional explanation of the heart involves specifying its evolutionary history, i.e., what it was naturally selected (in "modern history") to do. In contrast, in a mechanistic explanation, one explains how an aggregate of appropriately organized entities and activities act and interact to produce a phenomenon (the explanandum). For example, the phenomenon of blood circulation is mechanistically explained by the ventricles and atria, their diastolic and systolic activities, etc. (Indeed, the leading theorists of the "new mechanical philosophy" call the phenomena of mechanisms "products," and instead of discussing "causation" they prefer to talk about "productivity.")

Thus, modern biologists apply to biological explananda the exact same modes of explanation used for technological phenomena. And from this we can formulate the following two conceptual metaphors, which follow deductively from the ARTIFACT-to-ORGANISM mapping:

(i) ORGANISMAL PARTS HAVE FUNCTIONS
(ii) ORGANISMS ARE COMPOSED OF MECHANISMS


One finds many more such conceptual mappings, both explicit and tacit, in the biological and philosophical literature. For example, in addition to the two metaphors above, the following metaphors appear to be rather common in biology:

(iii) BIOLOGY IS ENGINEERING
(iv) ORGANISMS ARE REVERSE ENGINEERABLE
(v) MINDS ARE COMPUTERS
(vi) ORGANISMS AND THEIR PARTS ARE DESIGNED
...and so on.

On the present view, then, the terminology and metaphoricity of modern biology are the external, observable manifestations of a deeper underlying conceptual mapping from technology to biology. Incidentally, much of the transhumanist program is based on the notion that organisms (recall here the term's etymology) are, metaphysically, nothing more than complex artifacts, designed and engineered by the "blind watchmaker." As Dennett (who is not a transhumanist) boldly argues, evolutionists ought to accept Paley's premise that nature exhibits design; our naturalism, though, impels us to replace God with an ersatz "designer," such as natural selection. Furthermore, the view that humans can fill themselves with (e.g.) nanobots, such as "respirocytes," to carry oxygen to various organs, or that humans can "upload" their minds to a computer, is crucially based on the ORGANISMS ARE ARTIFACTS metaphor. Strong AI, for example, puts forth the artifactual metaphors that BRAINS ARE COMPUTER HARDWARE and MINDS ARE COMPUTER SOFTWARE. Thus, Strong AI reasons that just as computer software is "multiply realizable," so too are minds--the particular physical substrate is irrelevant, as long as it exhibits the proper functional organization (see the sketch below). (Note that Jaron Lanier's critique of "cybernetic totalism" ties directly into the present discussion.) In conclusion, then, this points to the connection between cyborgs and metaphorology.
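
Here is the promised sketch of multiple realizability--my own toy illustration, not an implementation of Strong AI: two different "substrates" realizing one and the same functional organization.

--------------------------
# Multiple realizability, crudely: a mind is individuated by its
# functional organization (the interface), not its physical substrate
# (whichever class happens to implement it).
from abc import ABC, abstractmethod

class Mind(ABC):
    @abstractmethod
    def respond(self, stimulus: str) -> str:
        """Map a stimulus to a behavioral response."""

class CarbonBrain(Mind):
    def respond(self, stimulus: str) -> str:
        return f"withdraw from {stimulus}"   # realized in neurons

class SiliconComputer(Mind):
    def respond(self, stimulus: str) -> str:
        return f"withdraw from {stimulus}"   # realized in circuits

# Same functional profile, different physical realizations:
for mind in (CarbonBrain(), SiliconComputer()):
    print(type(mind).__name__, "->", mind.respond("pain"))
--------------------------

The functionalist wager, in other words, is that nothing about the Mind interface cares which class implements it.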

(iii) But there is also a connection, I believe, between phenomena like "reverse adaptation" and the ORGANISMS ARE ARTIFACTS metaphor. To begin, let's look at what reverse adaptation is. In Langdon Winner's words:

A subtle but comprehensive alteration takes place in the form and substance of [the] thinking and motivation [of modern humans]. Efficiency, speed, precise measurement, rationality, productivity, and technical improvement become ends in themselves applied obsessively to areas of life in which they would previously have been rejected as inappropriate.

Without a doubt, it is precisely these qualities that transhumanists identify as the properties that humans ought to possess; indeed, the entire motivation behind "enhancement" technologies is to overcome innate human limits on efficiency, speed, productivity, etc. For example, Nick Bostrom sees as undesirable "the impossibility for us current humans to visualize an [sic] 200-dimensional hypersphere or to read, with perfect recollection and understanding, every book in the Library of Congress." And the futurist Ray Kurzweil complains about (to compile a rather random list of passages that gesture at the point):

--"the very slow speed of human knowledge-sharing through language"
--our inability "to download skills and knowledge"
--the slow rate of "about one hundred meters per second for the electrochemical signals used in biological mammalian brains"
--our failure to "master all [the knowledge of our human-machine civilization]"
--the "fleeting and unreliable" ability of human beings to maintain intimate interpersonal relations (e.g., love)
--the "slow speed of our interneuronal connections [and our] fixed skull size"
--our "protein-based mechanisms [that lack] in strength and speed"
--the "profoundly limited" plasticity of the brain
...and so on.

In other words, the human organism is a technological artifact, and as such it ought to behave like one. It is no wonder, then, that behaviors and thought patterns that deviate from (what we might call) a "technological norm" are considered, through the process of medicalization, "pathological." Just as computers are expected to sit on one's desk and perform specific tasks on command, so too the corporate employee is expected to sit at one's desk and perform specific tasks on command. Psychiatry is not a value-neutral field, and the values applied to humans are, one might argue, often derived from technology.

This is my tentative thesis linking the cyborg and metaphorology. More theoretical work is required, as many of these points can be significantly elaborated. But, after all, I am only human--at least for now.