[1] Another important thesis of Gubrud's post is that the Kurzweil/Moravec theory that the self is constituted by an abstract pattern that endures over time and the theory that the self is an immaterial soul are really one and the same theory. In the comments on the original article, Gubrud writes the following:
"Philippe - Christopher Hitchens is a pissant. You write:
Holding that the self is a pattern is definitely *NOT* the same as holding that the self is a soul.
So who's "underlining" here? My entire argument is that it is the same. Where's your "critique" of that? It's the same because "pattern" is presumed to be a thing that exists apart from the substance of the body and separable from it, transferable to another body, another substance. This is essentially a mythical, magical image of soul transfer. Well, I guess I'm just "underlining" my argument again. And I guess you've proven (somehow, ad hominem I suspect) that it just isn't worth your even bothering to critique it."
[2] From the lexicon: “A pretentious attitude of scholarship; superficial knowledgeability.”
[3] Recall, for example, that John Searle maintains that computers could not possibly be conscious, and Susan Schneider holds that the sort of non-destructive uploading described in the book Mindscan would fail to transfer the self. But their reasons are quite different from, and far more sophisticated than, Gubrud's.
[4] Note that the theory of evolution – of how evolutionary change actually happens – is distinct from the fact that it did occur. One could accept that evolution has occurred yet reject the Darwinian mechanism of natural selection in favor of a Lamarckian, or God-directed, one.
[5] Extrapolating from the history of science, one might even hold that most of our current theories are wrong, where “wrong” could mean either being incorrect (in which case the theory ought to be discarded) or merely being incomplete (in which case the theory need only be revised). But, of course, the possibility of a theory being wrong has no bearing on whether it is rational to accept it as true, given the evidence available at a given moment.
[6] If transhumanism is construed as below – that is, as a bipartite thesis about what will and what should be the case – then transhumanism would fail if the world turned out to be other than what it says the world is like. In other words, since ought implies can, if we can't effectively enhance the human organism, then there's no point in saying that we should.
One definition, from this paper of mine: "Transhumanism is a recent philosophical and cultural movement that has both descriptive and normative components: (1) the descriptive claim is that current and anticipated future technologies will make it possible to radically alter both our world and persons, not just by “enhancing” the capacities that we already have but also by adding entirely new capacities not previously had. (2) The normative claim is that we ought to do what we can to foment and accelerate the creation of such “enhancement” technologies, thereby converting the possibility of a “posthuman” future into an actuality."
[7] See my “Risk Mysterianism and Cognitive Boosters,” forthcoming in the Journal of Futures Studies, for an argument to this effect.
July 17, 2010
June 20, 2010
Notes to Why "Why Transhumanism Won't Work" Won't Work
The article is now up on the IEET website, HERE. (A PDF version can be found here.)
Footnotes:
[1] Cognitive enhancements have the potential to significantly augment the cognitive capacities of the individual, thus (possibly, to some extent) closing the epistemic gap between what the collective whole knows and what the individual knows. At some point in the future, then, it may be that each individual knows as much as the entire group.
[2] See Jaynes, E.T. and G.L. Bretthorst. 2003. Probability Theory: The Logic of Science. Cambridge: Cambridge University Press.
[3] Gubrud writes: “Since these multiple criteria, not all clearly defined, may sometimes conflict, or may tend to different formulations of ‘identity’ and its rules, philosophers have here a rich field in which to play. ‘Progress’ in this field will then consist of an endless proliferation of terms, distinctions, cases, arguments, counterarguments, papers, books, conferences and chairs, until all tenured positions and library shelves are filled, the pages yellowed, new issues come into vogue, and the cycle starts over. I come as a visitor to this field, and while I must admire the intricate Antonio Gaudi architecture of castles that have been raised from its sands, twisting skyward and tumbling over one another, my impulse is bulldoze [sic] it (see first sentence of this essay), flatten it out and start from scratch, laying a simple structure with thick walls no more than ankle-high above the ground, as follows.”
[4] Consider the opening sentence of Gubrud’s “Balloon” paper: “Physical objects exist, consisting of matter and energy, or any other physical substance that may exist, but please note that ‘information’ is not one; neither is ‘identity’.” Where to begin? I cannot think of a single philosopher – or, for that matter, scientist – who would argue that information doesn’t exist or is non-physical in nature. (Indeed, information theory is a part of physics.) Most philosophers today are ardent physicalists who see information as perfectly compatible with their metaphysical monism (which asserts that “everything is physical”).
Gubrud’s reflex here is, no doubt, to think: “Yeah, well, that doesn’t make sense to me. How could information really be physical? I mean, you can’t reach out and touch information…” I would encourage Gubrud not to leap to any conclusions; first try to understand why philosophers today take information to be physical (a crucial first step that Gubrud repeatedly fails to make). Then you can proceed to critique the thesis, if you’d like, once you know what that thesis is. (Perusing this article on physicalism would be a good start – but only a start.)
[5] Gubrud writes: “For transhumanism itself is uploading writ large.”
[6] Take note that philosophers typically distinguish between the qualitative and non-qualitative aspects of mentality; in Ned Block’s phraseology, the former is “phenomenal” consciousness and the latter “access” consciousness. Chalmers (1996) also emphasizes an exactly parallel distinction between "psychological" (or "functional") and "phenomenal" conceptions of the mind.
[7] Note that this computation may be of numerous different kinds; again, see this article.
[8] To be clear, functionalism takes mental states to be “ontologically neutral.” That is, while purely physical (e.g., neural) systems could indeed instantiate a given mental state, so could, in principle, an immaterial substance of some sort. All that’s relevant, according to the functionalist view, is the substrate's causal-functional properties.
[9] As Howard Robinson writes: “Predicate dualism is the theory that psychological or mentalistic predicates are (a) essential for a full description of the world and (b) are not reducible to physicalistic predicates. For a mental predicate to be reducible, there would be bridging laws connecting types of psychological states to types of physical ones in such a way that the use of the mental predicate carried no information that could not be expressed without it. An example of what we believe to be a true type reduction outside psychology is the case of water, where water is always H2O: something is water if and only if it is H2O. If one were to replace the word ‘water’ by ‘H2O’, it is plausible to say that one could convey all the same information. But the terms in many of the special sciences (that is, any science except physics itself) are not reducible in this way. Not every hurricane or every infectious disease, let alone every devaluation of the currency or every coup d'etat has the same constitutive structure. These states are defined more by what they do than by their composition or structure. Their names are classified as functional terms rather than natural kind terms. It goes with this that such kinds of state are multiply realizable; that is, they may be constituted by different kinds of physical structures under different circumstances. Because of this, unlike in the case of water and H2O, one could not replace these terms by some more basic physical description and still convey the same information. There is no particular description, using the language of physics or chemistry, that would do the work of the word ‘hurricane’, in the way that ‘H2O’ would do the work of ‘water’. It is widely agreed that many, if not all, psychological states are similarly irreducible, and so psychological predicates are not reducible to physical descriptions and one has predicate [or descriptive] dualism.”
[10] More generally, Gubrud seems especially susceptible to confusing terms with the entities signified by those terms. That is, Gubrud reasons that since there are two (or three, etc.) different terms in the discussion, then there must be two (or three, etc.) different referents. Consider, for example, the following passage from his Futurisms article:
"Thus Moravec advances a theory of
See the first "rule for avoiding sciolism" mentioned in my article.
(Additional note: ontological dualism seems to imply descriptive dualism, but descriptive dualism does not necessarily imply ontological dualism.)
[11] Chalmers’ view is called “property dualism.” It holds that certain particulars have physical and non-physical properties. In contrast, Cartesian substance dualism posits that those particulars themselves are non-physical in nature. My own tentative view is that this is probably wrong, but that we (unenhanced humans) are simply "cognitively closed" to the correct answer. (This is McGinn's "transcendental naturalism.")
[12] As Georges Rey puts it, "consider some theory, H, about houses (which might state generalizations about the kinds of houses to be found in different places). The ontology of this theory is presumably a subset of the ontology of a complete physical theory, P: every house, after all, is some or other physical thing. But the sets of physical things picked out by the ideology of H -- for example, by the predicate "x is a house" -- may not be a set picked out by any of the usual predicates in the ideology of P. After all, different houses may be made out of arbitrarily different physical substances (straw, wood, bricks, ice, ...), obeying different physical laws. Houses, that is, are multiply realizable. To appreciate the generalizations of theory H it will be essential to think of those sundry physical things as captured by the ideology of H, not P. But, of course, one can do this without denying that houses are, indeed, just physical things."
[13] The answer to Why? here, on Chalmers' view, is that consciousness is simply a brute fact about the world in which we live. Psychophysical laws connecting matter and conscious states are fundamental laws, just like the laws of thermodynamics or motion. They are, as it were, the ultimate "unexplained explainers."
[14] As Schneider points out, patternism is thus a computationalist version of the "psychological continuity theory" of personal identity.
[15] This is, in my opinion, a rather interesting thought: the uploaded mind would indeed be psychologically continuous with me. Mind clones seem, I suppose, more intimately related than genetic clones (such as identical twins).
Back to the article -->
April 8, 2010
Blue Skies and Existential Risks
[This is a revised version of an article previously published on the Institute of Ethics and Emerging Technology website.]
Basic research is what I'm doing when I don't know what I'm doing.
– Wernher von Braun
CERN's Large Hadron Collider (LHC), a product of the biggest Big Science project in human history, has recently been in the news for having “smashed beams of protons together at energies that are 3.5 times higher than previously achieved.” This achievement stimulated thought, once again, about my ambivalence towards the LHC. The feeling arises from a conflict between (a) my “epistemophilia,” or love of knowledge, and (b) specific moral considerations concerning what sorts of pursuits ought to have priority given the particular world we happen to inhabit. In explaining this conflict, I would like to suggest two ways the LHC's funds could have been better spent, as well as respond to a few defenses of the LHC.
Moral and Practical Considerations
In 2008, the former UK chief scientist Sir David King criticized the LHC for being a “blue skies” project[1], arguing that “the challenges of the 21st Century are qualitatively different from anything that we've had to face up to before,” and that “this requires a re-think of priorities in science and technology.” In other words, couldn't the >$6 billion that funded the LHC have been better spent on other endeavors, projects or programs?
I am inclined to answer this question positively: YES, the money could have been better spent. Why? For at least two reasons[2]:
(1) Morally speaking, there is an expanding manifold of “sub-existential risk” scenarios that have been and are being actualized around the globe – scenarios that deserve immediate moral attention and urgent financial assistance. Thus, one wonders about the moral justifiability of “unnecessary” research projects in the affluent “First World” when nearly 16,000 children die of avoidable hunger-related illnesses every day; when water pollution kills more humans than all violence worldwide; when unregulated pharmaceuticals pollute public drinking water; when the Great Pacific Garbage Patch, superfund sites and eutrophication threaten the very livability of our lonely planet in space.
Ascending from the “personal/local/global” to the “transgenerational” level of analysis, there exists a growing mass of increasingly ominous existential risks that demand serious scientific and philosophical study. Such risks are the most conspicuous reason why, as King observes above, the present moment in human history is “qualitatively different” from any prior epoch. Just 65 years ago, for example, there were only one or two “natural” existential risks. Looking at the present moment and into the near future, experts now count roughly 23 mostly anthropogenic types of existential risks (to say nothing of their tokens). Yet, as Nick Bostrom laments, “it is sad that humanity as a whole has not invested even a few million dollars to improve its thinking about how it may best ensure its own survival.” If any projects deserve $6 billion, in my opinion, it is those located within the still-incipient field of “secular eschatology.” More on this below.
(2) Practically speaking, one could argue that LHC's money could have been better spent developing “enhancement” technologies. Consider the fact that, if “strategies for engineered negligible senescence” (SENS) were perfected, the physicists now working on the LHC could have significantly more (life)time to pursue their various research projects. The development of such techno-strategies would thus be in the personal interest of anyone who, for example, wishes to see the protracted research projects on which they're working come to fruition. (As one author notes, the LHC extends beyond a single professional career[3].) Furthermore, healthspan-extending technologies promise to alleviate human suffering from a host of age-related pathologies, thus providing a more altruistic public good as well.
A similar argument could apply to the research domain of cognitive enhancements, such as nootropics, tissue grafts and neural implants. Again, in terms of the benefits for science, “a 'superficial' contribution that facilitates work across a wide range of domains can be worth much more than a relatively 'profound' contribution limited to one narrow field, just as a lake can contain a lot more water than a well, even if the well is deeper.”[4] Cognition-enhancing technologies would thus provide an appreciable boost not just to research on fundamental physics issues – the first billionth of a second after the Big Bang, the existence of the Higgs boson particle, etc. – but to the scientific enterprise as a whole.
Second, there may exist theories needed to understand observable phenomena that are in principle beyond our epistemic reach – that is, theories to which we are forever “cognitively closed.” The so-called “theory of everything,” or a theory elucidating the nature of conscious experience, might fall within this category. And the only plausible route out of this labyrinth, I believe, is to redefine the boundary between “problems” and “mysteries” via some techno-intervention on the brain. Otherwise, we may be trapped in a state of perennial ignorance with respect to those phenomena – floundering like a chimpanzee trying to conjugate a verb or calculate the GDP of China. Yet another reason to divert more funds towards “applied” enhancement research.
Furthermore, upgrading our mental software would augment our ability to evaluate the risks involved in LHC-like experiments. Physicists are, of course, overwhelmingly confident that the LHC is safe and thus will not produce a strangelet, vacuum bubble or microscopic black hole. (See the LSAG report.) But it is easy – especially for those who don't study the history and philosophy of science – to forget about the intrinsic fallibility of scientific research. When properly contextualized, then, such confidence appears consternatingly less impressive than one might initially think.
Consider, for example, Max Planck's oft-quoted comment that “a new scientific truth does not triumph by convincing its opponents and making them see the light, but rather because its opponents eventually die, and a new generation grows up that is familiar with it.” Thus, for all we know at present, the next generation of physicists, working within a modified framework of more advanced theory, will regard the LHC's risks as significant – just as the lobotomy, for which Egas Moniz won science's most prestigious award, the Nobel Prize, in 1949, is now rejected as an ignominious violation of human autonomy. This point becomes even more incisive when one hears scientists describe the LHC as “certainly, by far, the biggest jump into the unknown” that research has ever made. (Or, recall Arthur C. Clarke's famous quip: "When a distinguished but elderly scientist states that something is possible, he is almost certainly right. When he states that something is impossible, he is very probably wrong.")
Critics of Critics
In response to such criticism, many scientists have vehemently defended the LHC. Brian Cox, for example, ripostes “with an emphatic NO” to contrarians who suggest that “we [can] do something more useful with that kind of money.” But Cox's thesis, in my opinion, is not compelling. Consider the culmination of Cox's argument: “Most importantly, though, the world would be truly impoverished without all the fundamental knowledge we've gained” from projects like the LHC[5]. Now, at first glance, this claim seems quite reasonable. Without the “fundamental knowledge” provided by Darwinian theory, for example, it would be difficult (as Dawkins contends) to be an “intellectually fulfilled atheist.” This is an instance of science – by virtue of its pushing back the “envelope of ignorance” – significantly enriching the naturalistic worldview.
But Cox's assertion could also be construed as rather offensive. Why? Because the fact is that much of the world is quite literally impoverished. Thus, from the perspective of millions of people who struggle daily to satisfy the most basic needs of Maslow's hierarchy – people who aren't fortunate enough to live lives marked by “the leisure of the theory class,” with its “conspicuous consumption” of information – Cox's poverty argument for blue skies research is, at worst, an argument from intellectual vanity. It considers the costs of postponing physics research from a rather solipsistic perspective; and considering issues from the perspectives of others is, of course, the heart of ethics[6]. Surely if the roles were reversed and advocates of the LHC suddenly found themselves destitute in an “undeveloped” country, they would agree that the material needs of the needy (themselves) should take precedence over the intellectual needs of the privileged.
Furthermore, consider Sir Martin Rees' defense. “It is mistaken to claim,” Rees argues, “that global problems will be solved more quickly if only researchers would abandon their quest to understand the universe and knuckle down to work on an agenda of public or political concerns. These are not 'either/or' options – indeed, there is a positive symbiosis between them.”
But, in my view, the existence of such symbioses is immaterial. The issue instead concerns how resources, including money and scientists, are best put to use. A retort to Rees could thus go as follows: there is a crucial difference between making every effort, and making merely some effort, to exorcise the specter of existential and other related risks that haunts the present millennium. Thus, if one is seriously concerned about the future of humanity, then one should give research directly aimed at solving these historically unique conundra of eschatological proportions strong priority over any project that could, at best, only “indirectly” or “fortuitously” improve the prospects of life on Earth.
Rees' defense of the LHC is perplexing because he is (at least ostensibly) quite concerned with secular eschatology. In his portentous book Our Final Hour, Rees argues that “the odds are no better than fifty-fifty that our present civilisation on Earth will survive to the end of the present century.” Similar figures have been suggested by futurologists like Bostrom, John Leslie and Richard Posner[7]. Thus, given such dismal probability estimates of human self-annihilation, efforts to justify the allocation of limited resources for blue skies projects appear otiose. As the notable champion of science, Bertrand Russell, once stated in a 1924 article inveighing against the space program, “we should address terrestrial problems first.”
In conclusion, it should be clear that the main thrust of my criticism here concerns the moral issue of existential risks. The present situation is, I believe, sufficiently dire to warrant the postponement of any endeavor, project or program that does not have a high probability of yielding results that could help contain the “qualitatively different” problems of the 21st century. This means that the LHC should be (temporarily) shut down. But even if one remains unmoved by such apocalyptic concerns, there are still good practical reasons for opposing, at the present moment, blue skies research: money could be better spent – or so the argument goes – developing effective enhancement technologies. Such artifacts might not only accelerate scientific “progress” but help alleviate human suffering too. Finally, I have suggested that some counterarguments put forth in defense of the LHC do not hold much water.
If we want to survive the present millennium, then we must, I believe, show that we are serious about solving the plethora of historically unique problems now confronting us.
[1] And as Brian Cox states, “there can be no better symbol of that pure curiosity-driven research than the Large Hadron Collider.”
[2] These are two distinct reasons for opposing the LHC – reasons that may or may not be compatible. For example, one might attempt to use the moral argument against the enhancement argument: spend money helping people now, rather than creating “enhancements” with dangerous new technology. Or, one might, as Mark Walker does, argue that the development of enhanced posthumans actually offers the best way of mitigating the risks mentioned in the moral argument.
[3] See this article, page 6.
[4] From this article by Bostrom.
[5] Space prevents me from considering Cox's additional arguments that “the world would be a far less comfortable place because of the loss to medicine alone, and a poorer place for the loss to commerce.” I would, to some extent, controvert these assertions as well.
[6] This is the so-called “moral point of view.”
[7] Although Posner does not give an actual number.
March 28, 2010
Two kinds of posthumans
Posthumans are future beings who greatly exceed, in terms of their basic capacities, what present-day humans are capable of, with respect to healthspan, cognition and emotion. Furthermore, on this conception of the term, posthumans may or may not be "phylogenetically" related to humans: they may be completely synthetic beings, such as strong AI systems, rather than biotechnological hybrids (via cyborgization).
At the risk of silliness, one might then distinguish between two senses of "post-human": in the first sense, the term refers to a posthuman entity that is non-human in nature, while in the second it refers to a posthuman entity that has (or had in its past) biological components. An android would count as a post-human in the first sense, while an advanced cyborg would count as a post-human in the second.
For more on the relation between cyborgs and posthumans, see my most recent article on the IEET website.
March 22, 2010
Intelligence and Progress
New post up on the IEET website entitled "If Only We Were Smarter!" Has received far more hits than expected. Maybe the counter is malfunctioning?
January 6, 2010
Niche Construction Revisited
Technology and Human Evolution
Why technology in the first place? The answer, anthropologically and philosophically, revolves around humans relating to their environment. –Don Ihde
We have modified our environment so radically that we must now modify ourselves in order to exist in this new environment. –Norbert Wiener
Many theorists have metaphorized the development of technology as a kind of evolution; thus, one talks about “the evolution of technology.” As far as I know, Karl Marx was the first to suggest a Darwinian reading of the history of technology (in Das Kapital [1867]), but one finds the idea in work by contemporary techno-theorists too, such as Kevin Kelly in his TED talk. While such analyses can be, at times, intriguing, I am much more interested in how technology has influenced the evolution of humans in the past 2.6 million years (dating back at least to Homo habilis). In other words, I would like to understand technology along the diachronic axis not as a separate phenomenon – one that may or may not undergo a process analogous to Darwinian selection – but rather as a phenomenon constitutive of human evolution itself.
For example, anthropologists hypothesize that the creation of early lithic technologies had an amplificatory effect on human intelligence: as our ancestors came to rely on such technologies for survival, those with greater cognitive powers (to fashion such lithics) were naturally selected for. This established a positive feedback loop such that intelligence begat intelligence. Thus, in this way, human-built artifacts actually mediated the evolutionary process of natural selection to bring about more “encephalized” (bigger-brained) phenotypes.
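To make the feedback-loop claim a bit more concrete, here is a minimal toy simulation in Python (my own illustrative sketch; the population size, mutation rate and other coefficients are invented assumptions, not figures from the anthropological literature). Individuals carry a heritable "cognitive capacity," tool quality is improved by the ablest toolmakers, and fitness scales with the product of the two, so selection for cognition intensifies as tools improve:

import random

# Toy gene-culture feedback loop: better tools raise the selective premium
# on cognition, and more capable individuals improve the tools.
# All parameters below are illustrative assumptions.

POP_SIZE = 200
GENERATIONS = 40

def simulate():
    # Each individual is represented only by a heritable "cognitive capacity".
    population = [random.gauss(1.0, 0.1) for _ in range(POP_SIZE)]
    tool_quality = 0.1  # culturally transmitted artifact quality

    for gen in range(GENERATIONS + 1):
        # Fitness = baseline survival + a bonus scaling with BOTH the
        # individual's capacity and the current tool quality (the feedback).
        fitness = [1.0 + tool_quality * c for c in population]

        if gen % 10 == 0:
            mean_c = sum(population) / POP_SIZE
            print(f"generation {gen:3d}: mean capacity = {mean_c:.2f}, "
                  f"tool quality = {tool_quality:.2f}")

        # Reproduction in proportion to fitness, with small heritable variation.
        parents = random.choices(population, weights=fitness, k=POP_SIZE)
        population = [max(0.0, p + random.gauss(0.0, 0.02)) for p in parents]

        # Tools improve as a function of the most capable current toolmakers.
        tool_quality += 0.05 * max(0.0, max(population) - 1.0)

simulate()

Nothing hangs on the particular numbers; the point is simply that once artifacts enter the fitness function, selection on cognition and improvement of the artifacts ratchet one another upward, which is the structure the anthropological hypothesis attributes to early lithic technologies.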
In the literature on evolution, a new school of thought has recently emerged that rejects the standard Darwinian (or neo-Darwinian) model – a model in which organisms are always molded to fit their environments, in which causation extends unidirectionally from the environment to the organism. In contrast to this understanding of adaptation, “niche constructionists” argue that organisms actually make the environments in which they live and to which they are adapted. At the most passive end of the constructionist spectrum, simply being a “negentropic” organism far from thermodynamic equilibrium changes various factors in the environment, while at the most active end one finds Homo sapiens, a unique species that has profoundly altered the environment in which it (and most other Holocene organisms) exist (or once did).
Thus, niche construction theory explicitly brings into its theoretical view the human creation of technology – specifically, those artifacts that have in some way helped “construct” the niches that we occupy. While this is a good theoretical start (although not all biologists, including Dawkins, have jumped on the niche constructionist bandwagon), niche construction theory seems to neglect a crucial phenomenon relating to technology – a phenomenon that might be called “cyborgization” or, more prosaically, “organism construction” (on the model of “niche construction”).
To motivate this point, let me back up for a moment. First, note that explanation in biology is paradigmatically causal (rather than non-causal, as in nomological explanations citing the second law of thermodynamics). Thus, since the standard model of Darwinian evolution sees causation as unidirectional, from the environment to the organism, it follows that explanations of organismal adaptation entail specifying an environmental factor that has, over transgenerational time, brought about a change in the corresponding organismal feature. Some philosophers have typologized this kind of explanation as “externalist,” since it is the selective environment external to the organism that accounts for the organism’s adaptedness to that environment.
But niche constructionists think that there is another type of explanation for organismal adaptation – a “constructive” explanation. According to this view, organismal features could complement or match the relevant environmental factors not because of natural selection, but because the organism itself modified those factors. While in many cases this modification is inadvertent (see the example of the Kwa-speaking yam farmers), humans are unique in the radical extent to which we have intentionally modified the environment. Back to this in a moment.
So, the picture sketched thus far looks like this. Fact: organisms are generally well-adapted to their environments (the explanandum). But why? According to niche constructionists, and in contrast to traditional neo-Darwinians, either of the following two phenomena might have occurred (these are not mutually exclusive): (i) natural selection might have intervened to bring about an adaptive change in an organismal feature to match an environmental factor, or (ii) the organism might have “constructed” its niche to make the relevant environmental factors complement its own features. Since causation here is bidirectional, causal explanation of adaptation therefore swings both ways – from the environment to the organism (externalist) and from the organism to the environment (constructive).
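The difference between the two explanatory arrows can be made vivid with a second toy sketch (again my own illustration, with invented coefficients rather than anything drawn from the niche-construction literature). In both regimes selection nudges a trait toward an environmental factor; in the constructionist regime the organism also modifies that factor toward itself:

# Toy contrast between externalist and constructive explanations of the
# organism-environment match. All coefficients are illustrative assumptions.

def run(construction_rate, steps=50):
    env = 10.0    # an environmental factor the trait must complement
    trait = 0.0   # the corresponding organismal feature

    for _ in range(steps):
        # Externalist arrow: selection nudges the trait toward the factor.
        trait += 0.1 * (env - trait)
        # Constructive arrow: the organism modifies the factor toward itself
        # (set to zero for the purely externalist regime).
        env += construction_rate * (trait - env)

    return round(trait, 2), round(env, 2)

print("externalist only  :", run(construction_rate=0.0))
print("with construction :", run(construction_rate=0.2))

In both runs the trait and the factor end up complementing one another (the explanandum), but in the second the point at which they meet is partly of the organism's own making; that is the additional explanatory option the constructionists introduce.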
But what seems to be missing from this picture, at least when focusing on Homo sapiens, is the use of technology to artificially extend, substitute and enhance features of the human organism itself, for the purpose of increasing our complementarity to the increasingly artificial milieu in which we live. That is to say, niche constructionists only explicitly recognize natural selection as bringing about changes in organismal features. On reflection, though, it seems transparently clear that we humans have largely usurped the role of natural selection by technologically modifying our own behaviors, morphology and physiology – i.e., our phenotypes. The pervasive artifactual metaphors of function, mechanism, design, etc., as well as the agential metaphor of natural selection, are all being gradually replaced by literal functions, by literal mechanisms, by a literal engineer.
While some examples of “organism construction” are highly intuitive, such as neural implants and prosthetic limbs, I would like to intrepidly venture beyond our pre-theoretical intuitions and suggest that entities like the automobile might, under certain conditions, actually count as part of the (technologically-modified) human organism itself. For example, I see the automobile as a case in which engineers intervened to “construct” the human organism for the purpose of adaptively modifying it to complement a very specific selective environment, namely the road. One might therefore say that the human-automobile system is adapted to the road rather like the earthworm is adapted to its environment, which also turns out to be thoroughly constructed.
If one thinks this is a giant conceptual leap to an implausible picture of human evolution, consider the following: since the late nineteenth century, theorists have repeatedly characterized technologies as “extensions of man” (in Marshall McLuhan’s words); in his 1877 book Grundlinien einer Philosophie der Technik, the first philosopher of technology, Ernst Kapp, termed this phenomenon “organ projection.” More recently, some philosophers of mind (most notably Andy Clark) have argued that the boundary of the mind – and indeed the self too – is not demarcated by “skin and skull,” as our pre-theoretical intuitions might suggest. Rather, these philosophers claim that when specific criteria relating to (e.g.) function and reliability are satisfied, technological entities like notepads and computers literally become part of the individual’s cognitive system – that is, they become components internal to the individual’s mind and self. In a similar spirit, the physiologist J. Scott Turner has defended the conceptual-metaphysical thesis that organisms are fuzzily bounded, and still other theorists have considered the possibility of “boundary shifting,” as in the peculiar case of water crickets.
This being said, a common objection to understanding artifactual entities like automobiles, clothes, glasses, and so on, as instances of “organism construction” – that is, as extended adaptations of a sort – is that many technological modifications involve transient and reversible changes to human behavior, morphology and physiology. Unlike the evolutionary acquisition of a bigger brain, for example, the “automobilic phenotype” is expressed only temporarily. Rather than take this as a problematic datum, though, I see it as suggesting a novel interpretation of what biologists have called phenotypic plasticity, or the ability of an organism to manifest particular phenotypic features in response to specific environmental factors on an ontogenetic timescale. As Darwin once wrote: “I speculated whether a species very liable to repeated and great changes of conditions might not assume a fluctuating condition ready to be adapted to either condition.”
This is, in fact, precisely what one finds in our highly composite, artificialized world – that is, modernity is a complex mosaic of interlocking and disparate environmental conditions, each of which contains its own peculiar factors that complement oftentimes very different features of the (technologized) organism. The point here is twofold: (i) it seems undeniable that our contemporary environment is not homogeneous but highly heterogeneous in nature, and (ii) it also seems obvious that no single set of organismal features – whether technologically modified or not – is sufficiently adapted to all of these disparate conditions. Thus, being liable to repeated and great changes of conditions, the modern human assumes a fluctuating condition through the use of technology, and thereby becomes ready to be adapted to all of the many conditions that he or she may encounter.
In sum, we humans have increasingly become adapted to our environments through active human intervention – that is, through technological modifications targeting both ourselves and our surroundings. While niche construction theory explicitly recognizes the latter category of techno-modification, it seems to problematically neglect the former. This is not a trivial lacuna, in my opinion, especially with all the talk in bioethics and biopolitics today about the creation of “enhancement” technologies, i.e., technologies that aim to augment some feature of the human organism or add entirely new features or capacities to its phenotypic repertoire. Thus, for these reasons, it seems that the niche constructionist framework ought to be expanded into a dual constructionist account of human evolution, even if this requires us to rethink inveterate concepts like phenotypic plasticity.
Why technology in the first place? The answer, anthropologically and philosophically, revolves around humans relating to their environment. –Don Ihde
We have modified our environment so radically that we must now modify ourselves in order to exist in this new environment. –Norbert Wiener
Many theorists have metaphorized the development of technology as a kind of evolution; thus, one talks about “the evolution of technology.” As far as I know, Karl Marx was the first to suggest a Darwinian reading of the history of technology (in Das Kapital [1867]), but one finds the idea in work by contemporary techno-theorists too, such as Kevin Kelly in his TED talk. While such analyses can be, at times, intriguing, I am much more interested in how technology has influenced the evolution of humans in the past 2.6 million years (dating back at least to Homo habilis). In other words, I would like to understand technology along the diachronic axis not as a separate phenomenon – one that may or may not undergo a process analogous to Darwinian selection – but rather as a phenomenon constitutive of human evolution itself.
For example, anthropologists hypothesize that the creation of early lithic technologies had an amplificatory effect on human intelligence: as our ancestors came to rely on such technologies for survival, individuals with the greater cognitive powers needed to fashion those lithics were naturally selected for. This established a positive feedback loop in which intelligence begat intelligence. In this way, human-built artifacts actually mediated the evolutionary process of natural selection to bring about more “encephalized” (bigger-brained) phenotypes.
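To make the structure of this feedback loop concrete, here is a minimal toy simulation of my own devising (not a model from the anthropological literature): a population’s mean “cognition” improves its toolkit, and reliance on the toolkit in turn strengthens selection for cognition. The function names and parameter values are illustrative assumptions chosen only to display the runaway dynamic, not empirical estimates.

# Toy sketch of a tool-intelligence feedback loop (illustrative only;
# all parameters are arbitrary assumptions, not empirical estimates).

def simulate(generations=15, cognition=1.0, toolkit=0.1,
             tool_gain=0.2, selection_strength=0.1):
    """Each generation, toolkit quality tracks cognition, and the selective
    advantage of cognition grows with reliance on the toolkit."""
    history = []
    for g in range(generations):
        toolkit = toolkit + tool_gain * cognition                    # better minds make better tools
        cognition = cognition * (1 + selection_strength * toolkit)   # better tools select for better minds
        history.append((g, round(cognition, 2), round(toolkit, 2)))
    return history

if __name__ == "__main__":
    for gen, cog, tools in simulate():
        print(f"generation {gen:2d}: cognition={cog:6.2f}, toolkit={tools:6.2f}")

The only point of the sketch is that once each variable feeds the other’s growth, both increase faster than either would alone: that is the structure of the “intelligence begat intelligence” claim.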
In the literature on evolution, a new school of thought has recently emerged that rejects the standard Darwinian (or neo-Darwinian) model – a model in which organisms are always molded to fit their environments, in which causation extends unidirectionally from the environment to the organism. In contrast to this understanding of adaptation, “niche constructionists” argue that organisms actually make the environments in which they live and to which they are adapted. At the most passive end of the constructionist spectrum, simply being a “negentropic” organism far from thermodynamic equilibrium changes various factors in the environment, while at the most active end one finds Homo sapiens, a unique species that has profoundly altered the environment in which it (and most other Holocene organisms) exists or once existed.
Thus, niche construction theory explicitly brings into its theoretical view the human creation of technology – specifically, those artifacts that have in some way helped “construct” the niches that we occupy. While this is a good theoretical start (although not all biologists have jumped on the niche constructionist bandwagon; Dawkins, for one, has not), niche construction theory seems to neglect a crucial phenomenon relating to technology – a phenomenon that might be called “cyborgization” or, more prosaically, “organism construction” (on the model of “niche construction”).
To motivate this point, let me back up for a moment. First, note that explanation in biology is paradigmatically causal (rather than non-causal, as in nomological explanations citing the second law of thermodynamics). Thus, since the standard model of Darwinian evolution sees causation as unidirectional, from the environment to the organism, it follows that explanations of organismal adaptation entail specifying an environmental factor that has, over transgenerational time, brought about a change in the corresponding organismal feature. Some philosophers have typologized this kind of explanation as “externalist,” since it is the selective environment external to the organism that accounts for the organism’s adaptedness to that environment.
But niche constructionists think that there is another type of explanation for organismal adaptation – a “constructive” explanation. According to this view, organismal features could complement or match the relevant environmental factors not because of natural selection, but because the organism itself modified those factors. While in many cases this modification is inadvertent (see the example of the Kwa-speaking yam farmers), humans are unique in the radical extent to which we have intentionally modified the environment. Back to this in a moment.
So, the picture sketched thus far looks like this. Fact: organisms are generally well-adapted to their environments (the explanandum). But why? According to niche constructionists, and in contrast to traditional neo-Darwinians, either of the following two phenomena might have occurred (these are not mutually exclusive): (i) natural selection might have intervened to bring about an adaptive change in an organismal feature to match an environmental factor, or (ii) the organism might have “constructed” its niche to make the relevant environmental factors complement its own features. Since causation here is bidirectional, causal explanation of adaptation therefore swings both ways – from the environment to the organism (externalist) and from the organism to the environment (constructive).
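A crude way to see this bidirectionality is to treat “adaptedness” as the gap between an organismal trait and an environmental factor, and then let either quantity move toward the other. The following sketch is my own illustrative toy, not a model from the niche construction literature; the update rules, rates, and starting values are all assumptions made for the example.

# Toy illustration of two routes to organism-environment complementarity
# (illustrative assumptions only): selection nudges the trait toward the
# environment, while niche construction nudges the environment toward the trait.

def adaptedness_gap(trait, environment):
    return abs(trait - environment)

def evolve(trait=0.0, environment=10.0, selection_rate=0.1,
           construction_rate=0.0, steps=50):
    for _ in range(steps):
        trait += selection_rate * (environment - trait)            # externalist route: environment shapes organism
        environment += construction_rate * (trait - environment)   # constructive route: organism shapes environment
    return round(adaptedness_gap(trait, environment), 3)

# Both runs end with a small gap, but the second closes it through two causal
# routes at once, which is the sense in which the two explanations are not
# mutually exclusive.
print(evolve(construction_rate=0.0))   # natural selection only
print(evolve(construction_rate=0.1))   # selection plus niche construction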
But what seems to be missing from this picture, at least when focusing on Homo sapiens, is the use of technology to artificially extend, substitute and enhance features of the human organism itself, for the purpose of increasing our complementarity to the increasingly artificial milieu in which we live. That is to say, niche constructionists only explicitly recognize natural selection as bringing about changes in organismal features. On reflection, though, it seems transparently clear that we humans have largely usurped the role of natural selection by technologically modifying our own behaviors, morphology and physiology – i.e., our phenotypes. The pervasive artifactual metaphors of function, mechanism, design, and so on, as well as the agential metaphor of natural selection, are all being gradually replaced by literal functions, by literal mechanisms, by a literal engineer.
While some examples of “organism construction” are highly intuitive, such as neural implants and prosthetic limbs, I would like to intrepidly venture beyond our pre-theoretical intuitions and suggest that entities like the automobile might, under certain conditions, actually count as part of the (technologically-modified) human organism itself. For example, I see the automobile as a case in which engineers intervened to “construct” the human organism for the purpose of adaptively modifying it to complement a very specific selective environment, namely the road. One might therefore say that the human-automobile system is adapted to the road rather like the earthworm is adapted to its environment, which also turns out to be thoroughly constructed.
If one thinks this is a giant conceptual leap to an implausible picture of human evolution, consider the following: since the late nineteenth century, theorists have repeatedly characterized technologies as “extensions of man” (in Marshall McLuhan’s words); in his 1877 book Grundlinien einer Philosophie der Technik, the first philosopher of technology, Ernst Kapp, termed this phenomenon “organ projection.” More recently, some philosophers of mind (most notably Andy Clark) have argued that the boundary of the mind – and indeed the self too – is not demarcated by “skin and skull,” as our pre-theoretical intuitions might suggest. Rather, these philosophers claim that when specific criteria relating to (e.g.) function and reliability are satisfied, technological entities like notepads and computers literally become part of the individual’s cognitive system – that is, they become components internal to the individual’s mind and self. In a similar spirit, the physiologist J. Scott Turner has defended the conceptual-metaphysical thesis that organisms are fuzzily bounded, and still other theorists have considered the possibility of “boundary shifting,” as in the peculiar case of water crickets.
This being said, a common objection to understanding artifactual entities like automobiles, clothes, glasses, and so on, as instances of “organism construction” – that is, as extended adaptations of a sort – is that many technological modifications involve transient and reversible changes to human behavior, morphology and physiology. Unlike the evolutionary acquisition of a bigger brain, for example, the “automobilic phenotype” is expressed only temporarily. Rather than take this as a problematic datum, though, I see it as suggesting a novel interpretation of what biologists have called phenotypic plasticity, or the ability of an organism to manifest particular phenotypic features in response to specific environmental factors on an ontogenetic timescale. As Darwin once wrote: “I speculated whether a species very liable to repeated and great changes of conditions might not assume a fluctuating condition ready to be adapted to either condition.”
This is, in fact, precisely what one finds in our highly composite, artificialized world – that is, modernity is a complex mosaic of interlocking and disparate environmental conditions, each of which contains its own peculiar factors that complement oftentimes very different features of the (technologized) organism. The point here is twofold: (i) it seems undeniable that our contemporary environment is not homogeneous but highly heterogeneous in nature, and (ii) it also seems obvious that no single set of organismal features – whether technologically modified or not – is sufficiently adapted to all of these disparate conditions. Thus, being liable to repeated and great changes of conditions, the modern human assumes a fluctuating condition through the use of technology, and thereby becomes ready to be adapted to all of the many conditions that he or she may encounter.
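The contrast between a fixed adaptation and this kind of transient, reversible, technologically mediated plasticity can be expressed as a conditional mapping from environments to expressed phenotypes. The snippet below is only a schematic sketch; the environment labels and “phenotype” strings are invented for illustration rather than drawn from any biological classification.

# Schematic sketch of technologically mediated phenotypic plasticity:
# a fixed trait is expressed everywhere, whereas a transient, reversible
# technological extension is expressed only in the matching environment.
# Labels are invented for illustration.

FIXED_PHENOTYPE = "bipedal walker"

TECHNOLOGICAL_EXTENSIONS = {
    "road": "human-automobile system",
    "cold climate": "human-clothing system",
    "low light": "human-eyeglasses system",
}

def expressed_phenotype(environment):
    """Return the phenotype expressed in a given environment: the fixed trait
    plus whichever reversible extension complements the local conditions."""
    extension = TECHNOLOGICAL_EXTENSIONS.get(environment)
    return FIXED_PHENOTYPE if extension is None else f"{FIXED_PHENOTYPE} + {extension}"

for env in ["road", "cold climate", "open field"]:
    print(env, "->", expressed_phenotype(env))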
In sum, we humans have increasingly become adapted to our environments through active human intervention – that is, through technological modifications targeting both ourselves and our surroundings. While niche construction theory explicitly recognizes the latter category of techno-modification, it seems to problematically neglect the former. This is not a trivial lacuna, in my opinion, especially given all the talk in bioethics and biopolitics today about the creation of “enhancement” technologies, i.e., technologies that aim to augment some feature of the human organism or add entirely new features or capacities to its phenotypic repertoire. For these reasons, it seems that the niche constructionist framework ought to be expanded into a dual constructionist account of human evolution, even if this requires us to rethink inveterate concepts like phenotypic plasticity.