This weekend, I wrote up a review of a book that will be appearing at the Social Epistemology Review and Reply Collective later this month. The book in question is an essay collection called Post and Transhumanism, a general introduction to the topic through chapters that each examine key historical figures or specific subjects.
This is something of a preview and a little more paratext on my review. The short version of what I actually think about the book: it had some good essays and some bad essays, with a few interesting insights, but nothing really to write home about. One flaw is that everything maintained a very introductory focus, as if the writers wanted to show how much they knew but didn't really want to challenge the reader to think on her own.
However, that’s not what I focussed my review on, and it isn’t what I’m going to talk about today either. I wanted to mention a curious little argument that came up in one of these essays, which implied a conception of how moral standing works that I find genuinely perplexing. I consider it a mistake in thinking that interferes with the ability to do productive philosophy.
The arc of my formal review went in a different direction. I concentrated directly on the problems of conceiving transhumanism (philosophical speculation about, and preparation for, the utopian improvement of human life through biological enhancement) and posthumanism (the set of philosophies, from Nietzsche onward, that focus on the general issue of overcoming humanity) as basically the same thing. Short version of this theme: don’t do that.
If Steve Buscemi got cybernetic enhancements, would he have the right to control me, as someone without such enhancements?
The argument about moral standing goes like this. Robert Ranisch, the co-editor of this collection, includes an essay in the book called “Morality,” where he discusses different moral ideas and problems that arise in transhumanist thinking. This includes a discussion of how human enhancement would affect the moral standing of the human race: in particular, how differences between the enhanced and the non-enhanced (or among gradations from superficially to deeply enhanced) would affect the moral standing of different people within humanity.
It’s a common argument in Western philosophy to ground moral standing in cognitive capacity: intelligence, a sense of self-consciousness, the intellectual ability to engage in moral discourse, empathetic and sympathetic powers. The sense of the term ‘enhanced’ in transhumanist discussions is usually very vague, but in the context of Ranisch’s argument (or rather, this small part of it), it refers to an enhancement of the very powers that ground the moral weight accorded to a person.
If enhanced people have greater moral powers and abilities, then they’ll have greater moral standing than non-enhanced humans. The higher moral standing of the enhanced means that they could legitimately oppress humans who are still like us today. He is literally describing the technological advancement of personhood itself, such that an enhanced human would be more of a person than you or me.
Even though this wasn’t a major element of his piece, which was more about surveying general moral issues and indicating problems with some of them, it stuck with me as making a fundamental error about how morality really works. Well, let me rephrase that. Ranisch’s conception of morality works just fine in the context of philosophical discussions that rank people’s moral standing on a chart and calculate what suffering can be inflicted upon them without it being morally relevant.
Being ethical operates entirely differently. Whatever kind of weird technological enhancement would make one human “morally superior” to another, it wouldn’t grant the superior a right to oppress the inferior. If someone, no matter how enhanced, were genuinely morally superior, then the thought of oppressing or controlling someone who already met whatever minimal threshold of powers was necessary to make inflicting suffering on them immoral under the old regime wouldn’t even occur to them. That’s what moral superiority is.
It’s why I find so much of the talk about ‘enhancement’ in transhumanist speculation unproductive. It’s a vague way of generating philosophical problems of no import. The question “What if there were a class of humans with the moral right to oppress other humans?” isn’t relevant to discussions of technological enhancement. It’s relevant to discussions of the legitimacy of monarchies, or of workers’ rights.
Moral standing isn’t the kind of thing you gain when you add a cybernetic implant. You become more moral when you change your character such that the desire to harm someone for your own benefit becomes abhorrent to you. The problem with philosophy that uses terms like ‘enhancement’ so vaguely is that it makes you think moral standing is something like aerobic capacity, which biological or technological enhancements could presumably improve by various degrees.
You end up fundamentally misusing and misunderstanding important ideas, and so talking in circles about nothing of any substance.