Tough Times Call for Creativity II: Four Weird Concepts, Composing, 31/12/2014

Continued from last post . . . Can you really spoil a philosophy book? I don’t think so. Laying out a basic account of the four most important concepts at the foundational level of the philosophy in my next book, Ecology, Ethics, and the Future of Humanity, in a few hundred words instead of the 7000-word chapter amounts to a preview, if anything.

The most radical idea in my manuscript, as far as the discipline of philosophy is concerned, is that the domains of ontology and ethics in philosophical knowledge are inextricably intertwined. They’re usually treated as separate domains, the most separate two domains can be. Remember David Hume’s formula, taken as obvious throughout philosophy: you can’t derive an ‘ought’ from an ‘is,’ or vice versa. Statements of what is true and statements of what is morally or ethically right can have nothing to do with each other.

And while that’s true for arguments, it’s not true for how personalities and self-conceptions are formed. Remember from yesterday that philosophy is about creating concepts that fundamentally revise what it means to be human. That’s an ethical matter: how we live, which is determined by what we think we are. That latter question depends on ontology, our conceptions of what the universe is and our place in it. So ontological questions are the ground of our ethics.

Every ecosystem is a complex process in constant flux.
So the foundation of our ethics is what we believe about the world. If we want to live in a more ecologically-friendly way, we have to think more ecologically. That means paying attention to the interdependence of all bodies and systems, and understanding reality as constant flux. A process philosophy where singularity arises from links and relations.

Weird Concept One. Everything that exists is an ongoing process. Stability isn’t absolute, but a matter of very slow change. So when we think about what could be a part of one body or system, the part doesn’t necessarily exist simultaneously with all the other parts. Part of a process can appear and disappear, while the process itself continues.

Weird Concept Two. A physical body, as we typically think of it, has a clear boundary separating inside from outside. But the boundary of a process is porous, pragmatically defined at best. So you can consider a particular process and any environmental conditions that shape how it appears and develops as part of a larger process of generation and decay. 

Weird Concept Three. That everything is a process, all linked together as conditions and products of larger processes, is a very holist view of existence. Ultimately, there’s only one process: the generation and decay of the entire universe. But this largest-scale unity doesn’t overwrite the singularity and uniqueness of all the particulars at smaller scales. Multiplicity survives the unification of connection.

Weird Concept Four. There are no primitively simple bodies, nothing that would have an identity solely in itself that was not somehow constituted by relations and interactions. Even elementary particles are only abstractions. Calling a quark or a photon a particle is a hangover from old-fashioned ontology concepts in equally old-fashioned physics. At the most fundamental level, reality is fields interacting and fluctuating. These fields of matter/energy are processes with porous boundaries whose interactions link them and make their states of being depend on each other.

At all levels of analysis, we are ultimately dynamic fields whose identities and features depend on our mutual interaction. 

These concepts are only weird if you aren’t accustomed to thinking about the world as a process. I’ve been thinking about the world this way since about 2010 when the basic ideas of what I’ve been calling the Ecophilosophy manuscript, and what is now Ecology, Ethics, and the Future of Humanity, came together in my research.

Yet while I still read philosophy, politics, and science, and write works of fiction and philosophy that are informed by what I read, I’ve left the university sector behind. And I sometimes worry that my writing work will interfere with my attempt to build my new career in the communications sector. But I’m still hopeful for my own future. To be continued . . . 

Tough Times Call for Creativity I: On Getting the Book, A History Boy, 30/12/2014

From nearly eight years ago, when I'd first decided to
become a university professor. Doesn't feel like it's
me anymore.
I feel extremely lucky that I had someone like Barry Allen at McMaster University for my dissertation supervisor (and thankful not only for his help, but that Evan Simpson at Memorial University recommended him as a supervisor with whom I would get on well; that I certainly did). From the perspective of a 25-year-old fresh doctoral student starting what he thought would be a vibrant and inspirational career as a university professor, teacher, and researcher, I always conceived my dissertation as the first draft of my first published book of philosophy. 

This is not how most doctoral students approach their dissertation. It’s supposed to be a forgettable document that proves you have the basic capacity to research a complicated project and critically summarize it in something approaching a sane manner. It’s typically filled with disciplinary jargon and meandering, passive-voice sentences that are difficult to read. 

My first attempts in 2012 to approach publishers saw me brushed off because they presumed that I had written that kind of document. I was recommended a short book called From Dissertation to Book that would show me everything I had to change, only to read it and discover that I had done it all already.

So it’s a happy irony that the same year I decide to leave the university sector entirely, I end up with a contract offer to publish the manuscript that began as my dissertation in environmental philosophy. Over the last few weeks since my Fall semester at Sheridan ended, I’ve been editing this text, and I’m about halfway through the chapter edits right now. There’s still a new introduction to write, which I’ll do after I finish these edits.

Ecology, Ethics, and the Future of Humanity is a weird book. I say this as its author. I just finished the edits on chapter four of seven, which is where I lay out the manuscript’s key concepts. You’d think this was a task for the first chapter, not the fourth. And in most circumstances, you’d be right. 

But Ecology, Ethics, and the Future of Humanity is a weird book. Part of its hook is that it isn’t quite like other books of philosophy. The reason is rooted in my conception of what philosophy is. I’m not talking about the intellectual discipline, per se, but the definitive activity of philosophy, creating concepts. Creating new ways to understand the world itself, and the accompanying new ideas of what it is to be human.

The reason we need to create new ways to understand the world is always rooted in a political problem: a human civilization is doing something that has become extremely counter-productive, and we need to work out new ways to organize ourselves socially and institutionally to stop our self-destructive activity. 

Our current mistake is our enormous industry, the horrifyingly destructive industrial processes around which so much in our civilization is built. We need a post-human moment, a re-conception of what we are, to repair it. We can clean up the mess, but we need to change how we go about living if we want to avoid creating future messes. You can’t genuinely change humanity’s world if you change all the institutions while leaving the human soul, our conception of what we are and what we want, the same. 

This is absurdly difficult, of course. So difficult that I’m not even sure that it’s ever been done. The task of transforming the human soul into something better than it is today has been the central motive of every utopian movement that was worth the name. It’s the most profound ethical idealism in human existence, the call to be better than we are, and the belief that, contrary to all evidence, it’s actually possible. Philosophy is the application of human intellect and reason to this motive. 

The first three chapters of Ecology, Ethics, and the Future of Humanity spell out my own account of the environmental crisis. We’re all familiar with it, but just as I have my own take on the answer, I have my own description of the problem, for which I develop a few concepts. But the really important concepts come after I’ve laid out the problem to which they’re an answer. To be continued . . . 

Crisis Management Through Stillness, Research Time, 29/12/2014

Last week, I discovered an article on Gawker that intrigued me because of the lesson it offered for me in my training to become a communications professional. Taking this narrative seriously, and I think we should, means that a lot of what I’m learning about proper crisis communications is extremely inappropriate for online media. And it gives further backing to my conviction that a strong communications professional needs deep knowledge of theoretical media analysis.

Gawker Media should not be considered a very reliable
source of news. But it is a reliable source of examples of
how screwed modern digital media is.
First, you should read the article, by Sam Biddle, who is one of the top bloggers at Gawker Media. In late 2013, Biddle noticed some online outrage over a tweet from Justine Sacco, the head of online communications for the IAC digital media conglomerate. As she travelled with her family on a vacation to South Africa, she made a joke about the relatively lower HIV infection rates among white Africans compared to black Africans. It was in absurdly poor taste, and Biddle wrote about the tweet at Gawker to increase traffic.

As I’ve written before, the content of digital media tends to a yellow press model: generating as much sensationalism as possible to provoke the highest number of click-throughs because their revenue comes from advertisers paying per page load. Outrage and anger are the most efficient ways to attract click-throughs, and so it was to Gawker's benefit (more clicks, more ad revenue) to sensationalize and spread Sacco’s idiocy as far as possible. 

That idiocy spread very well because of how viscerally satisfying it is to hate a stranger online. Sacco couldn’t respond in real time because she was on a web blackout in an airplane flying from the United States to South Africa, a flight that lasts almost half a day. But Biddle's real lesson doesn't just lie in Sacco’s problem, but in his own image crisis when he tweeted an insult to Gamergaters that made it seem like he believed it was good to bully and beat up nerds.*

Of all the people embroiled in #Gamergate, Anita
Sarkeesian was the one most concerned with the ethics
of video games and games journalism.
* I think nerd culture has a lot to answer for, and that Gamergate revealed a festering sickness in the culture of male hardcore gamers. It was a harassment campaign started in reaction to a rambling, hostile blog post by a man who was angry and resentful about his ex-girlfriend which, while ostensibly about transparency in games journalism, never targeted any journalists. 

Traditional media's channel of information flow moves in a single direction: groups of professional gatherers and interpreters of information (journalists) disseminate reports. Even at the high speeds of the 24-hour cable news cycle, this is still slow enough for some critical thought to enter the final production.

Digital media is a continually roiling stream of new information. My classes call digital media a two-way communications medium, but I don't think this is adequate to the actual flow of content. The relationship between digital and traditional media can be called two-way, because print and television journalists gather information from user-generated media, while digital media users gather their information from journalists. But considered in itself, digital media is omnidirectionally scale-free.

It’s scale-free because it organizes the same way the internet does. There are ordinary, small folks who make up the majority of users. They’re mostly consumers, and when they do publish (Facebook or Ello posts, tweets, Instagram shots), it’s usually only personal friends from the physical world or online communities who see them. 

Then there are the people who have more followers than people they follow. As the ratio of followers to following grows, that person has an increasing amount of power to disseminate information. The journalists who work for online media outlets (outgrowths of the traditional media, or all-online venues like Gawker, Breitbart, and Huffington) have prominent positions, but their power directly flows from the strength of their followings. What matters is their place as hubs in a network of information creation and consumption.
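The hub structure described above can be illustrated with a toy simulation. This is my own sketch, not anything from the post or from network-science literature verbatim: a minimal preferential-attachment model in which each new account's follow goes to an existing account with probability proportional to how many followers it already has. The names and numbers here are all illustrative assumptions, but the result shows the scale-free shape: a few enormous hubs and a long tail of tiny accounts.

```python
import random

# Toy preferential-attachment sketch: newcomers tend to follow
# accounts that are already widely followed, producing a handful
# of huge hubs and a long tail of small accounts.
random.seed(42)

followers = {0: 1, 1: 1}  # seed network: two accounts, one follower each

for new_user in range(2, 5000):
    # Chance of gaining the new follower is proportional to
    # the followers an account already has ("rich get richer").
    accounts = list(followers.keys())
    weights = list(followers.values())
    target = random.choices(accounts, weights=weights, k=1)[0]
    followers[target] += 1
    followers[new_user] = 1  # every new account starts with one follower

counts = sorted(followers.values(), reverse=True)
print("biggest hub:", counts[0])                    # a few accounts dominate
print("median account:", counts[len(counts) // 2])  # most stay tiny
```

The point of the sketch is only the shape of the distribution: power to disseminate is concentrated in the hubs, yet every node, hub or not, remains a single channel.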

Famous asshole Dan Bilzerian is one of the most powerful
people on the internet with regard to the reach of his
following. But he is still just one single voice.
Online media is omnidirectional because communication occurs in any direction among the network's links. Elites (the network hubs) exchange information, but ordinary folks do as well, and sometimes conversations occur between elites and ordinaires. I’m just a guy starting a career in communications who blogs about his writing. But I’ve chatted with some remarkably famous people.**

** Yes, all of them have been in the Doctor Who community, but it still counts. Gareth Roberts thanked me for my glowing review of The Caretaker, and Christopher Bidmead chatted with me over my thoughts about a video he’d linked on entropy as the ground of a physics-based definition of life.

The particular implications of this radically different media organization for crisis communications have yet to break into communications education, at least in my experience. Crisis communications typically revolves around engaging the media as thickly as possible. You engage with every expression of the narrative you want to stop and counter it with your own message and image.

One example that’s come up in my own classes quite frequently is Maple Leaf Foods, whose upfront apology over 2008's fatal listeria contamination at a plant was extremely counter-intuitive for a profession that is ostensibly about controlling public discourse to maintain positive attitudes toward your clients. Their apology dominated all communications from the company, and it played everywhere. 

Yet it did so through a single channel: Maple Leaf, to television and newspaper information dissemination outlets, to an audience of receivers. The receivers could not push their own reactions back through that channel. Although a given network hub in digital media may have an enormous number of followers, it is still one channel, as much a receiver as a producer of information. While many people may receive a communication, each of them can reply with the same individual force.

An example of an ordinary journalist tweeting a joke at
a corporate twitter feed, and getting a brilliant convo.
Omnidirectionality at work.
So power over information is ostensible at best, and can always be overrun. Communication power in digital media relies not on mass broadcast, as in the old model, but mass mobilization. But that mobilization is easily distracted simply because there is so much communication in digital media constantly flowing.

In crisis communication, this information flow is combative, and at a very high intensity, because every receiver is also a producer whose power as a producer is equal to anyone else's. Each person has only one channel. There is a difference in power only in how many people receive that channel at once. As communicators, everyone is on equal footing.

So the voice of a person combatively disbelieving a crisis communication (a corrective, critique, defence, or new narrative) is just as powerful as the voice of the person in crisis. Each move in a combative communication continues the conflict, achieving the exact opposite of what the engagement of crisis communications intends.

Hence the final advice. Do nothing. Digital media communication moves so fast that the equivalent of dominating the 24-hour cable news cycle is to trend for two hours on Twitter. Have a crisis management message prepared for conventional single-flow media, where you can actually dominate a channel, and let the social media storm die down on its own.

They’ll be on to cats, puppies, and the KKK before you know it.

To Live In Dreams, Doctor Who: Last Christmas, 27/12/2014

My girlfriend’s dad died almost three years ago, and she misses him. Even though I never met him, I miss him too, because of all the things she’s told me about him that make me think we really would have gotten along smashingly. Like everyone we miss, she sometimes dreams that he’s still around, and that they meet up for coffee and she tells him about how life is going.

The existence of Santa Claus is completely sensible and
believable, given everything else that can happen in
Doctor Who, and it's why I love this show.

I thought of her dreams of her father when I saw Danny Pink in Last Christmas, when he makes Clara promise him that she’ll stay alive, even though it means leaving the dream that’s been created for her to live out the rest of her life in happiness with him. “Miss me for five minutes, five solid minutes, every day.”

There are many strange miracles in this year’s Doctor Who Xmas special, the most obvious of which is the constant appearance of Nick Frost as Santa Claus to rescue the characters from their dream-states, in which face-hugging aliens* sedate their victims while feeding on their brains. This is a story about dreams, the most overused symbol, image, and concept in the history of Western philosophy. 

* Yes, they make the Alien reference explicitly, and I love how the Doctor calls out the title for how offensive it is to perfectly ordinary extra-terrestrials like him.

Perhaps I say it is overused because to analyze the dream philosophically seems so run-down by now. It’s a common cliché of philosophy, a piece of shoddy rhetoric used to dismiss the discipline more than a genuine engagement with it. “Philosophy, isn’t that the class where you pay thousands in university tuition to wonder whether or not you're dreaming?” 

The design of the alien crab is brilliant, a hand when on
its own, but with a mouth when wrapped around a face.
And yes, it is incredibly suggestive of the same freaky
sexuality of H. R. Giger's facehugger. But Doctor Who
doesn't dwell on it. Just not its style.
And university philosophy often confirms the hostility. One of the best texts to introduce students to philosophical reflection and thinking is Descartes’ Meditations on First Philosophy, and one of the first problems he raises is that, because the experience of a dream is indistinguishable from the experience of real life, we are left doubting whether we are in a physical world or dreaming at any given moment.

The less said about the twisting corridors of sexual dysfunctions and obsessiveness in the Freudian canon the better, but if anyone is going to make use of the Freudian canon, you’d best put on a condom.**

** My girlfriend and I also watched a Marx Brothers movie after throwing on Last Christmas, because Netflix is dropping Duck Soup from its listings on the first day of 2015.

But the dream is a place for strange occurrences and parallelisms in Doctor Who. Tegan Jovanka found haunting images of her friends and companions when in the Mara’s dreamspace as it assaulted her mind. The Land of Fiction blended an entire cultural imaginary with the logic of Lewis Carroll and a dream. The trace of Amy’s Choice played through the entire character arc of Amy Pond in her first season. Last Christmas finds the plane of dreaming serving as a strange kind of afterlife. Danny Pink in this episode is no mere construct of a carnivorous crab, no facsimile. He is clearly there, still caring for Clara, urging her to fight the creatures that would kill her. 

Yearning for some kind of life after death is one of the most universal reasons why people turn to religion. Although I’m coming to join a religious tradition myself, my reasons don’t have to do with this assurance. I think I’ll always doubt whether there is any kind of afterlife for a human subjectivity as we experience it. 

Danny's calling Clara's extra time with him a bonus
reminded me of Pete Tyler's happiness over "all these
extra hours" with his wife, baby, and grown daughter
from the future.
Yet Last Christmas suggests a charming and fascinating idea. A pure speculation of course. To call it idle would give it too much credit. 

Science-fiction used to be about Hard Science. Not the disciplines of physics, chemistry, and the like, which are often called the hard sciences. I mean a view of the world that reduces all phenomena to the easily explicable. A world where all myths exist to be explained away. 

I remember this idea underlying the early Asimov, early Clarke, and the Larry Niven books that I used to read as a teenager. It’s the reductive sort of materialism that I find so intolerably pig-headed in its most popular current expression, the rantings of the New Atheists. That stupidity is why I’m always so happy to find a writer or sci-fi approach where the material and mundane become mythic, like Clarice Lispector, or Doctor Who.

Dreams in Last Christmas are a shared psychic space, still linked to the physical body, but where echoes of what and who we've lost exist, not only as images but as agencies. It’s the afterlife as the continuation of a life that’s passed on in memory. 

I have no idea what next season's storyline for Clara will
be, as this past year has given her brilliant work. But she's
right to stay on the show for at least one more year. It's not
just that she's a wonderful actor and companion, and not
just that I didn't want her character to leave on such a sad
note, but because Doctor Who is seriously an artistic
pinnacle for any actor who appears on it. They may never
get such good work regularly as they did on Doctor Who.
Henri Bergson theorized that each person’s past was a part of them, and memory just a sorting tool to select elements of that past which were relevant to a present situation to understand it and act. Because the durations of each individual life and body interacted, they combined to form a kind of universal duration. Existence itself can be considered a single life, and its past is searchable. The presence of the past is memory.

Last Christmas offers an image of the past, and past durations like Danny Pink, that retain their agency even as they only exist as the past of those who knew and interacted with them. Because all of the past is retained in the present, although it is rarely accessible, the agencies of the past are preserved as well. Dreams are here the archaeology of a past that still lives as long as we do. The afterlife is the retention of agency in the echo of a life.

Perhaps in a few years, I'll have tea (and probably still too many cigarettes) with my mother in a dream some nights, just as my girlfriend still meets with her dad. There are worse visions of the afterlife.

The Science of Refinement, Composing, 22/12/2014

If I were in film full-time, this would be where I did my
editing, and my life in general would have a much
higher overhead.
Keeping it light today with another post about my editing process on the Ecophilosophy manuscript. Here’s just a quick example of the kind of thing that I do, if you’re not sure what an editing process on a manuscript like this is all about. 

One of the recurring events as I go through this manuscript is me shaking my head in relative shame at some phrase that I included in the original final draft two years ago. I even have moments where I wonder why my committee gave me the damn degree for this work. As I go through the text today, I see so many ways to improve my expression and drastically change quite a few sentences around. 

I like to think that all good writers go through this thought process when they revisit older work that may have sat in a vault for a while. Now, I have no idea whether they actually do. I just like to think this to maintain my self-confidence as a writer. 

Here's an example of the kind of cuts I make and why. I cut the italic part of this sentence, for a reason that I'll explain below.
“Nuanced philosophical reasoning requires precisation because philosophical concepts and systems are so technical and complex enough that each sentence describing a philosophical system must be written with the most exact meaning possible.”
Appropriately, this section of the manuscript is about an approach to philosophical analysis that focusses on making statements more precise, one of the early works of Arne Næss that I discussed last week. The core idea behind this conception of preciseness is that a short statement is usually quite vague or ambiguous. It can have many different interpretations, all of which are valid because they flow from sensible speculation on how the words should be understood. 

These interpretations might not even be mutually compatible, which makes sense because most unproductive philosophical conversations are just people arguing over mutually contradictory interpretations of the same relatively vague sentence or set of sentences. Making a statement more precise means that its set of possible interpretations shrinks without including any new interpretations that weren't in the original, vague set.
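The shrinking-set idea above can be put in a tiny formal sketch. This is my own toy model, not Næss's actual formalism: each statement is paired with the set of interpretations it admits, and a revision counts as a precisation only when its interpretation set is a strict subset of the original's, shrinking without admitting anything new. All the interpretation names below are made up for illustration.

```python
# Toy model of precisation: a statement's meaning is modeled as the
# set of interpretations it admits. A revision is a precisation when
# its set is a strict subset of the original's -- fewer readings,
# and no new ones smuggled in.

def is_precisation(original: set, revised: set) -> bool:
    """True if `revised` narrows `original` without adding interpretations."""
    return revised < original  # strict subset check

vague = {"reading-1", "reading-2", "reading-3", "reading-4"}
narrowed = {"reading-1", "reading-2"}
sideways = {"reading-1", "reading-5"}  # drops readings but adds a new one

print(is_precisation(vague, narrowed))  # True: genuine narrowing
print(is_precisation(vague, sideways))  # False: introduces a new reading
```

The strict-subset condition captures both halves of the definition at once: the set must shrink, and nothing outside the original set may appear.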

Here’s why I made the cut: the meaning of a statement becomes more precise as you add more material. Grammatically speaking, a very precise statement will contain far more than one single sentence. Most sentences, taken on their own, are actually rather ambiguous. Only when you add more and more qualifications, elaborations, and general detail does what you say start to become more precise. 

So a sentence would inevitably be rather vague, especially if its subject is rather complex and open to many possible interpretations. A paragraph's worth of explanation less so, and as your statement becomes more precise, it grows longer to account for more details and possibilities. So while it’s important that each sentence be written with the most precision possible, its size forces a natural limit. And I didn’t want to focus a reader's attention on individual sentences as the sole vehicle of philosophical meaning. 

What Is Life? There Is No Mystery, Jamming, 20/12/2014

I follow quite a few people from the world of Doctor Who on Twitter. In most cases, I get a wonderful glimpse into the thoughts of one of England's crankiest men (mid-1980s Doctor Colin Baker), notices of plucky theatre productions and charitable drives (1960s and 1970s companions Frazer Hines and Louise Jameson), and steadfast promotion for television shows so terrible that it constitutes a model case of entertainment industry professionalism (Karen Gillan's enthusiastic promotion of her absolutely terrible American sitcom, Selfie, which I badly hope doesn't ruin her career, as she's a wonderful actress who just needs material that measures up to her talent).

But Christopher Bidmead is perhaps the most rewarding single Twitter feed among my Doctor Who list. Bidmead is a science writer who was the script editor of the show in its 18th season (succeeding Douglas Adams, the toughest act to follow). He only held the post for that single year, but there was no other year like it in the entire history of Doctor Who. The stories produced in his short tenure skillfully combined thoughtfulness with suspense, adventure with science and philosophy, and some impressively weird imagery.

Saturday morning, Bidmead shared this video, which I thought I would share, even at risk of its being a too-long-didn't-watch at a length of an hour.

It's a lecture by physicist Jeremy England about how to define life according to the terms of physics. It was quite fortuitous for me that Bidmead would link this video after I just wrapped up the most sustained philosophical argument on my own blog. I was honoured that SK, one of the regular semi-troll commenters on Phil Sandifer's website, visited my old post about the Star Trek episode "The Measure of a Man" from this May after Vaka Rangi linked to it in his discussion of that story.

The Star Trek episode is a cheekily over-dramatized trial, where Starfleet must decide whether Data has any rights, or if, as a machine, he is the institution's property. SK kept hammering me on the old point in philosophy of mind, that because Data is a machine, his entire person is a series of mechanical movements and reactions. And because he is determined entirely by physical causes, he does not truly think and express those thoughts, only precisely reproduce exactly what a creature (who is not a machine, so not subject to strict physical determinism) would say.

This juvenile way of thinking about physics and physical determinism is one of the central reasons why philosophy of mind doesn't get that much respect anymore in the scientific community. The conception of physical determinism as strictly linear A-causes-B-causes-C hasn't held much water in the physics community at least since James Clerk Maxwell. The development of the dynamic branches of physics and their cascading effects on the fundamental concepts of every other science is either forgotten or unknown in the general population, who still think of determinism after the fashion of Isaac Newton and David Hume's billiard ball analogy.

Throughout my time as a university-based philosopher, I regularly interacted with people WITH PHDS who said the scientific concept of emergence was nonsense, or who seriously described in professional contexts the neurological activity that constituted personality as "the mind stuff that brains do." My colleague in Texas, Levi Bryant, still regularly vents his frustrations on Facebook about a trend in philosophy to revive old-school vitalist biology, whose key postulate was that there had to be a fundamental force of nature, vitality, that animated organic bodies to be exemptions from the universe's strictly deterministic causality.

All that said, it is very difficult to use the fundamental concepts of physics to understand life. In their mathematical descriptions, all physical processes are fundamentally reversible. Yet living processes never flow in reverse. England's talk discusses how entropy, one of the fundamental concepts of Maxwell's thermodynamics (the revolutionary field that ultimately crushed the Newtonian model of linear causality) can be used to define what life is.

Living bodies, described in the terms of physics, are bodies whose internal processes tend to maximize the efficiency of entropy production.
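To give that claim a rough physical handle, here is a back-of-the-envelope sketch of my own, not anything from England's talk: the entropy a body exports to its surroundings by dissipating heat Q into a reservoir at temperature T is the Clausius quantity S = Q/T. The function name and the "resting human at roughly 100 W" figure are my own illustrative assumptions.

```python
# Clausius entropy export: the entropy transferred to the
# surroundings when a body dissipates heat Q (joules) into a
# reservoir at temperature T (kelvin) is S = Q / T.

def entropy_exported(heat_joules: float, temp_kelvin: float) -> float:
    """Entropy transferred to a thermal reservoir, in J/K."""
    if temp_kelvin <= 0:
        raise ValueError("temperature must be positive (kelvin)")
    return heat_joules / temp_kelvin

# A resting human dissipates very roughly 100 W into ~300 K surroundings:
per_second = entropy_exported(100.0, 300.0)
print(round(per_second, 3), "J/K per second")
```

The number itself matters less than the direction: a living body is constantly pumping entropy outward, and England's proposal concerns how efficiently its internal organization drives that production.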

Of course, the meaning of life as a whole goes far beyond the principle. Reductionism is a stupid way to think as well. But if you're wondering what physical principle might be held in common between me, you, a tree, a lichen, an ant, a trilobite, a paramecium, a plankton, a bivalve, and a herpes virus, entropy efficiency maximization is a pretty reasonable notion.

When Even the Idealists Can’t Get It Right, Jamming, 19/12/2014

I found it very fitting that I was editing the third chapter of my Ecophilosophy manuscript when I read this morning about Greenpeace’s protest stunt at the Nazca Lines. The third chapter revolves around how philosophy can best serve direct political action. I discuss it in the context of the environmentalist movement, but it applies to all political activism generally. 

While Næss’ own articulation of the idea that I trace in the chapter has some troubling implications, there is at heart a promising way that a bunch of professional thinkers can play important roles in political activism. And no, it isn’t a reference to Marx. The Theses on Feuerbach are so well-known* that referring to them for an answer to this problem is almost like a cheat code.

* Well, the only one that ever really gets quoted is the philosophy-interprets-the-world-but-the-point-is-to-change-it one. 

No, the more promising role that a philosopher can play in a political movement is the voice of the ideal, the movement’s own self-critic who, as it were, makes sure everyone checks themselves before they wreck themselves. I feel like Greenpeace could have used someone like this.

Huge, theatrical protests are necessary in an era of enormous industry. We live in a world where the Pacific Trash Vortex exists, where the Aral Sea became the Aralkum Desert in a generation, and where the largest geographical feature in whole towns is a giant mound of shattered computer parts. Also, just Google image search open pit mines for some horrifying environmental photography.

But the irony of this is simply too delicious. Greenpeace, the world’s most famous militant environmentalist organization, is dedicated to protecting the world’s most vulnerable territories and ecosystems. And their message of protest to the international climate summit in Lima consisted in ignorantly damaging a vulnerable territory by hiking through the middle of the night in total darkness all over millennia-old art made by lightly brushing black stones away from light-grey earth, just to unveil a bunch of big yellow letters spelling out a corny slogan to reduce greenhouse emissions.

VICE was nice enough to embed the CBS News report in their post about it yesterday, so I know enough about the Peruvian government’s stance to thoroughly disagree with its contention that criminal charges against the individual activists will deter people from future protests that interfere with the Lines. Punishing perpetrators never deters crime.

I also don’t want this to be one of those anti-Greenpeace posts, which seems to be close to a universal reaction as I scan through the online responses. I’m still energized by the punk energy of these hippies as they carelessly trudge through the most fragile art on Earth to lay their protest banner. When I read their just-barely-apology on their Facebook page, I kind of want to slap them for their contemptuous attitude.

But that’s why we need punks in all their forms, to shake us out of complacency by openly spitting on all our eyes. Even when that energy is so misdirected and stupidly employed that we’re entirely justified in slapping them back as hard as we can. Human society needs punks.

It’s Pronounced Næss I: A Hidden Oracle, A History Boy, 18/12/2014

Arne Næss wasn’t exactly hidden, but he isn’t as prominent in typical histories of philosophy as I think he should be. He plays a major role in my Ecophilosophy manuscript, returning multiple times to the manuscript’s argument, each time with a different aspect of his thought in focus, depending on the context of the point in the book’s overall plot where he appears.*

* I realized a while ago that a lot of my writing works by creative repetition, cycling through several key ideas or themes, letting what’s come before change the new occurrence of each concept or image. This is true for my philosophy and my fiction work.

I wanted to find a picture of a fairly young Arne Næss, if
only to be different from almost every other photo out there.
I’ve never really written about Næss on the blog before, even though engaging with his work was vitally important for developing my Ecophilosophy manuscript in the first place. This was the first seriously large production in philosophy that I’ve ever done, and writing it marked a serious transition in my abilities to write and to plan long-form works. 

Most of my research and interpretation of Næss' ideas was already behind me long before I started the Adam Writes Everything blog in July 2013, so I never wrote much about it. Næss’ philosophy played an important role in the Ecophilosophy manuscript, as one major goal that ran throughout the entire project was to rescue his core concepts from the relative neglect into which they’d fallen in the discipline.

But one of the final results of that manuscript is in pointing out how, while Næss’ ideas get the ball rolling on a genuinely ecological philosophy, they also contain their own limitations that prevent him from finishing the job. My manuscript exists to finish the job.

Næss spent a lot of his career in a strange position that way, always ahead of what would turn out to be the curve, but in ways that never had any direct successors or imitators in the discipline of philosophy. For example, even though he’s mostly known today for the works he composed later in his life on environmental and ecocentric ethics, did you know that Arne Næss was a member of the Vienna Circle?

Næss' entire family live interesting
lives. To illustrate, these are two of Arne
Næss' grandchildren, Ross and Evan,
with their mother, Diana Ross.
You probably didn’t. But Næss didn’t die until he was 96 years old, and that was in 2009. Being born in 1912 gave him the chance to experience every major change in analytic philosophy in the 20th century. He isn’t remembered as a major member of the Vienna Circle because he was never a protagonist in their historically significant debates or a direct influence on any of their followers or successors. As well, he spent most of his professional life in Norway, and Oslo was too far outside the major hubs of analytic philosophy’s social networks during its formative years for him to have been directly influential.

But his work on logic during the 1930s and 1940s, his first major development as a young philosopher, became the foundation of his first major masterwork, Interpretation and Preciseness. It was an application of set theory to argumentation theory, a detailed formal account of how the act of making a statement more precise actually works. Basically, a description becomes more precise as you add more content and qualifiers, so long as none of those new qualifiers admits any possible interpretation beyond what the previous, more general description allowed.
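In my own shorthand, which is not Næss' actual notation: write I(U) for the set of plausible interpretations of an utterance U. Then precization looks like a strict set inclusion.

```latex
% T is a precization of U when every plausible interpretation
% of T is also a plausible interpretation of U, but not conversely:
I(T) \subsetneq I(U)
% Adding content and qualifiers shrinks the interpretation set
% without admitting readings the vaguer statement did not allow.
```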

As well, his period with the logical positivists produced a remarkable early product of his career, Cognition and Scientific Behaviour. This was the first serious work in what has become a rather hot new field, experimental philosophy. Næss literally designed a survey and what we would today call a focus group protocol, which tested people’s intuitions about basic scientific and philosophical concepts like causality. 

This method of doing philosophy, slamming the empirical techniques of the social sciences into what’s typically a purely contemplative practice, is considered controversial and strange even today, when it’s practiced in the labs and campuses of Joshua Knobe and Jonathan Weinberg. Consider how this must have gone over in interwar Europe.

Næss, for much of his career, was a pioneer of new techniques and ideas in philosophy, though that status was only granted retroactively, since he had no direct successors. He was a harbinger of philosophy’s future in experimentation and an intriguing side contributor to a significant movement. Until he read Silent Spring. . . To be continued

The Masochistic Pleasure of Mourning a Tradition, Composing, 17/12/2014

Martin Heidegger appears a couple of times in my Ecophilosophy manuscript, and I edited the last half of chapter two today, which revolves around how he influenced environmental moral philosophy. Of course, Heidegger influenced one entire side of the disciplinary parallelism of the last century of philosophy.

He demands your absolute attention and obedience. No
wonder I don't like Heidegger.
His work has an unfortunate gravity to it. I remark when I first discuss him in chapter two that it is a difficult task to bring Heidegger into any philosophical discussion without it soon becoming all about Heidegger. Nietzsche wrote disparagingly of the spirit of gravity that appears in too much Western thinking of his own time (and later times as well), a powerful despair that draws forces and currents from far away spiralling down into itself. Heidegger embodies such a spirit of gravity, possibly the greatest in the Western tradition.*

* I’d venture that Jacques Derrida offers a similar gravity, thanks to the density of his language and the hypnotic power of the cyclical aporias out of which he builds his arguments. But Derrida found his most powerful gravity when he was channelling Heidegger’s themes, carrying on a detailed post-mortem after his predecessor’s diagnosis of philosophy’s death.

Heidegger’s influence in environmental philosophy is amorphous. He’s often talked about informally by writers and activists, but rarely as a figure of explicit analysis. No one declares themselves a Heideggerian in the typically very left-wing communities of environmental philosophy and political activism (I wonder why?). But the basic idea in his critique of technology is profoundly insightful, even if its surrounding conceptual framework can be utterly hateful and contemptuous of human variety in a profoundly terrible sense.

The roots of technology go back a long way in Western culture, according to Heidegger. He thinks they only go back to Plato, thanks to his veneration of the pre-Socratic sage philosophers. I actually think the key attitude finds its origin in the early days of hominid species, maybe even in the imperatives of the most rudimentary forms of self-consciousness. It is to see the world as a resource: all your thinking is ultimately oriented to working out different ways to use the world. Heidegger, and the more contemplative of environmentalist philosophers and activists, just want you to see it.

This insight led to Heidegger’s uptake by environmentalist philosophers. But it can only go so far. Environmental philosophy’s origin as a self-conscious sub-discipline lies in activism, and Heidegger’s thought includes a pervasive quietism. The actions of people cannot save humanity. “Only a god can save us.” Only the movements of nature itself, and a people’s readiness and capacity to be moved in the correct way by nature, can drive genuine change. 

For a writer who uses martial metaphors as much as Heidegger when discussing the practice of philosophy, the man’s work provokes a feeling of powerlessness. Here was a man who saw the entire history of the discipline and tradition of writing and thought, Western philosophy, as having been in decline almost from its very beginning, the downfall having begun with Plato and Socrates. Heidegger’s own era, in his view, was the aftermath of the final crash in the work of Nietzsche.** Just as the language and life of the Greek culture in the era of Anaximander revealed being itself in a moment when people could be perfect conduits of nature, so were the language and life of the German culture in the era of Heidegger ready for a new beginning.

** Another reason to despise Heidegger as much as he surely would have despised you: he treats Nietzsche as a crashing end, rather than the glorious new beginning that so much of his thought was. 

Instead, there was the crash of the Nazi era, and the growth of environmental catastrophe. Since Heidegger held such stringent beliefs about the inability of most peoples in most eras to think being, once the critical moment passes, there is no chance of salvation anymore. He spent his career chasing false gods, mourning lost opportunities, and pining for a future deliverance beyond any of our control.

I think he enjoyed despair.

Fear of a Green Conservatism, Composing, 16/12/2014

Now that my vacation from the obligation to travel to Oakville and back four days a week has started, I’m still working. I just get to do it in my pyjamas. I’m back at the writing desk again.

This time, I’m going through my Ecophilosophy manuscript. Some very pleasant things have come together regarding publishers over the past couple of weeks, so I’m polishing the manuscript in detail to update the style to the more vernacular approach that I’ve developed through the blog. I spent a good chunk of today working on the second chapter, which is mostly about why political opponents of environmentalism see it as a threat to democracy.

Enough lobbyist money will even turn you against this
Nixon-founded bastion of liberal anti-economy claptrap.
On an intellectually basic level, most political opposition to environmentalism is based on the idea that anything that’s good for the environment is bad for the economy. This mostly comes out of the mouths of conservative politicians who are speaking on behalf of their petroleum company and chemical company lobbyists. In the Canadian case, there are more profound affinities between the Harper government and the oil industry, but I’ll save those comments for later, once I start working through Donald Gutstein’s Harperism. That’ll be my fun reading after I finish this old Joseph Conrad I picked up at a bookstore bargain bin a while back.

But the more philosophically profound part of that chapter is about defending a basic idea behind liberal political thought from what is seen as an environmentalist challenge. This goes beyond the simple idea that too many people still hold (and that I examine in detail in this chapter), that humanity and nature are essentially in a zero-sum game, and that valuing nature means debasing humanity. To give you the short version of my argument against this: Well, no.

The typical liberal I discuss in this section sees that challenge as an imperative to give up on human freedom to save nature, reading the environmentalist challenge as the contention that the best, most authentic place for humanity is nestled into a sustainable community enjoying the fruits of nature.

If I am free, goes the liberal opposition to this vision, then I should be free to break from the conditions of my home place. That break could very well involve ecologically destructive activity, or perhaps not. The point is that humanity should be open to this freedom, or else it is constrained by the social conservatism of culture and duties to the environment. Human freedom lies in the power to break radically from where you grew and create an entirely new line of flight.

I’ve felt like this about my own life before. This was very much how I felt when I left St. John’s in 2008 to come to Ontario for my doctoral program. I felt enormous pressure to stay in St. John’s from the wider culture, but it only chafed me. I wanted a new beginning, and I knew it would liberate me. When a liberal hears about the environmentalist vision of small sustainable towns of people in mutually nurturing relationships with their land, they’re rightly afraid that this heralds a stagnating humanity, that environmentalism is a radical social conservatism that has found itself emerging from the political left.

The short version here is also: Well, no. The slightly longer version, which will be just as unsatisfying because my whole argument is a 10,000 word book chapter, is that the ability to break radically with your past is never taken away. The environmentalist vision of a sustainable community enforces no greater social conservatism than any other political philosophy. If anything, its model figures are more wary of social conservatism because environmentalism grew up in counter-culture. 

Environmentalism simply adds a new set of stupid decisions to the list of avenues that human freedom has stumbled down and discovered to be terribly destructive and generally awful. It’s the new critique of freedom, so that the next time we exercise our freedoms to begin a new life, we won’t repeat the mistakes of the last time, destroying our ecologies in the name of some amorphous progress.

Change and newness are necessary for evolutionary development at a biological and cultural level. We must know what does and doesn’t work. Today, we know that environmentally destructive habits largely don’t work. Liberal freedom to start new lives is how we begin future experiments, some of which will be successful, and most of which won’t. Freedom is the ability to break. Wisdom in freedom is the knowledge not to break bad a second time.

Grounding the Right to Oppress: Taking Ability for Quantity, Composing, 15/12/2014

This weekend, I wrote up a review of a book that will be appearing at the Social Epistemology Review and Reply Collective later this month. The book I’m talking about is an essay collection called Post- and Transhumanism, which is a general introduction to the topic through chapters that each examine a key historical figure or specific subject.

This is something of a preview and a little more paratext on my review. The short version of what I actually think about the book is that it had some good essays, and some bad essays, with some interesting insights, but nothing really to write home about. I think one flaw is that everything maintained a very introductory focus, as if all the writers wanted to show how much they knew, but didn’t really want to challenge the reader to think on her own.

However, that’s not what I focussed my review on. It isn’t what I’m going to talk about today either. I wanted to mention a curious little argument that came up in one of these essays, which implied a conception of how moral standing works that I find genuinely perplexing to discuss. I consider it a mistake in thinking that interferes with the ability to do productive philosophy. 

The arc of my formal review tended to a different direction. I concentrated directly on the problems of conceiving transhumanism (philosophical speculation and preparation for the utopian improvement of human life through biological enhancements) and posthumanism (the set of philosophies from Nietzsche onward that focus on a general issue of overcoming humanity) as basically the same thing. Short version of this theme: Don’t do that.

If Steve Buscemi got cybernetic enhancements,
would he have the right to control me, as
someone without such enhancements?
The argument about moral standing goes like this. Robert Ranisch, the co-editor of this collection, includes an essay in the book called “Morality,” where he discusses different moral ideas and problems that occur in transhumanist thinking. This includes a discussion of how human enhancement will affect the moral standing of the human race. In particular, how differences in moral standing between the enhanced and the non-enhanced (or among gradations of superficially to deeply enhanced) will affect the moral standing of different people within humanity.

It’s a common argument in Western philosophy to ground moral standing in cognitive capacity: intelligence, a sense of self-consciousness, the intellectual ability to engage in moral discourse, empathetic and sympathetic powers. The sense of the term ‘enhanced’ in transhumanist discussions is usually very vague, but in the context of Ranisch’s argument (or rather, this small part), it refers to an enhancement of these powers that ground the moral weight accorded to a person.

If enhanced people have greater moral powers and abilities, then they’ll have greater moral standing than non-enhanced humans. The higher moral standing of the enhanced means that they could legitimately oppress humans who are still like us today. He is literally describing the technological advancement of personhood itself, such that an enhanced human would be more of a person than you or me.

Even though this wasn’t a major element of his piece, which was more about general moral issues and indicating problems with some of them, it stuck with me as having made a fundamental error about the nature of how morality really works. Well, let me rephrase that. Ranisch’s conception of morality works just fine in the context of philosophical discussions about ranking people’s moral standing on a chart and calculating what suffering can be inflicted upon them without it being morally relevant.

Being ethical operates entirely differently. Whatever kind of weird technological enhancement would make a human “morally superior” to another wouldn’t grant the superior a right to oppress the inferior. If someone, no matter how enhanced they are, was genuinely morally superior, then the thought of oppressing or controlling someone who was already above whatever minimal threshold of powers was necessary to make their suffering immoral under the old regime wouldn’t even occur to them. That’s what moral superiority is.

It’s why I find so much talk about ‘enhancement’ among transhumanist speculation unproductive. It’s a vagueness that manufactures philosophical problems of no import. The question “What if there were a class of humans with the moral right to oppress other humans?” isn’t relevant for technological enhancement discussions. It’s relevant for discussions of the legitimacy of monarchies or workers’ rights.

Moral standing isn’t the kind of thing that you gain when you add a cybernetic implant. You become more moral when you change your character such that the desire to harm someone for your own benefit becomes abhorrent to you. The problem with philosophy that uses terms like ‘enhancement’ so vaguely is that it makes you think moral standing is something like aerobic capacity, which biological or technological enhancements can presumably improve by various degrees.

You end up fundamentally misusing and misunderstanding important ideas, so you end up talking in circles about nothing of any substance.

The Left in Danger of Its Old Stupid Mistakes Again, Jamming, 13/12/2014

I’ve said before on the blog that I’ve always considered myself a person of the left, and embraced all of its conflicts, misunderstandings, paradoxes, and idiosyncrasies. Even though I’ve sometimes been told that I shouldn’t be a leftist, it’s where I always fall when it comes to my political beliefs. Besides, it’s not as though there isn’t plenty of room for (and a long history of) people on the political left wildly disagreeing with each other about incredibly important topics.

I thought about the left’s tendency to internal divisions, critique, and also its short memories when I read this article at Jacobin that a lot of my online friends were linking over the last couple of days. It’s an interview with the scholar Daniel Zamora about Michel Foucault. Zamora has published a book of critical essays on Foucault, and its central angle is that, despite being one of the most popular and influential thinkers on the modern intellectual left, Foucault was actually a free-market neoliberal.

How, I can imagine you asking, does that make any bloody sense at all? Well, it goes something like this.

As a gay man who embraced many of the
(unfortunately for him, quite dangerous)
subversive activities of that culture, he
understood the history of the state's power
to use police and prisons to control a
population's sexualities. A fundamental
aspect of liberation is freeing yourself
from a control apparatus.
Near the end of his life, in the early 1980s, Foucault came to see economic liberalism as a way to overcome the disciplinary social control of state domination of industry, civil life, health, and other essential aspects of human life. This was why he supported many figures in the French Socialist party whose policies embraced a market-liberalized social democracy. There were similar reasons why his contemporary Gilles Deleuze, a man with whom Foucault shared many philosophical ideas, supported François Mitterrand. Deleuze, however, would live long enough to regret this. Foucault died in 1984.

Zamora doesn’t denounce Foucault’s work, as such a thing would be too much for a respectful philosopher (though if he were a click-chasing blogger, it would be a different story). Foucault also deserves respect, says Zamora, because he’s one of the few thinkers on the left who actually engaged with the ideas of Hayek, Friedman, and their ilk instead of summarily dismissing them, as too many everyday leftists do.

Zamora also makes a very insightful point that the long-term conservative revolution of neoliberal economics shifted the priorities of the welfare state from combatting overall inequality to fighting the most desperate poverty. So the rich could be as rich as possible as long as society maintained a general standard of living that kept people from a totally abject state. 

Zamora correctly describes this as making many of the government institutions that buoyed and supported the working class illegitimate. And slashing the welfare state has resulted in working people drifting closer to the borderline of poverty, living a largely hand-to-mouth existence. We are now the Dollarama generation, and a major intellectual hero of the left, Michel Foucault, endorsed the economic ideas at the heart of this shift.

But I don’t think Zamora is entirely fair to Foucault, and I don’t think he understands the full depth of the problems with the world that neoliberal economics replaced. Zamora expresses too much faith in the state as an institution to restore economic justice and fairness.

Foucault endorsed a vision of economic liberalism that broke down the immense power of the state to monitor and control its population down to each individual’s biological functions. Economic liberalization disconnects people, freeing them from a state apparatus that, while it does provide material services, also controls people.

You can also see this more purely conceptually in the discussions of deterritorialization as liberation in Deleuze and Guattari’s Anti-Oedipus. Zamora may be nostalgic for the welfare state of old, but that also means being nostalgic for a political principle of governance as population control. For the generation of Europeans whose teenage years were consumed by the war against Nazi totalitarianism (as Foucault’s and Deleuze’s were*), the power of the state to destroy individuality was the ubiquitous issue of politics, a spectre haunting the critical thought of the left.

* Deleuze himself lost his older brother Georges in the war, a French Resistance activist who was captured and disappeared in a Vichy government prison camp.

Foucault, Deleuze, Guattari, Negri, and the other post-structuralist philosophers haunted the mainstream left of the 20th century with their more anarchistic visions of a left that didn’t rely on the state. I found one reaction to Zamora’s interview at a Washington Post blog, where Daniel Drezner writes that contemporary libertarians can learn a lot from Foucault. They certainly should learn a lot from Foucault: like how to become anarchists.

Ultimately, libertarian and neoliberal values will be defeated by the one problem that a society which ignores the problems of growing inequality always faces: the mass frustration of a huge majority of allegedly middle class people who can no longer afford any basic goods beyond the Dollarama level of quality. In allowing a collection of small oligarchies to concentrate almost all of Earth’s wealth in a few thousand hands, economic liberalization has failed.

But the vision of the people’s state has also failed. The generations who experienced Western totalitarianism in the Nazi and Stalinist regimes saw that. We see it today when we look at the militarization of the police, the American prison system, and the horrifyingly ubiquitous surveillance state institution we live under. Zamora is right about the bankruptcy of economic liberalism, but if he wants us to restore our faith in the state as the means of our liberation from this new regime, he’s a fool.

The Materialist Mysticism of Clarice Lispector, Jamming, 11/12/2014

This week, I finished reading a novel that depicts serious spiritual ecstasy without any religion whatsoever. Let me set the context first.

You know what bugs me about the “New Atheists” that too many people find impressive? I mean, aside from all their racism. It’s the sheer reductive blandness of their ideas. They commit the juvenile error of believing in a vision of science as a perfect system of knowledge that discovers only absolute and perfect truths about the world, which is bad enough. 

But even beyond this, the New Atheist picture of the world is as a place of pure, simple existence. The only events in the world are the trivial and mundane. Scientists tell us how the world works, and it’s the simple collisions of billiard balls. Anything more profound than that is an expression of their hated religion. For Dawkins, Maher, Harris, et al, understanding your identity through any kind of conception of the divine is a sign of idiocy.* They’re the kinds of dunderheaded atheists that unfortunately arise in an era where moronic Biblical literalism is more popular than it’s ever been before.

* Even here, I’m tactfully leaving aside their contention that anyone who steps inside a mosque is a mass-murdering child rapist. 

If you read Lispector's The Passion According to G. H.,
and I strongly recommend you do, it seriously helps to
imagine the narrator as a Brazilian Lucille Bluth.
Religion is most profoundly about frameworks to develop your identity through dialogue with the divinity of existence. This ethical dynamic accepts (like all sane religious thinking) pluralism of religious tradition and belief. How each person understands divinity in existence is unique to each person. Both oppressive religious institutions and New Atheists understand religion only as a series of doctrines for the masses to swallow whole for the sake of social and political conformity. Understanding the divinity of life is a more mystical path.

I recently finished reading a landmark novel of Clarice Lispector, The Passion According to G. H., a short, dense work that consists entirely of an interior monologue of a mystical experience in the most unlikely place. The G. H. of the title is a wealthy woman on the cusp of middle age who lives in Rio de Janeiro. I already felt happy to read an existentialist story whose "generic human" was a different kind of person than an early-middle aged white male. She begins the story in a typical moment of self-absorption, annoyed at having to clean her apartment herself for the first time in a while. Her live-in maid, whose name she has trouble remembering, has left her job and moved out. She’s stuck cleaning her room.

G. H. realizes, on discovering a large wall-drawing, that her maid hated her. When she discovers a cockroach crawling out of an armoire, she collapses, spiralling into an experience that I can only call a communion with matter itself. The cockroach’s empty black eye reflects its entire species’ history. She understands how disgusting living matter, indeed the very nature of even being alive, is. Squishy, moist, and oozing.

Everything is pardoned because everything is alive, and life requires the consumption of other life to continue. The inevitability of death and grotesque violence is a casual revelation. She crushes the cockroach in the door, and through the long moment of its death, she begins to experience what feels like hours in seconds. Death is inevitable, a fact that deserves neither blame nor punishment. Reality is, in Lispector’s words, neutral, and the universe is a profound indifference to both suffering and joy. 

As a toddler, Clarice Lispector fled
Ukrainian pogroms with her family
to Brazil, where she grew into one
of her country's greatest writers.
We are interdependent bodies that eat and are eaten, are consumed to be reformed in a new form. This is a world without fear, because nothing is ever truly destroyed. The world is a vast and complicated continuity of change, its renewal a constant now. Humanity is a sad over-complication of reality, creating angst and terror from the egocentric songs of self-consciousness. Even the nature of our existence as individuals separate from the world around us is a mistaken perception, an error in judgment resulting from our over-complicated nervous systems. If there is a hell, says Lispector’s G. H., it’s an all-too-human life of constant worry and fear.

All of us will die eventually, and dying will be a relief from our anxiousness. Existence as dirt and rocks lets us experience slow, calm attitudes, the relaxed existence of simple bodies. Humans have so many ways to be irritated that everything in the world constantly annoys us. Death, simplifying our material, is a welcome calm after the continual busyness of organic living. 

“The world interdepended with me – that was the confidence I had reached . . . Life is itself for me . . . And therefore, I adore.”

I don’t really think my spoilers tag was appropriate for that.

Lispector’s Passion should be read if you ever feel yourself agreeing too much with a Richard Dawkins or a Bill Maher. Instead of an atheism that’s moronic and simple, Lispector can give you the foundation of an atheism that delivers all the profundity of religion at its best, even as popular religion falls into the idiocy of literalism. Biblical literalists and New Atheists deserve each other, and in my more resentful moments, I'll be glad to watch them snipe each other to death for my amusement.

I’ll be with the mystics.

University Philosophy’s Complicity in CIA Torture, 10/12/2014

Today’s post was originally going to be about a beautiful novella by a writer I discovered only recently. That post is now scheduled for tomorrow, because of how the release of the United States Senate Intelligence Committee's report on the CIA’s torture tactics under the Bush-Cheney Administration affected me. Let me put it this way.

I’ve taught classes where I’ve argued that torturing prisoners is the right thing to do.

Here are the circumstances. Ever since September 11, introductory ethics courses at McMaster’s Philosophy Department have included a couple of articles discussing different perspectives on whether torture is morally right. Whenever the courses covered issues like this that were relevant to the political climate of the time, they usually included one article taking a pro stance and one taking an anti stance.

It terrifies me that Dick Cheney feels no guilt for anything he's done.
The pro-torture article that was usually included in that section would straightforwardly examine the moral consequences of the ticking bomb scenario. Every instance of real-world torture of a terrorism suspect would be described as if it were on an episode of 24, where we know the torture victim is an actual terrorist, and the pain of torture is immediately effective in breaking his ability to hide information that he genuinely knows about an imminent and deadly threat to a large number of innocent people. 

I always used to lead my tutorial students through a strong critique of this entire scenario for being the ridiculously unrealistic hogwash that it is. But in the educational context I was working in, this argument was taken seriously. Course lectures and textbooks presented this argument fairly, on its own terms, according it the same respect as essays by Peter Singer, John Rawls, and Ronald Dworkin. Hell, the ticking bomb scenario was the actual public rationale that the executive branch of the United States government used to justify its torture policies.

Now, I read about CIA operatives waterboarding people nearly to the point of permanent brain damage, and about how waterboarding was an everyday practice at Guantanamo and the Salt Pit prison complex in Afghanistan. Food slurry was forcibly inserted into prisoners’ digestive tracts through their rectums. People were suspended by their wrists for hours upon hours, and imprisoned in boxes barely bigger than a coffin for days without relief. CIA operatives said the purpose of these techniques was to attain “total control over the detainee.”

None of the information that the tortured prisoners divulged was usable at all. Most of it consisted of outright lies, screamed in the throes of horror simply so the pain would stop. Some prisoners who suffered these hideous tortures were completely innocent people; they were simply casual acquaintances whose names previous prisoners threw out in the midst of state-sanctioned violation.

I feel genuinely regretful about having led tutorials and sat through lectures that took these arguments seriously. Even in the abstract, ticking bomb scenarios were serious means to excuse and endorse horrifyingly cruel and inhuman treatment of people who were at the mercy of a state army and clandestine warfare service. But in the interests of fair philosophical argumentation, we presented both sides of this debate as if they were equal in rigour and standing. It was a convention of what is taken to be the disinterested standpoint of philosophy: examining arguments from an unbiased perspective and deciding between them based on their objective merits.

But it is itself monstrous for an authority figure in an institution of higher learning to refuse to take a firm stance on one’s government carrying out acts so heinous that the people who enact them cannot stop themselves from becoming monstrous. Treating torture as if it were any other matter of political or moral philosophy normalizes it, makes it appear to be one among a number of alternatives that we can calmly weigh in impartial contexts.

Yet being human renders impartiality here utterly obscene. Insofar as I was part of an educational institution that made torture appear to be an entirely normal philosophical question weighed according to a simple for/anti balance, I played a part in normalizing a crime against humanity.

We owe it to ourselves and each other to act with more care.