Knowing Knowledge IX: Knowing Necessary Possibilities, Dialogues, 30/04/2015

This is the first part of the last instalment of my exchange with the University of Warwick's Steve Fuller about his latest book Knowledge: The Philosophical Quest in History. I went on for a bit longer than I usually do, so we've split my initial critique into two halves. Steve responds here to the first half of what I wrote him last week; he'll respond to the second half in a day or so. As usual, I write cheeky responses in my photo captions.

You know, I've never actually used a picture of myself
to begin these posts; I've always used a photo of Steve.
Dear Steve,

Although I loved our explicitly political discussion of the last couple of dialogues, I want to dive into the final instalment of our exchange with some headier philosophy. I particularly want to discuss the power of counter-factual reasoning. Even though you consider this a foundational method for a progressive philosophy of science, I think it eclipses even your own vision. Counter-factual knowledge, I'd go so far as to say, makes a lot of your own vision obsolete.

The conclusion of Knowledge: The Philosophical Quest in History returns to the vision on which your early chapters focussed, the unity of science in humanity's conception of ourselves in the image of God. Your advocacy of this idea remains a point on which you and I will, I think, always disagree. But once I reached the end of your book, I had many more reasons for my disagreement.

The first such reason I want to discuss is the pragmatics of consilience. Scientific research, discoveries, institutions, and knowledge range over, as you describe in Chapter 6, all things for all people. Science is about the investigation of the world, of the systems, relationships, and bodies that together constitute the world. Its mode of knowing the world is through investigating what must happen and what can change.

Counter-factual knowledge, in other words. This model of knowledge and reasoning achieves, through our own power alone, the “middle knowledge” that you say in your conclusion is the foundation of modern science as it arose in the West, in the Medieval cultural milieu.

The highest level of necessity in human reasoning is the strict necessity of syllogistic reasoning. A is B; B is C; therefore A is also C. There is another mode of knowing that is far more contingent: it still counts as knowledge, but carries no necessity at all. When I first met you in real life at the University of Toronto, I made note of many facts: your height, that (unlike in many of the photos I had previously seen) you were now clean-shaven, and the tone of your speaking voice.

The essence of the divine is also that all existence is
divine, and there's no hierarchy of being separating
me from this bear. None of us are superior or
inferior. We just do different things.
Between logical necessity and contingent fact is what you call Middle Knowledge, the knowledge of physical necessity, the knowledge of the laws of nature. In the Medieval milieu from which modern science emerged, a Christian self-conception was required to ground our claim to genuine knowledge of physical necessity.

If humanity were entirely profane, then we would never be able to grasp any necessity in the universe at all. We’d be mere animals, drifting and distracted from moment to moment and event to event. Yet we aren’t so entirely divine as to have perfect knowledge of all the facts, changes, and relationships that constitute the universe. This would be the knowledge of the divine plan of being, where every fact is understood in its necessity. 

I think of this along Spinozist lines, partially to needle you a little, given your previous comments to me that you could never be a Spinozist. But it also makes sense to me that the existence of the divine is necessity. It’s the space where I see some of our deeply held conceptions of divinity converging, one of the few spaces where they really do. As we understand the relations among all the bodies and processes of the universe, we become more divine ourselves.

But here’s where you and I differ. I don’t see why we need to be partially divine already to develop knowledge of physical necessities. Or rather, humanity’s status as the divine animal isn’t necessary to ground our capacity to know physical necessities. The reasoning structure of counter-factual knowledge, which you describe in delightful detail in Chapter 6, is a guide to such development. And we don’t need any underlying divine nature (at least no more divine than the rest of material existence) to have such a power. We figured it out ourselves.

A question related to the concept of humanity’s nature as the image of God still nags at me, and I think it always will. Why must the fact that the universe is intelligible imply that there is an intelligent designer? There is nothing about the existence of order which implies that such order is the product of design. Existence alone constitutes order through the dynamic relations of processes. Order figures itself out through time.

This is what I ultimately find so frustrating about your book and your larger philosophical projects as a public intellectual right now. I simply don’t understand why a simple notion like the logic of counter-factual knowledge can’t be a unifying principle for science, and why you think only the strong, deep concept of humanity being made in the image of God can do the job.

Groundbreaking, both in theoretical
physics and the ethical progress of
humanity, India's first great physicist
of the modern era, Satyendra Nath Bose.
It isn’t just my own non-Christian sensibility that remains skeptical of whether this concept can hack it; the concept would likely meet resistance from the non-Christian sensibilities of many practicing scientists around the world. I’m not just referring to the atheist or agnostic scientists who were still raised in a broadly Christian culture. I’m referring to scientists in Africa, Asia, and the Arab World who were raised in a Muslim religious tradition, or the large number of scientists throughout the West who come from long-standing Orthodox Jewish communities. The same goes for scientists whose cultural backgrounds are Hindu or Buddhist, or shaped by the a-religious cultural philosophies of China.

All these people throughout the world would be hesitant to throw in on a unifying concept for science that is rooted so firmly in a Western, European, and Christian theologico-cultural milieu and tradition. What comes naturally to you with your Jesuit education would be culturally alien to someone with an upbringing in Hindu or Confucian institutions. 

What may have been the unifying principle of science at its origin is now a concept that will only sow division. In the words of a wise man, who I think was either Thomas Wolfe or B. A. Baracus, you can’t go home again.
• • •
Dear Adam,

I’m glad that you picked up on the importance I place on what the later Scholastics called ‘middle knowledge’, namely, our capacity to reason to counterfactual states of the world based on our empirical knowledge. In this way, we might be able to bootstrap our way up to God’s universal knowledge. 

Put in more modest and secular terms, we might come to know the laws of nature without having to experience every moment that is subsumed under those laws. Thus, experiments allow us to vary the conditions of the world – in our minds, in the lab and, increasingly, on the computer – so that we can simulate the requisite universality. Of course, experimental outcomes are notoriously fallible, and I want to say something about the significance of our fallibility later. But first I want to address the need for an intelligent designer who sets the gold standard against which to judge our efforts in this direction.

Steve Fuller, my formidable opponent.
The first point to observe is that unless you believe that there is a being who could know all things in all space and time, it doesn’t even make sense to attempt to get at ‘laws of nature’ in the modern scientific sense – unless, of course, you were writing fiction. But the quest for laws of nature requires more than simply belief in such a being. It also requires that we are already sufficiently like that being that it is reasonable to think that the quest might just succeed. This link between us and the intelligent designer is especially important because the search for laws of nature, while building on ordinary empirical knowledge, quickly takes us away from it. 

This is why I just suggested that the default human attitude to this project is to regard it as a genre of fiction. But experiments are not big video games or theatrical sets. They are models of physical reality. Without the theological scaffolding, such a conclusion would seem sheer lunacy – and this is how, I imagine, Aristotle (but not Plato) would have understood today’s science.

I believe that people fail to see this point because they haven’t considered what other rational grounds they might have to search for laws of nature prior to knowing any of the consequences that have made the project so empirically worthwhile over, say, the last four hundred years.* 

* ‘Curiosity,’ the snake oil of naturalized epistemologies, doesn’t explain the intergenerational persistence of science in the face of its own long-standing unsolved problems and collateral damage to the world. 

The answer would probably be none. And that was precisely the state of mind in which the original Scientific Revolutionaries found themselves. The metaphysics of Christianity (especially the imago dei doctrine) led them to conclude that only specific Christian institutions – especially the Church – held them back from realizing something that their religion already told them was in principle within their reach, namely, absolution of Original Sin and reunion with God.

But this conclusion was ultimately a leap of faith on their part – and perhaps it is no accident that Pascal’s Wager as an argument for the existence of God emerges at this time. Albert Hirschman tells a similar story in The Passions and the Interests about the early acceptance of capitalism as an economic ideology before it had proven itself as a reliable wealth-producing engine.

You ask why Christian theology needs to be dragged into the logic of counterfactuals, and the answer is that there is no obvious ‘logic of counterfactuals’ independent of specific metaphysical assumptions. You’re wrong to think that Christians are alone in possessing the relevant basic metaphysics, though Christians have done the most to develop it. 

The image of David Lewis, metaphysician of necessity.
As Abrahamic religions, Judaism and Islam also grant pride of place to humanity in divine creation based on the Garden of Eden episode and God’s uniquely direct address to humans. True, Christians have honed this point into a strong imago dei doctrine, which Judaism and Islam regard as controversial if not outright heretical, especially when it hints at the apotheosis of humanity, the point at which Christianity potentially slides into transhumanism.

This is light-years away from the metaphysical starting point of the great non-Abrahamic religions, which do not grant any cosmological privilege to the human condition whatsoever. Of course, there have been great scientists from India and China who have stuck to their native beliefs and not been converted to something more Abrahamic. 

However, I would argue that these scientists got the relevant metaphysics through their ‘Western’ scientific training, which in turn has modified how their non-Western religiosity functions in their overall world-view. Sometimes anthropologists speak of this phenomenon in terms of ‘compartmentalization’ but it may be more subtle.

A thoroughly secular debate over the exact metaphysics that underwrites counterfactuals stole much of the limelight in analytic philosophy in the 1970s, courtesy of Saul Kripke and David Lewis, two sharp-shooting Princeton logicians who overshadowed colleague Richard Rorty as he was putting the finishing touches on Philosophy and the Mirror of Nature. This debate left a strong impression on me, especially as it was presented in one of Jon Elster’s early books, the brilliant Logic and Society. In terms of how we’ve been discussing matters, the sense of ‘necessity’ that concerned Lewis was ‘logical,’ whereas Kripke’s was ‘physical.’

Basically Lewis saw counterfactuals as self-consistent non-actualized states of the world, full stop. He wasn’t particularly concerned with how to get to such states of the world from the actual one. In fact, whenever Lewis discussed the ‘closeness’ of some possible world to the actual one, he would be simply referring to the number of properties that they shared. 

His theory was not attuned to what economists call ‘the theory of the second best,’ whereby the second best policy may be radically different from the first best because what makes the first policy the best is how all its parts hang together. And if you’re missing some of the parts (or they’re not in the right proportion), then something completely different is better. This point, alien to Lewis’ purely logical analysis of counterfactuals, explains why the middle in politics is so often squeezed by the extremes.

The image of Saul Kripke, the 20th century's other
great metaphysician of necessity.
Kripke was more interesting. His theory of possible worlds fits Bismarck’s famous definition of politics as ‘the art of the possible’. Kripke insisted that unrealized possible worlds had to be based in the actual world. This means that an explanatory narrative of some sort needs to be spun. In particular, we might recount how a possibility had been prevented but perhaps could be reactivated in the future. 

All of this would involve looking at the resources available at various times for making things other than as they turned out to be. On that basis one could say how ‘far’ or ‘near,’ say, a desirable world was from the actual world at a given point in history – and this distance may vary, not necessarily always getting closer or farther away. We may well reach a point in the future where we can effectively recover a lost opportunity in the past.

To be sure, Kripke didn’t concern himself with any of the above details. He was simply interested in defining the sense in which it is reasonable to talk about ‘possible worlds’ as something other than pure fiction. And for him the bottom line was that a possible world is a possible version of the actual world, and hence a ‘counterfactual’ bears a stronger relationship to the ‘factual’ than the word ‘fictional’ normally implies. 

But for me Kripke made only the first moves. Navigating between possible worlds has been central to my own thinking over the past quarter-century or more, and it is developed in some detail in Knowledge: The Philosophical Quest in History.

However, to take this line of thought seriously is to commit to the idea that it makes sense to imagine an intellect who can scope out all the various contingencies, based on trying to realize some ideal plan within the budgetary constraints that matter imposes, i.e. variable but limited resources. In other words, the intellect is an optimizer, who prioritizes goals, identifies appropriate trade-offs and adjusts to vicissitudes. God would have all these possibilities programmed into his intelligent design algorithm, but we humans normally experience it as history, in which case the point of philosophy and science is to discover the algorithm and, in the process, realize our own divinity.

Even a dastardly old autocrat like
Otto von Bismarck could say
something reasonable sometimes.
I still prefer the more recent one.
This is a brutally theological way of justifying our relationship with God – even by 17th century standards! But I think that, in secular form, it is also what Bismarck had in the back of his mind when he declared politics to be ‘the art of the possible’. He got it from Hegel, and Hegel reached back through Leibniz to Plato’s original conception of the philosopher-king, a member of a class of handpicked individuals who are trained to think like gods in case the day comes when they must function in that capacity.

To foreshadow my response to your final salvo on what you regard as my ‘political naïveté,’** consider two senses in which politicians may be said to ‘respond to events’. One reading of this phrase makes it appear that politicians simply adapt to circumstances, one after the other, without any sense of principle whatsoever. When we say that politicians are just in the business of staying in office, that’s what we mean. They just do what it takes to get the right number of votes. 

** Editor’s note. That critique is coming in the second half of the finale.

However, an alternative reading suggests that, when responding to events, politicians have already anticipated the possibility of those events and hence are already prepared to do the appropriate thing to keep the forward momentum going on the ideals which they ultimately wish to promote. And this may involve what, on the surface, looks like a change in course of action.

Now, politicians may do all this more or less successfully because, in the end, they’re just politicians and not gods. But this is the aspiration. It also gets us back to a point I raised earlier, namely, that the Machiavellian maxim ‘the end justifies the means’, often used to damn politicians’ lack of principle, is in fact the modus operandi of how political principle is implemented in the world. In this respect, we might wish to give politicians a bit more credit for intelligence when they say that their plans are working even though it looks like they’ve made a U-turn at a crucial juncture.

One person who I think understood all this very well was the great US theorist of journalistic ‘objectivity,’ Walter Lippmann. He saw the journalist as someone whose presentation of the news should reassure the public, in order to allow politicians the private space to race through various hypothetical scenarios as they decide what to do next: a calm exterior masking a dynamic interior. 

This was the process that at the height of the Cold War Stanley Kubrick’s Dr Strangelove immortalized in satire and Erving Goffman generalized into a sociology of the ‘front’ and ‘back’ regions of everyday life. In public relations, it’s called ‘impression management’ and when done properly it is a means to an end, not an end in itself.

This is not a man who you trust when he's calm. It
means he isn't afraid.
Lippmann’s divided self for political conduct may be seen as the mirror image of God’s dual self-presentation through nature: Instead of a calm exterior, nature inspires authority through its surface volatility as something ‘beyond our control’. However, beneath that volatility is a set of laws which science is in the business of discovering – perhaps in a less frantic and more rigorous way, yet nevertheless along the same experimental lines as the juggling of contingencies that transpire behind the political scenes so jealously guarded by Lippmann.

Put it this way. Both the politician’s appearance of calm and nature’s appearance of volatility are deceptions of a sort. The politician is really less placid than he appears, while nature is really less unruly than it seems. The frantic activity behind the appearances in the first case is in search of the secret to the underlying order in the second case. 

It may be that the various controversies surrounding ‘climate change’ are in the process of unravelling this delicate balance of knowledge and ignorance that has enabled something like Goffman’s front/back stage distinction to manage our understanding of both politics and nature in the modern era.

However, I don’t wish to dwell on this point here, but turn instead to something that drives the prominence of counterfactual thinking in my work. It’s what I take to be Hegel’s great counter-intuitive point about history. To have a rational account of history, you need to assume the arbitrariness of the decision points after which someone has won or lost – and as a result history goes in one direction rather than another. Your rationality lies in how you cope with the arbitrariness, either as winner or loser.

After all, the winners aren’t guaranteed indefinite success simply by repeating their winning actions, and the losers might have eventually won, given a different moment of decision, different resources, different evidence, different institutional arrangements, etc. Indeed, descendants of the losers might well overturn the winners in the future.  But it all depends on how these parties learn from their world-historic success or failure.

I can't help but think that we're only made in the image
of God because we looked in the mirror when we saw
the image. Count the fingers, Ludwig.
And Popper would agree with all this too. After all, Popper never said that losers had to roll over and play dead! Rather, they had to re-organize themselves so as to overcome the original criticism and do things of value that their opponents cannot. This is not as hard as it might first sound, if you consider the arbitrariness of the original moment of decision.

Perhaps the biggest misunderstanding that people have about Hegel – and here I’m thinking of Thomas Carlyle’s ‘pop Hegelian’ view of the ‘hero’ in history – is that there is some luminous relationship between a world-historic agent and the ends of history itself.  To be sure, the young Hegel regarded Napoleon as ‘the man of the hour’. But from the standpoint of world-history, Napoleon was simply a signal, a marker, a way station, not necessarily an exemplar of things to come. 

This point is especially controversial in a Christian context, where Christians have tended to think that Jesus wanted his followers to live as he did, with his overriding sense of social justice on the basis of which he placed his own life at risk, which eventuated in his Crucifixion. Thus, church history and dogmatic theology largely consist of stylisations – and, dare I say, dilutions – of the life of Jesus, as recounted in the Gospels, designed for easy mass consumption.

Given this rather flat-footed but institutionally effective strategy for ‘following in Jesus’ footsteps,’ it is easy to see why Hegel’s theological followers – the ‘Young Hegelians’ of Marx and Engels’ German Ideology fame – were considered so politically subversive in the 1830s and 1840s, when they proposed ‘naturalistic,’ ‘symbolic,’ and otherwise ‘demystified’ readings of the life of Jesus.

However, my point is somewhat different from theirs. I am not so worried about what it would mean for the legitimacy of Christianity if the Gospel accounts of Jesus’ life turn out to be substantially false, thereby undermining the epistemic foundation of, say, the Petrine papacy. Rather, I am more concerned with the meta-level question of what exactly about Jesus’ life (even granting our accurate knowledge of it) might be worth carrying forward as ‘exemplary.’ 

You don't have to be Christian to learn from the life of
Jesus. I've learned from it, for example.
This was a question that the Franciscan order has struggled with throughout its history. As a result, it has often found itself on the heretical side of things. After all, everyone’s life is a product of its time, and as time goes on it becomes intuitively harder to draw clear lessons from what Jesus did in his day to what we should do in ours. Call it the ‘problem of existential induction.’

Finally, let me close this round on something you and I may agree on: Academic training normally blinds one to the problem of existential induction, as it effectively gives one a vested interest in the future imitating the past. This leads academics to overestimate their own powers of judgement, which encourages them to dismiss empirical anomalies and other disruptions to the status quo as simply ignorable local disturbances, perhaps to be blamed on idiosyncratic personalities.

Here academics confuse being well-informed (i.e. knowing the trends and having the right views) with understanding the full potential of the fields that they're in. To understand the full potential, one needs to think more 'counterfactually' about how earlier initiatives managed to fail. Normal academics presume that it was because they were shown to be conclusively false. 

But they may have failed simply because the proponents did not try to mount a 2.0 in light of the first wave of criticism. And by 2.0, I don't just mean ad hoc hypotheses, but a reasonably substantial reconfiguration that enables the supposedly defeated theory to say something new that the opponents cannot. This is why I believe that the only way to rationally mount a future-oriented programme is by learning from the past.

It Depends on Who You Know, Jamming, 29/04/2015

I loved them before I got into communications, and I love them even more now that I know exactly how much hard work goes into them. Corporate hashtag disasters. It sounds petty for me to bring this up, but there's actually an intriguing social-epistemic concept underneath my juvenile joy in watching someone else's idiocy.

Rachel Notley, leader of the weird, twisted version of
the NDP that exists in Alberta, hands Premier Jim
Prentice his arse in their recent debate.
This joy combines with my love of political horse races as I keep an eye on the Alberta election, both for my own dorky entertainment and for the genuine political stakes for my country. If Jim Prentice, old guard of the Harper revolution, returns home with a hero’s entrance only to preside over the Conservative party losing the premiership of the movement’s heartland province . . . well, I’ll drink to that.

After slaughtering him in the leaders' debate last week, Alberta NDP leader Rachel Notley started gaining in the polls to the point where it’s becoming a serious possibility that she might win a minority government. So Premier Prentice turns for an answer to his campaign manager Randy Dawson, senior partner at the Navigator PR firm.*

* But employed as an individual, of course. Although one conservative activist claims to have created it herself.

His answer is #LifeWithNDP, a hashtag for people to describe all the horror stories of having to live under the governance of a social democratic party. Terrifying tales of waste, high taxes, and leadership stupidity would surely follow. And they did, for a little while.

Because it wasn’t long before people who generally liked the NDP noticed the hashtag and started using it to spread some messages that are very well-known in circles that sympathize with the New Democrats. Like their support of a well-funded education system or fuelling a prosperous middle class by taxing the wealthy at a rate that can fund state-of-the-art public services, like transit, public electricity infrastructure, and road repair.

Reading through conservative #LifeWithNDP tweets,
a lot of them had a very negative, harsh, almost
bullying tone. I think it's an advantage of the NDP
that they rarely sound like bullies and more often
either pleasant or filled with righteous rage for justice.
Who ultimately is in the right doesn't matter here, although you can probably tell by now which side of this political debate I support. What's important for communications practitioners to know is that any hashtag campaign whose response structure is open-ended is not only uncontrollable, but inherently reactive.

Here's what that means without the philosophical language:** If you design a hashtag campaign about anything controversial, you have to make sure there's only one conceivable way to use its words. Because your opponents in the context of a social media campaign have the same power, as individuals, as you do: one feed, one voice.

** Which sounds a lot like business buzz-speak more often than a lot of us with a philosophical education would like to admit.

An example of this that we discussed in my classes at Sheridan was #McDstories. The idea was that people would use this hashtag to describe different experiences of the awesome time they had at McDonald’s. Of course, the hashtag was flooded pretty quickly with stories of people being treated rudely, eating terrible food, or being horrifyingly grossed out at McDonald’s. Even after McDonald’s pulled the promoted tweet they used to release the hashtag, people kept using it for days to bash the company.

It was easy for us to know how badly that was going to turn out. But the people who designed that campaign weren't people like us. They were high-ranking employees in McDonald's public relations. McDonald’s didn't give them gastro-intestinal illness like the rest of us; it paid their mortgages. These were people whose very identity was defined through their careers in McDonald’s. 

See, it's hard to accept that some
people don't have the same positive
attitude to a company when they
don't work for it.
They would have had only good opinions about McDonald’s, and would have been unable to conceive of anyone who didn’t share them. Everyone they saw every day loved McDonald’s. That everyone you see every day is also a McDonald's employee is a thought that doesn't always make it into the forefront of your mind. Human intuitions tend to confirm our biases, not critique them.

So it is with Jim Prentice, Randy Dawson, and the Alberta Progressive Conservatives. How many people in their social circles do you think have any positive opinion of the New Democratic Party? Even if we accept Sheila Gunn Reid at her word that it really started with her, you can ask the same question. I’d say absolutely none. Regarding this question, they lived in a bubble of confirmation bias.

A bubble thicker and harder to burst than the ones in my stomach after a meal at McDonald's. #McDstories.

It doesn’t matter what the ultimate ratio of pro to anti New Democrat was in the #LifeWithNDP hashtag. The fact that it was open-ended gave the conservatives' opponents the space to fight it, and the result was that the campaign became a controversy in the social media ecology instead of a blunt instrument.

Maybe a more enlightened, media-savvy government could learn the powers and limitations of social media before using it. #LifeWithNDP

Behind the Buzzword of the Influencer, Research Time, 27/04/2015

I’ve still been reading some basic sociological network theory, even though I’ve mostly been blogging for the last few weeks about Chinese philosophy, sci-fi, and my exchange with Steve Fuller. But the final couple of weeks of my college program, no matter how much I tried to space the work out over a longer time, were still pretty packed. So my side reading had to step away from communications simply to give my brain a rest from the intensity.

I know this means that I just said that I read histories of ancient philosophy and argue on the internet about the nature of art for fun. I'm a nerd. You really should have noticed this by now.

Most of us typically think of companies as organized
according to a hierarchical structure. And they are. But
most of the actual work is done through individuals
communicating informally and unofficially across
departmental divisions.
The reason I’m reading sociological network theory is to apply it to my communications career. Network theory is incredibly useful for audience analysis, understanding how the people you're communicating with relate to each other and integrate their relationship with you and your organization into their daily lives. 

One field I’d like to work in over this career is internal communications, because these are the information flows throughout an organization that control what it actually does. They constitute a company’s nervous system, and the internal communications workers are metaphorically its spinal column. You have to know everything you can about the activities and cultures of the different parts of an organization, and apply that knowledge to ensuring the harmonious working lives of everyone in it.

Internal communications needs social network theory because the acts of communication among people throughout an organization that actually get work done don’t flow through the channels of reports and orders up and down its hierarchy. It’s in the informal, casual conversations among departments where people actually learn what they need to know to accomplish what they must.

However, it's very difficult to figure out, among all the many casual relationships of a workplace, which ones are the most critical in the activities of the business. Well, you could figure it out if you applied a massive statistical analysis to a data set created from constantly surveilling everyone in the organization without their knowledge. But that’s a little unethical.

I’m not 100% sure a communications practitioner in a company could get away with that. Let alone whether the executives and shareholders would even give you the money.

Gilles Deleuze and Félix Guattari were
thinkers who were very influential to me in
how I analyze and understand the power of
transversal communication and activity.
Social network theory and research reveals that the most important people in any organization aren’t the chief executives, even though they get all the press. Executives, especially as they increase in rank and overall responsibility (and most importantly, liability) for a company’s activities, get the press coverage and shoulder the risks of failure. If the ship goes down, they’re supposed to go with it, which I think is why there's such popular outrage when finance execs receive huge severance payouts as their companies fail and thousands lose their jobs.

But analyzing the daily interactions of workers in large organizations shows that executives aren’t the most productive members. Those are workers in middle-ranking positions who personally interact with many other departments to coordinate activity, manage complex projects, and just plain make friends. These people build their reputations all over the company as social network brokers, people able to mobilize people and knowledge* for whatever needs to be done.

* Or in more buzzy corporate language, cognitive resources. Although I'm more a fan of just saying ‘knowledge.’ 

Such a person has been called an influencer, because they influence their peers' opinions and actions. At least this is the term as it was introduced to me at a conference I attended last Fall at Toronto's Centennial College, as I began learning about intellectual trends in the business world. Yet I don't think the term 'influencer' really gets to the heart of how such a position works in internal communications and organizational social networks. A better, if less buzzy, name is the transversal networker.

Influencers are tough to identify because they don't have a special title or office in the organizational hierarchy. They're just someone who uses an otherwise innocuous institutional position to do the best work in the company. Their relative invisibility is an asset because they build and maintain their relationships to facilitate productive flows within the meat of a company.

No one else can really do the work of this kind of interstitial networker. I mean, everyone can build relationships and coordinate projects across the departmental and hierarchical divisions of a company if they want to. But no one will do it in exactly the same way as anyone else. The skills and personality of each person are singular, and all the brokerage that an interstitial networker does is done through informal relationships.
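Going back to the identification problem for a moment: if you did somehow have interaction data that was gathered ethically – say, a voluntary survey of who people actually go to when they need something from another department – one standard way to surface likely brokers is a network measure called betweenness centrality. Here's a minimal sketch in Python using the networkx library; the names, departments, and conversation counts are invented purely for illustration, not drawn from any real company.

import networkx as nx

# Hypothetical record of cross-departmental conversations:
# (person, person, how often they talk in a typical month)
interactions = [
    ("Ana (Finance)", "Raj (IT)", 14),
    ("Raj (IT)", "Mei (Marketing)", 9),
    ("Mei (Marketing)", "Ana (Finance)", 3),
    ("Raj (IT)", "Omar (Operations)", 11),
    ("Omar (Operations)", "Lea (HR)", 6),
    ("Lea (HR)", "Ana (Finance)", 2),
]

G = nx.Graph()
for a, b, count in interactions:
    # Betweenness treats the edge weight as a distance, so invert the count:
    # people who talk more often are "closer" in the network.
    G.add_edge(a, b, distance=1.0 / count)

# A high score means a person sits on many of the shortest paths between
# otherwise separate parts of the organization, which is what brokerage
# looks like structurally.
scores = nx.betweenness_centrality(G, weight="distance")
for person, score in sorted(scores.items(), key=lambda kv: -kv[1]):
    print(f"{person:20s} {score:.2f}")

The point of a toy like this isn't the specific numbers. It's that brokerage shows up in the structure of who connects whom, not in anyone's job title, which is exactly why these people are invisible on the org chart.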

One of Deleuze's famous metaphors for how we think
knowledge should work is a tree: the roots are
foundations for the most beautiful growths of leaves.
He emphasized that knowledge really worked as
rhizomes, centreless networks of information flows.
Well, roots can't live without the rhizome fungi
threaded through them.
They can’t be made official, since their power comes from cutting through and skipping over official communications channels. This is the paradoxical challenge of internal corporate communication that I’d love to take on. A core duty is maintaining official channels. Yet exemplary work comes from facilitating these transversal flows and coordinations. That's where the real innovation in any organization comes from.

This is the kind of person I want to be in my business career, and it’s the kind of person we should all want to be. Someone who knows when official channels are required, but also knows their limitations and how to overcome them without neglecting or destroying them. This gets back to the problem I mentioned earlier about how hard these people are to find. Well, when you start acting transversally in an organization, the transversal actors tend to find each other. Transversality encourages innovation, but such activity needs the official frameworks so the interstitial actors have a place to rest and marshal their own resources. 

Complex webs of fungus are necessary to maintain a tree's health, funnelling nutrients throughout its root system. But the fungus needs the roots as a skeleton to give itself form. Transversality and hierarchy are built on opposing principles, but can only live through mutual aid. It’s the most productive conceptual paradox in living systems.

Difference and Complexity Makes Good Art, Composing, 24/04/2015

Yesterday, I made a modest contribution to the progressives’ side of the Sad and Rabid Puppies debate. I wrote that post from my perspective as a fan of science-fiction literature (as well as literature in general) and a political progressive who believes that our lives improve through complexity, diversity, change, and difference.

But I'm also a science-fiction writer, even if I'm just at the start of my career and I've barely been able to do any promotion. And as a science-fiction writer, I believe that good art comes from exploring diverse styles of narrative, figuring out concepts and characters that have never quite been seen before. The collision of characters with different cultural histories and ways of seeing the world makes for new personalities in literature.

Diversity in this context is literally the injection of difference in art. So of course I'm opposed to the ideology of the Puppies. Art made without bothering to explore difference, whether inserting cultural differences or colliding genres, is not good art. Good art doesn’t repeat the same ideas that it always has. It innovates. 

Not only do the Puppies believe that art whose themes express cultural diversity could only win awards through a conspiracy of empty affirmative action, they believe that award-winning art should be simple and conservative. The first insults minorities. The second insults everyone's intelligence.

Being disturbed is part of enlightenment.
Just go with it.
I have a single book of sci-fi available right now. You can still buy Under the Trees, Eaten at Amazon, and I encourage you to do that. Its protagonist is a woman; its other major character is a Métis man from Quebec. Exploring their cultural and gender differences is a key underlying tension in the entire book’s narrative.

And that isn’t even the main focus of the story. It just accompanies it, giving you an extra layer of ideas in the text to think about. I like stimulating art, and so I write it. I’m sorry I haven't been able to promote it much, because so much of my time has been taken up by my communications college program. Now that I have more time to explore Toronto, I can find venues for readings and prepare some of the multimedia shows that I want to work on.

Also, artistic impulses never stop until after you die. So I'm already trying out new ideas for fiction work, and I want to keep working in sci-fi.* The Star Trek story I was playing with last month has a curious potential to it.

* I’m going to start revising the A Small Man's Town novel manuscript into linked short stories. But I don't really feel like this is a new project as much as a remix of an old work into a more marketable format.

The Puppies have actually inspired me to expand this idea into something more than an exercise. The sub-genre that the Puppies, especially the Sad ones, believe is ignored is military science fiction. These are usually sci-fi war novels, the worst of which tend toward culturally conservative themes that one-dimensionally value militaristic jingoism, empty patriotism, and an aesthetic of ubiquitous violence.

Star Trek provides an interesting quandary, which the excellent Vaka Rangi blog has led me to explore. Star Trek is a military setting, as Starfleet is an openly military organization, but the overall professed values of the Federation, and the values that the Enterprise crew try to live, are those of peace, diversity, and enlightened openness. They live these values not just culturally, but ethically and ontologically. They sometimes face problems that can only be solved by conceiving of entirely new modes of existence.

Why should Star Trek have a monopoly on such a progressive vision for military science-fiction and humanity as a whole? The characters are all already original. Why not change all the details of the world of Beyond the Farthest Stars, and release it as an epic military sci-fi novel motivated by progressive, diversity-minded political and ethical ideas?

Here’s how such a book would go. I'd drop the episodic structure of a television show and integrate the storylines. The first part would play out pretty much as the first episode summary I wrote on the blog in March. 

Ensign Quentin Nichols joins the crew of the Illumination in a series of comedic incidents as the Academy valedictorian befriends the hard-partying men of the bridge crew, including the pan-sexual helmsman Paul Diamond. 

The eight-person cast of characters is introduced, with a special focus on Security Chief Natalie Bondar's personal history. A standard pilot-episode plot unfolds in which main characters are endangered while rescuing the crew of a damaged freighter, but everyone is excitingly saved.

The freighter itself is carrying contraband, the matter transmuter devices that have made capitalist production obsolete in the Confederation. These machines are tightly controlled because they give the Confederation its decisive advantage in galactic politics: its entire population is prosperous without any dependence on boom-bust market cycles. The realization of what is essentially a communist utopia has been turned into a weapon of realpolitik and imperial expansion.

In the shadow of the first adventure, BFS’ main plot is introduced. A rival galactic empire may be stoking new conflict on a human colony, a Confederation protectorate that spent 15 years in brutal civil war. The world and the war where Bondar was born.

Part Two finds the Illumination pursuing a lead on Lovanek, the Lacedian covert operative who fuelled the arms trade to both sides in that war. Through a reasonably plausible narrative contrivance, he and Bondar have personal history. When Bondar and Captain Bajwa find Lovanek, they come under fire from gangsters who’ve come to kill him, so the crew find themselves having to protect a gunrunner and war criminal. I imagine patterning the gangsters after Mexican cartels like the Zetas.

After fighting off the gangsters, the crew discovers evidence linking the Space-Zetas to the stolen matter transmuters, which means Lovanek is involved there too. Bajwa finds himself having to promise Lovanek asylum in the Confederation if he’ll hand over all his evidence about the contraband transmuter trade and betray the Lacedian conspiracy on Bondar’s homeworld. 

While all that political narrative is playing out, I’ll also include something more like a typical first contact story. A scientist on a world that doesn’t yet have interstellar travel is experimenting with a transporter while the Illumination is doing a scientific scan near the planet. He’s beamed aboard for a short set of adventures, learns much about the potential for creatures of many kinds to live among the stars, and returns home confirmed in his conviction to advance his civilization.

The details of the overall story are still sketchy, of course. I’m not yet sure what that conspiracy to start a new war on Bondar’s homeworld will involve. I know it’ll have something to do with the transmuter smuggling and the moral consequences of the Confederation so callously manipulating galactic politics with utopian technology. I know it’ll involve a trip to Khohav ben Zion, the Jewish-Yazidi world-ship where Chief Science Officer Solomon was raised. I know the book will have four parts. I imagine I’ll steal a lot of narrative and world-building techniques for balancing micro and macro social scales from Tolstoy’s War and Peace, and from Stendhal’s The Red and the Black.

Above all, the climax of the story must come on Bondar’s homeworld, and it must involve an intense sequence of action and character narrative between Bondar and Nichols. Nichols is a creature of privilege: not only is he valedictorian in an institution that values intelligence, and clearly the meta-textual Mary Sue figure of the story, but he lives in an entirely privileged society. 

Ubiquitous matter transmutation technology means that no one is poor, hungry, or starving. Prosperity is genuinely universal on worlds where the Confederation government allows transmuters, and Nichols has never lived on worlds without this privilege.

Bondar is literally a child of war. Her entire childhood and adolescence, from age 2 to 17, was consumed by a planet-wide civil conflict. She didn’t join the Confederation because of its material prosperity, which she found hypocritical, because that prosperity is dispensed as a reward for compliance. She joined the Explorer Fleet because her friend Sidarth Bajwa showed her how journeying through the skies, progressing and learning, was the greatest life there could be.

How Could Anyone Think Differently Than Me When I've Never Met One? Advocate, 23/04/2015

I created the Advocate category of post so I'd have a serious-sounding label when I discussed serious issues. These would be problems on the level of the Syrian-Iraqi war,* the politics of modern anti-Semitism, and the Ukrainian civil war. Now, I'm using this label to talk about a group of culturally conservative nerds who have organized a campaign to hijack the prestigious Hugo Awards for science-fiction literature.

* Can we just start calling it the Third World War? In terms of geographic area, it’s big enough to qualify, with utter chaos in Libya, along with the war’s major front, the collision of anti-Assad insurgents with his Iranian-backed government, alongside Kurdish and Iraqi government forces fighting Islamic State. As well, there’s the desperate and rather clueless American and Canadian involvement in the Syrian-Iraqi front as bombing forces. I think in future, I’ll just refer to this conflict as the Third World War. 

Someone has different morals that challenge the universal
validity of mine? Maybe I should call the Waaaah-mbulance.
I know it doesn’t have quite the same life-and-death, civilization-defining stakes, but it’s a grassroots, organized expression of a political movement in my own culture to silence and destroy the social and personal voices of, basically, anyone who isn't a relatively wealthy white male Christian. It’s a political movement that controls much of the United States government, and which has a powerful hold on the Canadian state as well. Its violence is slow and creeping, and so more difficult to notice, but it’s still a war.

Phil Sandifer has already made the definitive statement on the Sad and Rabid Puppies’ hijacking of the Hugo Awards. You should definitely read that. But read it after you finish reading my post, because his essay is a good 20-minute read.

Phil describes the Sad and Rabid Puppies as expressing a modern fascist ideology and aesthetic for the culture of early 21st century America. It's an analysis that speaks to my own concerns as I work on the Utopias project about the dangers of organizing a political movement to forge an ideal form of society. Any such movement is inherently authoritarian, its reach penetrating every aspect of every individual’s life to maintain each member’s perfect and total conformity to the ideal.

It’s the politics of the end of history, the final perfect form of humanity that, once achieved, must not change because any deviation would be a corruption. This describes the politics of Rabid Puppies leader Vox Day pretty decently.

The injustice of this category of ideology, and the singular horror of Vox Day’s eugenic Christian version, is addressed quite well in Phil’s essay. I want to discuss a side issue that came up in the comments to Phil’s essay, which has more to do with the generally conservative Sad Puppies.

Space battles are fun and war stories are interesting,
but sci-fi literature can accomplish much more than
war story after war story.
The Rabid Puppies are a group of virulent gamergaters that the radical feudalist and eugenicist Christian sci-fi author, editor, and publisher Vox Day organized this year, which successfully placed a slate of poorly written, politically conservative military sci-fi in the bulk of the literary nomination categories of the Hugo Awards. This group only organized over the course of 2014. The Sad Puppies are a group of less insane social conservatives, led by Brad Torgerson and Larry Correia, that has been trying and failing to organize sufficient numbers for their slate of poorly written right-wing military sci-fi for the last three years.

The Puppies slates are essentially the work of a group that had to organize to get its choices onto the nominating ballot over the popular preferences of dedicated WorldCon members. Torgerson himself commented on Phil's essay that he can't believe works with socially progressive themes would have dominated the Hugo nominations without an organized voting slate among progressive-minded people and self-identified Social Justice Warriors™. It’s just that the leftists don’t admit it. Torgerson writes:
“Mr. Sandifer, if you truly believe that a book like ANCILLARY JUSTICE or a story like ‘The Water That Falls On You From Nowhere’ did not benefit from a tremendous groundswell of affirmative-action-mindedness, you're not paying attention. Please phone me when you're interesting in discussing diversity beyond a skin-deep level. Quote Larry Niven: there are minds which think as well as yours, just differently.”
Of course, the reason we don't admit it is that there is no such organized voting bloc of progressives (Happy Kittens, perhaps?). A focus on more diverse characters and complex storylines and worlds creates more interesting artworks with more potential for artistic achievement.

Even when I do think of war movies, anti-war movies
that shy away from the glorification of battle tend to be
better than those that revel enthusiastically in the
battlefield. I always think of John Wayne's The Green
Berets as the worst offender, a movie so jingoistic that I
can't sit through it. Good art provokes thought and
critique, and reinforcing cultural prejudices gets in
the way of solid art.
Yet Torgerson, the aggrieved conservative, does not believe that a popular audience genuinely finds work with socially progressive themes superior. I wonder how much of this attitude is a result of simply not having anyone in his social circles with different political beliefs. If you don’t know anyone with different beliefs, then you'll have trouble understanding how people could hold those beliefs.

It is very easy for people to take their beliefs about how the world is as self-evident. It’s how political extremism grows, through insularity and a lack of questioning. Being surrounded by people who mirror your own beliefs makes you think that there's no sensible alternative way to think or live.

Such a social situation is horribly dangerous, because when such a sheltered person at last meets someone different, the encounter is shocking and disturbing. You feel threatened by the existence of difference alone, which is a dangerous attitude to hold in a society where there is any political, cultural, or moral difference at all.

Beyond this, when your own social world contains such remarkable moral unity, you can easily come to believe that the larger world has the same character as your small world. Your small world seems the size of the whole world. So not only can you easily become afraid of difference, but you can’t even conceive of difference.

If difference does intrude upon your insular small world, the most reasonable explanation for its existence is not that the general population differs from your opinion and you’re actually in the minority. It's that the different one is an insidious minority disturbing the self-evident truths that the majority, along with you, must know.

Torgerson accuses the left of being such a community, a small insular world whose members can't conceive of anyone different from them. Yet the conservatives are the ones who actually had to organize a hijack on their own behalf. 

Time Makes a Sage, Research Time, 22/04/2015

How quickly does humanity forget? Maybe it only takes a thousand years. If there are creatures out there who think on the temporal scales of planets and stars, and we can certainly conceive of them if we could produce Douglas Adams, they would condescend to us as we do to gnats, goldfish, and fireflies. Pretty creatures, we humans, who live in the pure flash of the present.

Only a thousand years. 

How long is an infinity for human memory?
I say this after reflecting on a curious point that Barry Allen raises in the chapter of his Vanishing Into Things about the Neo-Confucian school of medieval Chinese philosophy. This was a school of thought that began with a scholar named Zhu Xi, who was the first to organize and codify the works of Confucian genesis as we have them today. These would be the Four Books: The Analects, Mengzi, The Great Learning, and Doctrine of the Mean.

The funny part is, as Barry mentions, Kongzi, the man we call Confucius, could never have read all the books that constitute the foundational corpus of the philosophical tradition to which we give his name. The foundation itself was only indexed and laid well over a thousand years after his death.

There's that number again. Over a thousand years.

In the Chinese language, the idiom for a number so large as to be uncountable is “the ten thousand things.” Not meant to be taken literally, of course, just a poetic way of expressing in everyday colloquial language that some idea, experience, or referent lies just beyond the human ability to make direct sense of it.

The Russians say “numberless” to express this idea. For English, it used to be infinite, though that term has taken on quite a few very precise meanings over the last couple of centuries. Blame Georg Cantor for taking some of the poetry out of European languages and blasting it into logic and mathematics instead. Perhaps we could say sublime, but I think we’ve drifted away from that word by now.

It’s a reasonable question to ask though. What are the limits to human understanding? In the context of this curious little historical idea I had the other day reading the chapter of Vanishing Into Things about Neo-Confucianism, you could ask it with a slightly different focus. How long does it take for us to lose our connection with our ancestors?

At my current age, I even realize that the band Sublime
just isn't the same as it felt when I was 19.
I have this thought because I compared the reverential attitude Zhu Xi and the other scholars in this revival of Confucian philosophy held toward Kongzi, Mengzi, and Xunzi, with the reverence that those three legendary thinkers had for their own era of sages and idealized perfections, the Xia Dynasty.

There were important differences, of course. So little of Xia writing and thought survived in a direct form even in Kongzi’s day, for example, whereas enough evidence of the Confucian genesis survived that Zhu Xi could edit the whole corpus into a single, legendary omnibus. But there is still that feeling of reverence, as though achieving wisdom equal to theirs is the goal.

Kongzi never thought of himself as a sage, probably not even as someone wise enough to idolize at all. Kant never thought of himself so highly either. Even so, I’ve met lots of Kantians.

It was about a thousand years between the Xia era and Kongzi’s life. It was just over a thousand years later when Zhu Xi began the Confucian revival, a reaction to the growing prominence of Buddhist philosophy in Chinese society. New gods returned.

A thousand years doesn’t take that many generations to unfold. Humanity's memory is short.

Inspired by the Original Wu II: Creative Philosophy Is About Thinking Differently, Composing, 21/04/2015

Continued from last week . . . Here’s the short version of what I’m thinking of doing with Sunzi’s Art of War for the Utopias project. I literally thought of this idea on the subway Friday morning, so it won’t be very well-developed or investigated, barely even thought through beyond the initial notion that it sounds like a cool idea. This is an idea so raw, it smells funny.

The United States, for example, has not been very good
at waging war without relying on brute force.
Sunzi stresses the centrality of deception to war. By deceiving and tricking your enemy, you can minimize the human and material cost of war by enormous measures. It’s a way of managing conflict using intelligence and intervening in the initial conditions of phenomena before they grow out of control. The power of individuals working together, applying their individual ingenuity and skill to a common goal, is at the centre: generals, spies, soldiers, technicians, and citizenry all contributing individually.

The general remains central in Sunzi’s own text, appropriate for the authoritarian political traditions of ancient China. But the importance of ingenuity and intelligence to succeeding in conflict offers an alternative, and probably more successful, model of global conflict and intervention. The American government could certainly learn that there are many subtler methods of undercutting your enemies than the blunt force of economic sanctions and bombing campaigns.

Bouncing Sunzi's ideas off a philosophical exploration of the First World War would, according to many conversations I’ve had about academic rigour, never fly. Jünger, to take an example that I’ll visit in detail soon, never had any detailed engagement with Sunzi’s writing, and himself had little intellectual engagement with Clausewitz, being only an ordinary soldier. But this is, perhaps, one advantage of coming from outside the university system.

Be aware that I’m not trying to excuse actual sloppiness in scholarship. I actually see Barry Allen as doing something similar with Vanishing Into Things, where comparative philosophy becomes not about historical influence or conceptual cataloguing alone, but about seeing whether some interesting new concept can emerge from the collision of thoughtful readings of texts that are otherwise alien to each other.

Barry’s third chapter is a dialogue between Sunzi’s and Clausewitz’s thought, with just the framework I’ve described. There’s no historical evidence that Clausewitz ever read Sunzi himself, and if he had, he probably would have found it irrelevant frippery, what with all the Chinese scholar’s talk of intelligence and deception when only physical force and the brutality of character to finish the job is needed for war. Barry plays the texts off each other to see what we can learn.

Alan Sokal with the best descriptive caption he could get.
I see Steve Fuller offering a similar philosophical technique in the closing section of his Knowledge: The Philosophical Quest in History.* I’ll talk about this in more detail on Friday when we publish our last dialogue on the blog. But it has to do with his last word (even though it’s never the last word as long as someone keeps trying to have it) on the Sokal Hoax.

* Given how much Barry couldn’t stand Steve’s book, I think he’d understand my impish smirk when I identify something in common between his thinking and Fuller’s. I have some pretty serious issues with the content of Fuller's philosophy, although I find much to admire in his method and approach, broadly conceived. More details on Friday.

Essentially, his critique of Alan Sokal’s largely successful denunciation of humanities scholars trying to discuss scientific ideas in the context of humanities scholarship is that Sokal didn’t understand the audience that humanities scholarship is written for. Philosophers in the intellectual circles of cultural studies who gave political readings of scientific concepts weren’t talking to the scientists themselves, but trying to craft more populist versions of the essential concepts of new scientific principles.

Let me put it this way. How many actual physicists in the 18th century literally thought of the universe as a clock? Probably not many, and none of the good physicists. But the clockwork image of the universe dominated the popular philosophy of the post-Newton era. It inspired the ontology of mechanical determinism. The cultural studies scholars were trying to craft the same kind of broadly applicable, popularly aimed intellectual concepts for modern sciences like quantum physics.

Here’s a cheat code for my own work. Ecology, Ethics, and the Future of Humanity does the same kind of thing, crafting politically actionable, popularly understandable (if still a little intellectual and pretentious) conceptions of core concepts in ecological science.

It’s as if I’m saying to the reader, “Here is a different way of thinking than we’re generally accustomed to. Let’s consider its meaning, implications, and capacities, and see if there’s anything we can learn from it to adapt to the situation we face here and now.” This is philosophical creativity.