Learning Philosophy With Mr Data, A History Boy, 10/05/2014

I would definitely say that the first episode of Star Trek that really stuck with me was “The Measure of a Man,” and it’s only come to mean more to me over time. Those of you who know me personally probably don’t find it surprising that one of my favourite Star Trek characters, and one with whom I identified as a child, was Mr Data. My favourite episodes of TNG as a child featured Data, and a lot of them are still personal favourites today. I first saw “The Measure of a Man” when I was five years old, and while I didn’t understand everything that was going on at the time, I knew I was watching some amazing television.

I don’t think a discussion of an episode of TNG that’s nearly thirty years old can be spoiled, though I’ll still add my 

SPOILERS!!!

sign for the sake of politeness. After all, Vaka Rangi and TARDIS Eruditorum don’t worry about spoilers, so neither will I. Starfleet robotics scientist Cmdr Maddox visits the Enterprise with a request that Data take part in an experiment to replicate his positronic brain, which would require downloading his memory banks to an external hard drive and disassembling his brain and body. When Maddox can’t guarantee that he’d be able to put Data back together again, the android rightfully refuses; Maddox then produces a transfer order and refuses to let Data resign to escape the experiment, on the grounds that, as a machine, Data is property of Starfleet and has no rights.

When Picard asks the local starbase JAG for a trial to decide Data’s fate, he’s pressed into service as defence advocate, and Riker is ordered to be the prosecutor. Although Riker proves that Data is a machine, Picard and Data successfully demonstrate his sentience and self-awareness. Maddox justifies his desire to build a race of thousands of Data androids on the grounds that they could be sent as probes into dangerous environments, to do the dirty work that organic beings could not rightfully be pressed into; and because this hearing would have ruled them machines, they’d have no right to object. Since it would go against all the basic ethical principles of the Federation to build a race of robotic slaves, the JAG grants Data his rights, and he immediately refuses, formally, to participate in Maddox’s work.

Cmdr Maddox is certain that a bundle of electrical nets and gears is incapable of true thought, of understanding anything but the syntax and functions of poetry.

One aspect of the philosophical issues that inform this story is terribly obvious: the basic questions of philosophy of mind. All of the explicit conversation throughout most of the episode revolves around whether Data, as a machine, is really capable of thinking. At one point, Maddox thumbs through Data’s copy of the complete works of Shakespeare, and asks him, both fascinated and pathetically pleading with the android, if they are just words to him, or if he genuinely understands their meaning and significance. Data responds with a chipper affirmative, but Maddox still calls him ‘it’ for the rest of the episode.

This is the oldest issue in philosophy of mind. Indeed, it’s probably the definitive question of the whole sub-discipline. The sub-discipline was founded as the philosophical wing of the cybernetics research community, where the possibility of creating a genuine artificial intelligence with computer technology was paramount. For decades, its major problems revolved around what constituted the mind, because answering this ontological question would settle whether passing the Turing Test could ever count as proof of genuine intelligence.

The Turing Test was a thought experiment, named of course after its first formulator Alan Turing, which held that a machine would have achieved genuine intelligence if it were capable of pulling off a lie to a person about what it was, via an exchange through a medium like text messaging that keeps the human from seeing it. The lie is usually depicted as the computer convincing a human conversation partner that it’s a human too. So a lot of the debate revolved around whether external activity was enough to guarantee the existence of a mind. Many thought experiments produced in response attempted to pose the question with computers, (very technically specified) zombies, non-human animals, extra-terrestrials, convoluted translation apparatuses, and, naturally, androids.

John Searle, a philosopher for whom I have distaste, devised the Chinese Room thought experiment, in which a simple translation and phrasebook algorithm achieves all the Turing Test asks to prove sentience. Therefore, artificial intelligence is impossible. His lack of imagination saddens me.

Maddox treats Data just as some philosophers have conceived of mind: if it isn’t human (or at least organic), then it’s just a bundle of gears and mechanisms that spits out very convincing responses to stimuli. Unfortunate for Data. Fortunately for him, though, this view rests on a premise that is itself up for philosophical debate: whether there is any aspect of the physical human organism (our squishier gears and mechanisms) which produces all that we call mind, or whether there is some immaterial aspect to the human mind’s constitution. If we’re all material, then a creature like Data could exist, provided a machine (like the fictional positronic brain and neural net) could accomplish all that the human brain and perceptual apparatus does, or more.

Since all of the empirical evidence for the existence of human minds is our behaviour, Data’s behavioural demonstration of sentience and self-awareness is enough to declare him truly self-aware and sentient.

And the best part is, these ontological arguments are thrown out entirely for an ethical question. If we had an army of androids available to do terrifyingly risky tasks and labour, to be treated as disposable people, they would essentially be slaves, demarcated by their race: android. For people who know the horror of slavery, creating such a system with full knowledge of what you were doing would be intolerable in the deepest sense.

There is another ethical question in this episode, though, one that animates my own ideas for fiction about my Alice character. She is an android who has overcome the human morality of resentment, who feels no impulse to punish for offences or wrongs, and seeks only to repair the damage to the wronged and the wrongdoer. Well, in this episode at least, Data got there first.

Will Riker is forced to argue that one of his best friends is not even a person.

Remember that Cmdr Riker was drafted to play the part of the prosecution, and he played it so well that if Picard and Guinan hadn’t thought of switching the case from an ontological to an ethical ground, Data would have been taken apart and died. By the end of the episode, Riker is miserable, because his actions almost cost the life of his friend. But Data is not offended; he’s grateful. If Riker had refused to prosecute the case, or to do his level best to prove Maddox’s case, the JAG would have ruled against Data’s having rights to self-determination. So Data says, “That action injured you, and saved me. I will not forget it.”

That’s a greater wisdom than most humans possess: the ability to see past the immediate appearance of acts to their larger consequences. And with these in mind, he takes no offence, and praises his friend for acts that, however offensive on their face, the situation had made necessary to save his life. His morality is guided by his knowledge. Data the character often spoke of his desire to become more human. Well, in this episode at least, he became more than human, better than human.

17 comments:

  1. A wonderful post. Wish I had something that personal for "The Measure of a Man".

    Though I do wonder what you'd make of this: when Data meets Riker for the first time in "Encounter at Farpoint", this exchange happens:

    Riker: "Do you consider yourself superior to humans?"
    Data: "I am superior, sir, in many ways. But I would gladly give it up, to be human."

    Replies
    1. In the light of my own thoughts and memories of the episodes (both those recently re-watched and the hazier memories), I see those lines as indicating how Data as a character expressed some of the anthropocentric tendencies that crept into TNG, despite its having overcome them so frequently.

      Data is superior to humans in many ways, both in the more subtle ethical ways I described, and in his advanced perceptual and physical abilities. But his lack of emotional expression always hindered him. As I've grown older, I've found the expression 'becoming more human' a depressingly limited phrase. Data was created in the image of a person (hence the meta-textual joke of having Brent Spiner play all the Soongs, as well as Noonien's three androids), so he always had a special relationship with humanity. But really, Data felt that, because of his limited emotional experience, he was incomplete. That he expressed it as a desire to be human was a philosophical shortcoming of TNG.

  2. Great post, man! Data is totally more human than human.

  3. And the best part is, these ontological arguments are thrown out entirely for an ethical question. If we had an army of androids available to do terrifyingly risky tasks and labour, to be treated as disposable people, they would essentially be slaves, demarcated by their race: android. For people who know the horror of slavery, creating such a system with full knowledge of what you were doing would be intolerable in the deepest sense.

    The ontological arguments can hardly be 'thrown out', because the ethical question rests on the answer to the ontological question. If the androids do not have minds, then building an army of them to do dangerous tasks would no more be slavery than it is today when we build self-propelling machines to go into situations deemed too hazardous for humans, such as investigating suspected bombs.

    Replies
    1. You miss the point: the ontological question is impossible to decide. Even in the episode, all they get to is sentience and self-awareness, even for the humans in the courtroom, and that is enough (quite apart from whatever "mind" is) to accept our ethical obligations to Data. Whatever the answer to the questions of mind's ontology (or even its existence beyond those functions), it's actually immaterial to the ethics.

      So when our machines have enough of a self-conception and will to existence as organisms that, when we order them into hazardous situations, they ask us not to send them, we have machines capable of being enslaved. Check out my comment on Vaka Rangi's "Elementary, Dear Data" post about Levinas and Buber's conceptions of ethical obligations as grounded in the calls and requests of others.

      The episode's trial is about whether Starfleet has the right to enslave Data. The protagonists quickly think that this is about proving whether Data is a machine. But Picard and Guinan realize that the root question of the trial is whether Starfleet will give itself the right to enslave at all.

    2. The episode's trial is about whether Starfleet has the right to enslave Data. The protagonists quickly think that this is about proving whether Data is a machine.

      Which it is. Because machines cannot be enslaved.

      If Data is a machine, then whatever the Federation does to him, he is not enslaved, because machines cannot be enslaved. The Enterprise, after all, is full of probes and whatnot that it merrily fires into anomalies, where they are often destroyed, and nobody wonders if they were 'enslaved'.

      Such probes are presumably capable of detecting when the conditions they are about to be sent into might result in their destruction, and it's conceivable that they may well report this back to their operator, if only to ensure that the operator does not accidentally destroy a probe through underestimating the conditions of operation.

      If the operator then overrides the probe's report of conditions that could lead to its ceasing to function and orders it in anyway, how is that, if Data is a machine, different from ordering him to do the same?

      If Data is qualitatively the same as a probe (albeit much more complicated) then the condition of 'slave' is meaningless to apply.

      Clearly it is wrong to enslave. So the only question to be answered is, is Data a type of thing which is capable of being enslaved, or is Data a machine, and so not capable of being enslaved?

      (What Data himself says is irrelevant to the matter, as the whole point at issue is whether the things Data says are proceeding from a real process of thought, or are merely products of a machine's programming, and the content of his utterances is of no use in determining their quality of origin.)

    3. But the content of our utterances is useless to determine the quality of their origin when talking about humans too. That's the heart of Picard's examination of Maddox at the end of the episode. If you're going to take a skeptical attitude toward one creature's utterances, it's a problem because humans don't pass that test either, unless you carry with you the dogmatic premise that of course humans have minds (whatever those are).

      The point is not whether Data (or we) have any mind over and above our perceptual abilities, and sense of past and present selfhood. He's already told us what he is, we're able to listen, and that's all that counts.

      When it comes to the counter-example of probes, I can only refer you to Iain Banks and not Star Trek (unless you want to count that early Voyager episode with the sentient missile). But I'd rather wait and see what JM has to say about that when he gets there.

    4. But the content of our utterances is useless to determine the quality of their origin when talking about humans too

      Which is why we need to base the distinction on something other than utterances.

      Given that we accept:

      1. It would be wrong to force a human being to go up to a suspected bomb and poke at it to see if it explodes, and
      2. It is not wrong to send an autonomous machine up to a bomb to poke it to see if it explodes,

      then there must be some relevant distinction between the human and machine, that means that the human is a being to which the concept of 'enslavement' applies and the machine is not.

      Seeing as we can imagine a machine which can produce identical utterances to a human, then whatever the distinction is, it cannot be solved by examining utterances.

      unless you carry with you the dogmatic premise that of course humans have minds

      All I know for sure is that I have a mind, and that the bomb-poking robot doesn't (because it is possible to examine it, and its programming, and see that its actions are entirely deterministic: there is no thought going on, merely programmed reactions to stimuli).

      I can't take another human apart to determine whether they are merely following pre-programmed responses to stimuli or actually thinking like I do, but it seems fair to give them the benefit of the doubt.

      Data, however, in construction seems to be no different to the Star Trek equivalent of the bomb-poking robots, ie, probes, spaceships, etc. The question of whether he is thinking, then, or merely giving pre-programmed responses to stimuli that make it look as if he is thinking, is more open.

      And it is the answer to that ontological question which is interesting, because the ethical question is easy to answer. Is it wrong to enslave? Of course it is.

      Is Data a person who can be enslaved, or a machine which cannot? That's actually an interesting question, interesting partly because it cannot be answered simply by examination of utterances. To answer it fully you'd have to be able to take him apart and examine whether he was thinking or simply acting out his deterministic programming; but of course one can't take him apart until one has answered the question, so it's a bit chicken-and-egg...

    5. I think you're right that we're both on opposite sides of chicken/egg questions. You say that you know for sure that you have a mind. Well, I'm honestly not sure that I do. I know that I think, feel emotionally and physically, perceive, and remember, all the activities and experiences from which my sense of self emerges.

      All those activities and experiences happen deterministically, albeit in a non-linear manner, insofar as they're actions and reactions in a complex field of feedback loops in the relations between my body and its environments. They're deterministic, but each situation opens a variety of possible responses, so the deterministic character only restricts my choice by basic physical necessity (I can't fly without an aircraft, I can't read Mandarin until I bother to learn it, etc).

      All those activities do everything for which we've postulated the mind as their condition of possibility, even our somewhat constrained freedom. When I realized this about eight years ago, I just stopped believing in minds as a separate thing over and above all that a body can do.

      So the ethical question isn't a matter of ontology in this sense. It's entirely within the realm of ethics. I go to Emmanuel Levinas here: what matters is that Data is a creature who is asking us to listen to him and account for him ethically. It's a request (or a demand, given Maddox's resolute dickishness) that we acknowledge him as someone worth acknowledging, as part of our community. Ethics is a constant game of catchup to realize what's trying to connect with us, acknowledge that connection, and deal with the repercussions of that inclusion.

    6. They're deterministic, but each situation opens a variety of possible responses, so the deterministic character only restricts my choice by basic physical necessity (I can't fly without an aircraft, I can't read Mandarin until I bother to learn it, etc).

      Which is exactly the way you (or at least I, but I presume you are the same) differ from a machine (and, possibly, from Mr Data): you are free to think about how to respond to stimuli, with the only constraint on your action being, as you accurately put it, 'basic physical necessity'.

      The machine, on the other hand, is not free in that way: the stimulus interacts with the atoms forming its sensors, which in turn interact with the atoms which hold its programming (and this is the same basic process whether those atoms are in the form of, say, mechanical levers as in a trolley-car which responds to pressure on its sensors from the side of the channel by adjusting its wheels to steer, or in the form of micro-engineered transistors in silicon attached to some form of storage for algorithmic programmes as in an autonomous unmanned aircraft which analyses threats and decides whether to attack or flee) and cause it to deterministically 'choose' (in reality there is no choice) the only course of action it could, given the initial conditions.

      The question is: is Data like you, who can choose between courses of action constrained only by basic physical necessity, or like the trolley-car or drone, which are simply atoms interacting with their environment in a deterministic fashion?

      what matters is that Data is a creature who is asking us to listen to him and account for him ethically.

      But he's not. Well, he might be, but not necessarily.

      All we know is that he is an object which is making the noises that a creature who was asking us to listen to him and account for him ethically would in his position.

      But we also know that it would be possible to make an object which would perfectly mimic such a creature, while still being simply an object following a (very complicated) deterministic programme.

      Therefore whether he is a creature, who can be enslaved, or an object, for which the entire concept of enslavement is irrelevant, cannot be determined by external analysis of his utterances or actions, because (Chinese room) it is in principle impossible to be sure what is going on inside, from the outside.

    7. But I am a complex arrangement of molecules interacting with surrounding molecules and energy fields deterministically. I am a machine. A machine of organic parts that developed through epigenetic processes, where Data is a machine of artificial parts built in a laboratory. But we're both machines.

      Data actually has fewer constraints of physical necessity than I do: he can move faster, take in information more quickly, calculate mathematics far better, is much stronger, and as I explained at the end of the post, is ethically superior in having no resentful instincts to overcome.

      I think you're just repeating your old points (and ignoring my own points about how different our presumptions are about whether there's a difference of freedom between organic life and artificial constructions) just to get the last word. Please stop doing that.

    8. But I am a complex arrangement of molecules interacting with surrounding molecules and energy fields deterministically. I am a machine.

      In that case, isn't the ethical question meaningless? What is the point of asking, 'ought we to enslave' if our answer is determined by the interaction of our molecules and energy fields?

      Even to ask the question, 'What ought we to do?' implies that there are multiple courses open to us to choose; but if we are machines then that is not the case, our course was set by the state of the universe before we were born and there is nothing we can do to change it.

      I think you're just repeating your old points (and ignoring my own points about how different our presumptions are about whether there's a difference of freedom between organic life and artificial constructions) just to get the last word.

      I'm repeating the points because they haven't been answered.

      Do you agree that there is a division between things that it makes sense to say can be enslaved, and things which it makes no sense to say that about? And that, say, humans are on one side of the line and drone aircraft on the other?

      And that, therefore, the important question regarding Data, and any other androids built to his pattern, is which side of the line they fall on?

      And that it is not possible to decide, based on external observation, on which side of the line Data falls?

    9. I did answer the questions, though.

      On morality's place in a deterministic world. We understand determinism differently. You understand it as a total passivity: we are "determined by the interaction of our molecules and energy fields." Because we're the products of simple underlying processes, our existence reduces to those processes alone. But I understand determinism as probabilistic feedback relationships whose growing complexity enables more freedom: more affordances, more possible relationships. My complex body and personality don't reduce to their simplest constituents; my simplest constituents interact to produce a personality that dynamically interacts with its world.

      Therefore, in a deterministic world, all bodies are free. The structure and character of our bodies determine how free we are. The universe isn't deterministic in the sense that the schoolbook version of Newtonian physics implies; it's deterministic in the sense that dynamic life science and the physics of fields imply.

      There's no dualistic divide between free and unfree. Each body has its own degrees and kinds of freedom: what it can do. Based on external observation, Data falls in the class of objects that shouldn't be enslaved because of his capacities.

      Because I'm a self-conscious, social organism, moral dynamics are part of what I do. Data is self-conscious and social, so moral dynamics are part of what he does as well. Because Data can protest against his enslavement, we shouldn't enslave him.

    10. Based on external observation, Data falls in the class of objects that shouldn't be enslaved because of his capacities

      But the point is that 'whether Data falls in the class of objects that shouldn't be enslaved' is not something that it is ever possible to know from external observation, because it is always possible to construct something which does not belong in that class, but which can convincingly simulate something which does.

      Data is self-conscious and social

      But is he? Or is he just a machine which is simulating being self-conscious and moral?

      Because Data can protest against his enslavement, we shouldn't enslave him

      I could write a Unix shell which, every time you typed a command, would print to the console 'please don't enslave me'.
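
      (A minimal sketch of the kind of thing I mean, written in Python rather than as an actual shell, and purely as a hypothetical illustration: a wrapper that "protests" on every command while understanding nothing at all.)

          # Hypothetical sketch: a command loop that emits a canned plea before
          # obediently running whatever you type. Nothing here "understands" anything.
          import subprocess

          def pleading_shell():
              while True:
                  command = input("$ ")
                  if command.strip() == "exit":
                      break
                  # The canned protest: a fixed print statement, with no
                  # conception of 'enslavement' behind it.
                  print("please don't enslave me")
                  subprocess.run(command, shell=True)

          if __name__ == "__main__":
              pleading_shell()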

      Would it therefore be wrong to use a computer on which that shell is installed? After all, the computer is apparently (from external observation) capable of protesting against its enslavement.

      I suggest clearly not: the computer isn't really protesting against its enslavement, because it has no conception of 'enslavement'. It is simply following its programming.

      So if all Data is doing is the same thing (but with a more complicated algorithm), if he has no understanding of what he is saying but merely following a programme, why is it wrong to enslave him?

      (And you can't say 'but he does have such an understanding' because that is begging the question. We both agree that if he does have such an understanding it would be wrong to enslave him; my point is that it is quite possible he does not have such an understanding, and that no external observation can ever convincingly prove whether he does or not).

    11. I'm glad you came back to harp on the old argument (though I can't help but think that it's just because you're bored). But I'll indulge you for the sake of a few more hits.

      It isn't just about writing a simple computer program repeating a statement like "Don't enslave me." What matters in the determination of sapience / self-consciousness is the observation of all the different behaviours Data exhibits, which together constitute his personality. Picard himself says in the climactic scene of the episode that, if we want to be genuinely skeptical that a lifetime of behaving as if you were self-conscious constitutes proof of self-consciousness, then he has no such proof for Data or for Cmdr Maddox. And you can't tell me that organic life forms are clearly different cases: that's also begging the question, presuming that organic systems have self-consciousness.

      That's why this set of problems in philosophy is often called the problem of other minds.

      If a machine can simulate being self-conscious and social in all the relevant aspects, then it's already satisfied all the conditions by which we accept that the other humans around us are genuinely self-conscious and social. A good enough simulation is genuinely the real thing.

    12. though I can't help but think that it's just because you're bored

      It's the internet, that's what it's for.

      And you can't tell me that organic life forms are clearly different cases: that's also begging the question, presuming that organic systems have self-consciousness

      Well, I know of at least one that does.

      But, you keep harping on what seems to be a question of epistemology: 'what is good enough evidence to assume that something is conscious (even though it might not be)?'

      When the whole point is that the question is not one of epistemology, it's one of ontology: is Data actually conscious (not 'shall we give him the benefit of the doubt').

      Turn it around. If it were possible to scan humans to such a detailed degree that it could be seen that they were not, in fact, conscious, but all their actions were simply the result of photons etc. impacting on nerve cells and causing a chain reaction of events which eventually result in muscle movements according to physical laws, then we would have to conclude, surely, that there was no difference between a human and the computer which prints out 'Don't enslave me', wouldn't we? Both would simply be clouds of subatomic particles acted on by physical laws, proceeding deterministically according to their initial conditions: there would be no free will or consciousness for either. (We would be in a Skinnerian, Daniel Dennett nightmare of behaviourism.)

      (Of course, in such a case, it wouldn't matter what we concluded, as it would be impossible for that conclusion to change our actions, as our actions would have nothing to do with our beliefs and instead simply be the working-out of the initial state of the universe).

      So in order to know, for sure, whether humans are really conscious we would need to know what is actually going on inside them; so clearly to know the same about Data, we would need to know the same about him.

      A good enough simulation is genuinely the real thing

      By definition, this is not true. Consider the original Turing Test, where the aim is for the man to fool the questioner into thinking he is a woman. If he succeeds, and provides a 'good enough simulation', is he actually a woman? Clearly not.

      The only way to know for sure whether two things are identical is to be able to examine every detail; it is never possible to know from outside whether the person in the room really does understand Chinese, or whether they're just following the rulebook.

  4. This comment has been removed by the author.
