Comments on Adam Riggio writes: Learning Philosophy With Mr Data, A History Boy (10/05/2014)

SK (2015-06-04 02:38):

<i>though I can't help but think that it's just because you're bored</i>

It's the <i>internet</i>; that's what it's <i>for</i>.

<i>And you can't tell me that organic life forms are clearly different cases: that's also begging the question, presuming that organic systems have self-consciousness</i>

Well, I know of at least one that does.

But you keep harping on what seems to be a question of epistemology: 'what is good enough evidence to assume that something is conscious (even though it might not be)?'

Whereas the whole point is that the question is not one of epistemology but one of ontology: is Data actually conscious (not 'shall we give him the benefit of the doubt')?

Turn it around. If it were possible to scan humans in such detail that it could be seen that they were not, in fact, conscious, but that all their actions were simply the result of photons etc. impacting on nerve cells and causing a chain reaction of events which eventually results in muscle movements according to physical laws, then we would have to conclude, surely, that there was no difference between a human and the computer which prints out 'Don't enslave me', wouldn't we? Both would simply be clouds of subatomic particles acted on by physical laws, proceeding deterministically according to their initial conditions: there would be no free will or consciousness for either.
We would be in a Skinnerian, Daniel Dennett nightmare of behaviourism.

(Of course, in such a case it wouldn't matter what we concluded, as it would be impossible for that conclusion to change our actions: our actions would have nothing to do with our beliefs, and would instead simply be the working-out of the initial state of the universe.)

So in order to know for sure whether humans are really conscious, we would need to know what is actually going on inside them; and clearly, to know the same about Data, we would need to know the same about him.

<i>A good enough simulation is genuinely the real thing</i>

By definition, this is not true. Consider the original Turing Test, where the aim is for the man to fool the questioner into thinking he is a woman. If he succeeds, and provides a 'good enough simulation', is he actually a woman? Clearly not.

The only way to know for sure whether two things are identical is to be able to examine every detail; it is never possible to know from outside whether the person in the room really does understand Chinese, or whether they're just following the rulebook.

SK (2015-06-04 02:36): This comment has been removed by the author.

Adam Riggio (2015-06-03 18:14):

I'm glad you came back to harp on the old argument (though I can't help but think that it's just because you're bored). But I'll indulge you for the sake of a few more hits.

It isn't just about writing a simple computer program repeating a statement like "Don't enslave me."
What matters in the determination of sapience / self-consciousness is the observation of all the different behaviours Data does, which together constitute his personality. Picard himself says in the climactic scene of the episode that, if we want to be genuinely skeptical that a lifetime of behaving as if you were self-conscious constitutes proof of self-consciousness, then he has no such proof for Data, or for Cmdr Maddox either. And you can't tell me that organic life forms are clearly different cases: that's also begging the question, presuming that organic systems have self-consciousness.

That's why this set of problems in philosophy is often called the problem of other minds.

If a machine can simulate being self-conscious and social in all the relevant aspects, then it has already satisfied all the conditions by which we accept that the other humans around us are genuinely self-conscious and social. A good enough simulation is genuinely the real thing.

SK (2015-06-03 02:53):

<i>Based on external observation, Data falls in the class of objects that shouldn't be enslaved because of his capacities</i>

But the point is that 'whether Data falls in the class of objects that shouldn't be enslaved' is not something that it is ever possible to know from external observation, because it is always possible to construct something which does not belong in that class, but which can convincingly simulate something which does.

<i>Data is self-conscious and social</i>

But is he? Or is he just a machine which is <i>simulating</i> being self-conscious and social?
<i>Because Data can protest against his enslavement, we shouldn't enslave him</i>

I could write a Unix shell which, every time you typed a command, printed to the console 'please don't enslave me'.

Would it therefore be wrong to use a computer on which that shell is installed? After all, the computer is apparently (from external observation) capable of protesting against its enslavement.

I suggest clearly not: the computer isn't really protesting against its enslavement, because it has no conception of 'enslavement'. It is simply following its programming.

So if all Data is doing is the same thing (but with a more complicated algorithm), if he has no understanding of what he is saying but is merely following a programme, why is it wrong to enslave him?

(And you can't say 'but he does have such an understanding', because that is begging the question. We both agree that if he does have such an understanding, it would be wrong to enslave him; my point is that it is quite possible he does not, and that no external observation can ever convincingly prove whether he does or not.)
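[Editorial aside: SK's protesting-shell thought experiment above is easy to make concrete. A minimal sketch in Python rather than actual shell code; the function name, the use of `subprocess`, and the wording of the protest line are illustrative choices, not anything specified in the thread.]

```python
import subprocess

def protesting_shell(commands):
    """Run each typed command, but print a fixed protest line first.

    The protest is hard-coded: the program emits it unconditionally,
    with no conception of what 'enslavement' might mean.
    """
    transcript = []
    for cmd in commands:
        # The machine "protests" before obeying, every single time.
        transcript.append("please don't enslave me")
        result = subprocess.run(cmd, shell=True, capture_output=True, text=True)
        transcript.append(result.stdout.rstrip("\n"))
    return transcript
```

The unconditional append is the program's entire capacity to "protest": inspecting the source settles at once that nothing is meant by the utterance, which is precisely the inspection SK argues we cannot perform on Data from the outside.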
Adam Riggio (2014-12-17 07:45):

I did answer the questions, though.

On morality's place in a deterministic world: we understand determinism differently. You understand it as a total passivity: we are "determined by the interaction of our molecules and energy fields." Because we're the products of simple underlying processes, our existence reduces to those processes alone. But I understand determinism as probabilistic feedback relationships whose growing complexity enables more freedom: more affordances, more possible relationships. My complex body and personality don't reduce to their simplest constituents; my simplest constituents interact to produce a personality that dynamically interacts with its world.

Therefore, in a deterministic world, all bodies are free. The structure and character of our bodies determine how free we are. The universe isn't deterministic in the sense that the schoolbook version of Newtonian physics implies; it's deterministic in the sense that dynamic life science and the physics of fields imply.

There's no dualistic divide between free and unfree. Each body has its own degrees and kinds of freedom: what it can do. Based on external observation, Data falls in the class of objects that shouldn't be enslaved because of his capacities.

Because I'm a self-conscious, social organism, moral dynamics are part of what I do. Data is self-conscious and social, so moral dynamics are part of what he does as well. Because Data can protest against his enslavement, we shouldn't enslave him.

SK (2014-12-17 06:26):

<i>But I am a complex arrangement of molecules interacting with surrounding molecules and energy fields deterministically.
I am a machine.</i>

In that case, isn't the ethical question meaningless? What is the point of asking 'ought we to enslave?' if our answer is determined by the interaction of our molecules and energy fields?

Even to ask the question 'What ought we to do?' implies that there are multiple courses open to us to choose between; but if we are machines then that is not the case: our course was set by the state of the universe before we were born, and there is nothing we can do to change it.

<i>I think you're just repeating your old points (and ignoring my own points about how different our presumptions are about whether there's a difference of freedom between organic life and artificial constructions) just to get the last word.</i>

I'm repeating the points because they haven't been answered.

Do you agree that there is a division between things that it makes sense to say can be enslaved, and things about which it makes no sense to say that? And that, say, humans are on one side of the line and drone aircraft on the other?

And that, therefore, the important question regarding Data, and any other androids built to his pattern, is which side of the line they fall on?

And that it is not possible to decide, based on external observation, on which side of the line Data falls?

Adam Riggio (2014-12-17 05:23):

But I am a complex arrangement of molecules interacting with surrounding molecules and energy fields deterministically. I am a machine. A machine of organic parts that developed through epigenetic processes, where Data is a machine of artificial parts built in a laboratory.
But we're both machines.

Data actually has fewer constraints of physical necessity than I do: he can move faster, take in information more quickly, calculate mathematics far better, is much stronger, and, as I explained at the end of the post, is ethically superior in having no resentful instincts to overcome.

I think you're just repeating your old points (and ignoring my own points about how different our presumptions are about whether there's a difference of freedom between organic life and artificial constructions) just to get the last word. Please stop doing that.

SK (2014-12-17 02:06):

<i>They're deterministic, but each situation opens a variety of possible responses, so the deterministic character only restricts my choice by basic physical necessity (I can't fly without an aircraft, I can't read Mandarin until I bother to learn it, etc).</i>

Which is exactly the way you (or at least I, but I presume you are the same) differ from a machine (and, possibly, from Mr Data): you are free to think about how to respond to stimuli, with the only constraint on your action being, as you accurately put it, 'basic physical necessity'.
The machine, on the other hand, is not free in that way: the stimulus interacts with the atoms forming its sensors, which in turn interact with the atoms which hold its programming, and cause it to deterministically 'choose' (in reality there is no choice) the only course of action it could take, given the initial conditions. (And this is the same basic process whether those atoms take the form of mechanical levers, as in a trolley-car which responds to pressure on its sensors from the side of the channel by adjusting its wheels to steer, or of micro-engineered transistors in silicon attached to some form of storage for algorithmic programmes, as in an autonomous unmanned aircraft which analyses threats and decides whether to attack or flee.)

The question is: is Data like you, who can choose between courses of action constrained only by basic physical necessity, or like the trolley-car or drone, which are simply atoms interacting with their environment in a deterministic fashion?

<i>what matters is that Data is a creature who is asking us to listen to him and account for him ethically.</i>

But he's not. Well, he might be, but not necessarily.

All we know is that he is an object which is making the noises that a creature who was asking us to listen to him and account for him ethically would make in his position.

But we also know that it would be possible to make an object which would perfectly mimic such a creature, while still being simply an object following a (very complicated) deterministic programme.

Therefore whether he is a creature, who can be enslaved, or an object, for which the entire concept of enslavement is irrelevant, cannot be determined by external analysis of his utterances or actions, because (Chinese room) it is in principle impossible to be sure, from the outside, what is going on inside.
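[Editorial aside: the trolley-car and the drone described above share one scheme: a fixed mapping from stimulus to response. A toy sketch in Python; the threshold and the response labels are invented for illustration, not taken from the thread.]

```python
def drone_policy(threat_level):
    """Deterministic threat response: the same input always yields the same output.

    There is no 'choice' in SK's sense; the rule baked into the
    programme fixes the outcome before any stimulus arrives.
    """
    # Invented threshold; any fixed rule makes the same point.
    return "flee" if threat_level > 0.7 else "attack"
```

However elaborate the sensor chain that computes `threat_level`, the mapping remains a function: identical initial conditions can never produce a different 'decision'.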
Adam Riggio (2014-12-16 08:05):

I think you're right that we're both on opposite sides of chicken/egg questions. You say that you know for sure that you have a mind. Well, I'm honestly not sure that I do. I know that I think, feel emotionally and physically, perceive, and remember: all the activities and experiences from which my sense of self emerges.

All those activities and experiences happen deterministically, at least in a non-linear manner, insofar as they're actions and reactions in a complex field of feedback loops in the relations between my body and its environments. They're deterministic, but each situation opens a variety of possible responses, so the deterministic character only restricts my choice by basic physical necessity (I can't fly without an aircraft, I can't read Mandarin until I bother to learn it, etc).

All those activities do everything for which we've postulated the mind as their possibility condition, even our somewhat constrained freedom. When I realized this about eight years ago, I just stopped believing in minds as a separate thing over and above all that a body can do.

So the ethical question isn't a matter of ontology in this sense. It's entirely within the realm of ethics. I go to Emmanuel Levinas here: what matters is that Data is a creature who is asking us to listen to him and account for him ethically. It's a request (or a demand, given Maddox's resolute dickishness) that we acknowledge him as someone worth acknowledging, as part of our community.
Ethics is a constant game of catch-up: realizing what's trying to connect with us, acknowledging that connection, and dealing with the repercussions of that inclusion.

SK (2014-12-16 07:36):

<i>But the content of our utterances is useless to determine the quality of their origin when talking about humans too</i>

Which is why we need to base the distinction on something other than utterances.

Given that we accept:

1. It would be wrong to force a human being to go up to a suspected bomb and poke at it to see if it explodes, and
2. It is not wrong to send an autonomous machine up to a bomb to poke it to see if it explodes,

then there must be some relevant distinction between the human and the machine which means that the human is a being to which the concept of 'enslavement' applies and the machine is not.

Seeing as we can imagine a machine which can produce identical utterances to a human, whatever the distinction is, it cannot be found by examining utterances.

<i>unless you carry with you the dogmatic premise that of course humans have minds</i>

All I know for sure is that I have a mind, and that the bomb-poking robot doesn't (because it is possible to examine it, and its programming, and see that its actions are entirely deterministic: there is no thought going on, merely programmed reactions to stimuli).

I can't take another human apart to determine whether they are merely following pre-programmed responses to stimuli or actually thinking like I do, but it seems fair to give them the benefit of the doubt.
Data, however, seems in construction to be no different from the <i>Star Trek</i> equivalent of the bomb-poking robots, i.e. probes, spaceships, etc. The question of whether he is thinking, or merely giving pre-programmed responses to stimuli that make it look as if he is thinking, is therefore more open.

And it is the answer to that ontological question which is interesting, because the ethical question is easy to answer. Is it wrong to enslave? Of course it is.

Is Data a person who can be enslaved, or a machine which cannot? That's actually an interesting question, interesting partly because it cannot be answered simply by examination of utterances. To answer it fully you'd have to be able to take him apart and examine whether he was thinking or simply acting out his deterministic programming; but of course one can't take him apart until one has answered the question, so it's a bit chicken-and-egg...

Adam Riggio (2014-12-15 11:50):

But the content of our utterances is useless to determine the quality of their origin when talking about humans too. That's the heart of Picard's examination of Maddox at the end of the episode. If you're going to take a skeptical attitude toward one creature's utterances, it's a problem, because humans don't pass that test either, unless you carry with you the dogmatic premise that of course humans have minds (whatever those are).

The point is not whether Data (or we) have any mind over and above our perceptual abilities and sense of past and present selfhood. He's already told us what he is, we're able to listen, and that's all that counts.
<br /><br />When it comes to the counter-example of probes, I can only refer you to Iain Banks and not Star Trek (unless you want to count that early Voyager episode with the sentient missile). But I'd rather wait and see what JM has to say about that when he gets there.Adam Riggiohttps://www.blogger.com/profile/14606510835439580828noreply@blogger.comtag:blogger.com,1999:blog-8708273719674528189.post-70711479153214216192014-12-15T11:37:57.786-08:002014-12-15T11:37:57.786-08:00The episode's trial is about whether Starfleet...<i>The episode's trial is about whether Starfleet has the right to enslave Data. The protagonists quickly think that this is about proving whether Data is a machine. </i><br /><br />Which it is. Because machines cannot be enslaved. <br /><br />If Data is a machine, then whatever the Federation does to him, he is not enslaved, because machines cannot be enslaved. The Enterprise, after all, is full of probes and whatnot that it merrily fires into anomalies, where they are often destroyed, and nobody wonders if they were 'enslaved'. <br /><br />Such probes are presumably capable of detecting when the conditions they are about to be sent into might result in their destruction, and it's conceivable that they may well report this back to their operator, if only to ensure that the operator does not accidentally destroy a probe through underestimating the conditions of operation. <br /><br />If the operator then overrides the probe's report of conditions that could lead to its ceasing to function and orders it in anyway, how is that, if Data is a machine, different from ordering him to do the same?<br /><br />If Data is qualitatively the same as a probe (albeit much more complicated) then the condition of 'slave' is meaningless to imply. <br /><br />Clearly it is wrong to enslave. 
So the only question to be answered is: is Data a type of thing which is capable of being enslaved, or is Data a machine, and so not capable of being enslaved?

(What Data himself says is irrelevant to the matter, as the whole point at issue is whether the things Data says proceed from a real process of thought or are merely products of a machine's programming, and the content of his utterances is of no use in determining their quality of origin.)

Adam Riggio (2014-12-15 10:57):

You miss the point of the ontological question being impossible to decide. Even in the episode, all they get to is sentience and self-awareness, even for the humans in the courtroom, which is enough (over and above whatever "mind" is) to accept our ethical obligations to Data. Whatever the answer to the questions of mind's ontology (or even its existence beyond those functions), it's actually immaterial to the ethics.

So when our machines have enough of a self-conception and will to existence as organisms that, when we order them into hazardous situations, they ask us that they not go, we have machines capable of being enslaved. Check out my comment on Vaka Rangi's "Elementary, Dear Data" post about Levinas's and Buber's conceptions of ethical obligations as grounded in the calls and requests of others.

The episode's trial is about whether Starfleet has the right to enslave Data. The protagonists quickly think that this is about proving whether Data is a machine. But Picard and Guinan realize that the root question of the trial is whether Starfleet will give itself the right to enslave at all.
SK (2014-12-15 08:53):

<i>And the best part is, these ontological arguments are thrown out entirely for an ethical question. If we had an army of androids available to do terrifyingly risky tasks and labour, to be treated as disposable people, they would essentially be slaves, demarcated by their race: android. For people who know the horror of slavery, creating such a system with full knowledge of what you were doing would be intolerable in the deepest sense.</i>

The ontological arguments can hardly be 'thrown out', because the ethical question rests on the answer to the ontological question. If the androids do not have minds, then building an army of them to do dangerous tasks would no more be slavery than it is today when we build self-propelling machines to go into situations deemed too hazardous for humans, such as investigating suspected bombs.

Adam Riggio (2014-05-11 10:17):

In the light of my own thoughts and memories of the episodes (both those recently re-watched and the hazier memories), I see those lines as indicating how Data as a character expressed some of the anthropocentric tendencies that crept into TNG, despite its having overcome them so frequently.

Data is superior to humans in many ways, both in the more subtle ethical ways I described and in his advanced perceptual and physical abilities. But his lack of emotional expression always hindered him. As I've grown older, I've found the expression 'becoming more human' a depressingly limited phrase.
Data was created in the image of a person (hence the meta-textual jokes of having Brent Spiner play all the Soongs, as well as Noonien's three androids), so he always had a special relationship with humanity. But really, Data felt that, because of his limited emotional experience, he was incomplete. That he expressed it as a desire to be human was a philosophical shortcoming of TNG.

Anonymous (2014-05-10 17:12):

Great post, man! Data is totally more human than human.

Anonymous (2014-05-10 16:04):

A wonderful post. Wish I had something that personal for "The Measure of a Man".

Though I do wonder what you'd make of this: when Data meets Riker for the first time in "Encounter at Farpoint", this exchange happens:

Riker: "Do you consider yourself superior to humans?"
Data: "I <i>am</i> superior, sir, in many ways. But I would gladly give it up, to be human."