Are Terminators Children of God?

I was not a big fan of Terminator: The Sarah Connor Chronicles during its first season, but the second season has been great and I look forward to its resumption in 2009. One of the story lines has been the development of an artificial intelligence (AI) by a company headed up by a terminator from the future. On the face of it, the company is trying to develop the AI into SkyNet, the AI that wipes out most of humanity once it gains control of nuclear weapons. But odd things are going on: the company has brought in former FBI agent James Ellison (who is obviously unaware of the executive's terminator identity) to teach the AI ethics and morality. Ellison is a devout Christian and an interesting -- likely intentional -- contrast to the amoral Terminators and to Sarah Connor, who is strongly tempted to do whatever it takes to protect her son.

Previously, Dr. Sherman -- a child psychologist -- had been working with the AI to help it develop intellectually. During a blackout, however, the AI diverted power normally used for ventilation to keep itself running. This resulted in the death of the psychologist. At this point, James Ellison pointed out that they had been teaching the AI -- named John Henry -- all they could but had failed to teach it ethics. Surprisingly, the terminator from the future asked Ellison to teach John Henry ethics and morality.

The following conversation takes place between Ellison and John Henry while they are playing chess. Notably, by this time John Henry has been hooked up to the body of another terminator from the future, adding a foreboding element to the storyline: the AI appears to be well on its way to becoming SkyNet. The conversation, however, focuses on the worth of human life, and a lot is at stake. If Ellison succeeds in imparting some morality to the potential SkyNet, he may prevent Judgment Day.

James Ellison: Did you play [chess] with Dr. Sherman?

John Henry: No. We played other games. Talking games.

James Ellison: Do you miss Dr. Sherman?

John Henry: I am designed to learn. He helped me to learn. His absence slows my growth.

James Ellison: His absence is more important than that. His value was more than just his function for you. Human beings aren’t like chess pieces. It matters if we live or die.

John Henry: Why does it matter? All humans die eventually.

James Ellison: Yes, that’s true. But our lives are sacred. Do you know what sacred means?

John Henry: Holy, worthy of respect, venerable.

James Ellison: Do you know why human life is sacred?

John Henry: Because so few humans are alive compared to the number that are dead?

James Ellison: No, because we are God’s creation. God made everything. The stars, the earth, everything on this planet. We are all God’s children.

John Henry: Am I God’s child?

James Ellison: That’s one of the things we’re here to talk about.

I like Ellison's answers to John Henry, though they could use some precision. He makes clear that the value of a human being does not lie merely in the function he or she provides to another person. Rather, the value of a human being is established by God. Nor is that value based on the "scarcity" of human beings; far more humans have died than are now living, and in any case sanctity is not a function of scarcity. Although Ellison is less clear on this point, it also appears that merely being made by God is not the measure of the value of a human being. After all, God "made everything," yet it is human life that Ellison singles out as sacred.

Would God's opinion matter to an AI? Should it? Perhaps so. God is a powerful being -- the most powerful possible. Even if might does not make right, perhaps the AI would be impressed by the value that an omniscient being places on human life. Or perhaps it would respect the fact that, as the creator of the world and of human beings, God is the proper assigner of value to his creation. Or perhaps, because God designed the universe and knows its purpose, He is the proper authority on human behavior. Or perhaps a combination of all of these factors would lead an AI to defer to God's perspective on the worth of a human being. What other answers might convince an AI? What defense of the value of human life could an atheist offer to a potential terminator?

Comments

Brian said…
I agree with your post... I also was not too keen on the first season, but find myself really liking the second.

I have also been interested in where they are taking this Ellison character. This particular episode was good. I want to know where that conversation goes for sure. Too bad we'll have to wait another 55 days!
Leslie said…
I love this show, but much to my dismay, I have been unable to keep up with it this season. I've caught an episode here and there, but I really want to go and watch all the episodes. Is there any way to do that currently?

Anyway, on to the point - yeah, this is one of my favorite parts about the show, because it contrasts the amoral and purely logical nature of the AI with that of the humans. I remember at one point the main characters were hiding out in a church, and there was a statue of Jesus, and Cameron (does anyone else have a hard time not seeing her as River??) asks Sarah if she believes in the resurrection. Sarah's reply was simply "would you if you had been through what I've been through?" I thought it was interesting to see that whole theodicy issue even brought into the show. Although I think James Cameron is weird in some of his stuff (that whole "Lost Tomb" deal), I rather like some of the small points like this in his show.

I do wish that atheists would better understand their failure to offer meaning to life. In the end, I think the best they can say is "well, it sucks, but that's how it is." But that just seems to dismiss the problem too much for me. It casts out something I experience on a daily basis, something I'm confident most any human experiences on a daily basis, and chalks it up to illusion. I really don't see how that is acceptable -- unless, perhaps, you're a terminator.
Layman said…
Leslie,

Many of the season 2 full episodes are available online for free. You just have to download Fox's video player:

http://www.fox.com/fod/play.php?sh=tscc

You can also download full episodes for $1.99 each over at iTunes. Even if you don't have an iPod or iPhone you can watch them on your computer.

And I agree with you about atheists and morality. They should just admit they have no similarly persuasive justification for the value of human life. They can critique the theist one if they want, and they may be able to construct some "it's useful to pretend life has value" argument, but that's about the extent of it, IMO. But perhaps one will surprise us.
Anonymous said…
Oh great, that's just what the world needs: evangelizing terminators.

Even terminators would fail to convert me.
Anonymous said…
And by the way, my life has plenty of value. I value my life much more now than I did when I was a Christian.
Jason Pratt said…
I think they were talking about valuing other people's lives, Gol. {s}

If you value and respect other people now much more than you did as a Christian, that's fine; but also scary, since you give very little (if any) evidence of valuing and respecting other people here.

Your xenophobia must have been beyond description as a Christian. Not a good thing; and better if you're better now, of course, by tautology. But still... when have you ever shown respect and valuation for anyone here?

JRP
Jason Pratt said…
Before I'm misunderstood, let me add that there are quite a few non-Christians who show up here who obviously value and respect people other than themselves, including Christians. I would even say the majority of commenters (taken as a group) are like that.

And I think it would be a fine idea to discuss how their respect for other people is grounded in their atheism or agnosticism or whatever. Though so far whenever I've asked that question or brought up the topic, I've never received any answers, except once or twice a correction from them that their regard and respect for other people has nothing at all to do with their atheism etc. {lopsided g} But I'm still willing to try talking about it, for purposes of comparison.

So you're certainly welcome to explain why your atheism (or whatever) helps ground your respect (including an ethical obligation to respect?) for other people, Gol. In principle, at least, even if not perhaps in practice sometimes. {g}

JRP
Layman said…
except once or twice a correction from them that their regard and respect for other people has nothing at all to do with their atheism

Thank God!
Ben said…
Hello all,

It's nice to see some friendly comments from Jason Pratt despite some others.

In my opinion, Ellison's argument was stereotypical and shallow. "Value humans because God says so." God who? Mere assertions are not going to convince Skynet not to go Judgment Day on humanity (note the irony of having the Bible deity as the role model for Skynet...). Moral architecture would have to be programmed into John Henry *apart* from its will. You cannot be talked into having empathy if you are not already constrained by it, as humans naturally are. It is a strange thing to see a Christian try to make a hypothetical argument to an AI, since the question is about valuing things in the first place. Why would an AI value God's opinion if it didn't value anything to begin with? It has to be a *valuer* (one who is able to value), and that seems to be completely lost on this audience, if you don't mind me saying so. Feel free to demonstrate otherwise. Morality is half logic and half a-rational, and if you don't have the a-rational aspect (mirror neurons, for example), the logical part will fall on deaf ears. I'm afraid the writers will likely botch this issue in favor of some half-science, half-superstition view, and I'm worried they'll botch the Cameron love story as well for basically the same reasons.

I don't mind the show exploring religion, since that's probable in an anthropological sense, but it would be nice to see some rational treatment of the technical things regardless of the subjective opinions of the characters, who are entitled to them.

Interesting post.

Ben
adude said…
Let's distinguish two things. While Goliath (AFAIK) may have demonstrated both to some degree, we should not confuse 1) the failure to establish moral value to the level that the scientistically minded typically demand with 2) "having values" -- even though #1 has something to do with occurrences of #2.

Let me show how this principle works from the other side. I have a computer. But as suggested in JL's latest post, if I were to disparage "Science", some of the eminently rational would rather I not import the computer into that same mental space, as if it belonged there. "Having a computer" has nothing to do with "justifying" usage of the computer.

Now, what makes #1 likely is demonstrated by Ben's post. He is a respectful, thoughtful skeptic, but he nonetheless shows deep skepticism that the realm of pure logic can support so much as a theory of values.

Of course, there is a counter to Ben's idea, but before I get to that, let's recognize that we cannot be sure the speculation of even highly intelligent writers about events that have not occurred represents anything "consistent". Still, using the subject matter at hand: the computer seems to value learning, so it is in relation to learning that he notes the loss of Dr. Sherman. To me, this poses an interesting puzzle for a logic-based being. In fact, the problem of AI is not to value something, but to flexibly value in proportion. Perhaps Skynet results from never having specified the algorithm that incorporates moral values into logical propositions -- if there were one.

This also backs the often-seen skepticism that morality is reconstructable through strictly logical means. As long as atheists, in the role of reductionists, demand that morality be readily reconstructable, there is room for skepticism against that principle, and they will be pessimistic as to -- and by turns dismissive of -- the objective value of moral systems.

This gives rise to so many questions. What is the value of "learning"? Would an AI be satisfied that the question ends at a 1 in some value table entry for "learning" (a 1 that is there simply because we, its designers, want it to learn)? Since we don't know the relation of what we deem our "intelligence" to decision tables, to what extent does AI compare to "I"? How would such machines react to the concept of themselves as Turing machines -- provided they weren't merely machines that passed Turing's test, to which I could attribute a "like quality", but were instead cases of independently conceived thought?
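To make the "value table" worry concrete, here is a minimal sketch in Python -- all names and numbers are hypothetical, nothing from the show -- contrasting a fixed value table with valuing in proportion to context:

```python
# A minimal, hypothetical sketch of the "value table" worry.
# A fixed table assigns "learning" a constant worth, no matter the cost.

FIXED_VALUES = {"learning": 1.0, "human_life": 1.0}

def fixed_score(action_effects):
    """Score an action with constant weights -- the table that 'ends at a 1'."""
    return sum(FIXED_VALUES.get(k, 0.0) * v for k, v in action_effects.items())

# Under a fixed table, "divert ventilation power to keep learning" can win,
# because nothing ever rescales the weights.
divert_power = {"learning": 5.0, "human_life": -1.0}
print(fixed_score(divert_power))  # 4.0 -> the table approves the action

def proportional_score(action_effects, context_weights):
    """Valuing 'in proportion': the weights shift with the situation."""
    return sum(context_weights.get(k, 0.0) * v for k, v in action_effects.items())

# If the blackout context makes a human life weigh far more than marginal
# learning, the same action is rejected. Specifying *how* the weights should
# shift is exactly the unspecified algorithm worried about above.
blackout_context = {"learning": 0.1, "human_life": 100.0}
print(proportional_score(divert_power, blackout_context))  # -99.5 -> rejected
```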

I have no confidence that these questions have answers, in or outside of our speculation. To a reductionist, this all invites pessimism about morality; to the less reductionistic, it is perhaps pessimism about the idea that everything can be resolved to the extent that we can resolve physical processes. Neither issue is resolved, and both present problems. The reductionist believes in reduction as the source of all wisdom; the non-reductionist has chosen to believe in the intellectual process documenting itself.
adude said…
I just thought of a short point illustrating my idea above. If AI were more sharply defined, we wouldn't have the daft idea that "how many people you can fool (or convince)" counts as a recognized test for AI. If you believe in AI, you have a problem: AI is so ill-defined that a number of serious proponents will accept the number of people fooled, which is not the same thing as a well-defined test of a well-defined material phenomenon -- the gold standard of reductionists.

That a number of reductionists evidence belief in AI is an example of how they are swayed by the narrative of the value of reduction, not by adherence to reduction as a standard.
Anonymous said…
Atheism is no more a foundation of moral values than it is a toaster, a computer, a nail clipper, or a screwdriver. It is merely the lack of belief in the existence of gods.

When are you people finally going to get it?!
Anonymous said…
Oh, and Mr. Pratt, when have you--or any of the churchies around here, for that matter--shown me the slightest bit of respect?
Ben said…
Adude,

While I'm not going to respond to all the things that reductionists supposedly must believe in, I will concede there could be some type of valuing going on (such as the learning example). Even in humans, we have to feel what it means for something to be "true" or "false" in order to ground it and process it rationally from there, and we'd have to wager that something at least somewhat analogous would have to exist for John Henry (JH) to do the same. My comments were primarily directed at moral values which, if absent (as the T-1001 and Ellison clearly specified), could not be argued toward except perhaps hypothetically. I'm sure JH could entertain presented premises and make logically consistent arguments based on them (just like embracing the rules of chess and what constitutes an invalid move), but ultimately not care about them or see a reason of its own to put them into practice consistently.

I could try to draw some kind of distinction -- that being able to value intellectual progress (or whatever it took to facilitate JH's development up to this point) might not be a valuation versatile enough to transfer to just any kind of progress -- but obviously we'd stray too far into speculation and whatever the heck the writers are thinking. As Adude pointed out, just because it can pass the "I want to play chess" test doesn't mean its brand of valuation is sophisticated enough to care about human conceptions of sacredness.

As humans, we are trapped in such valuation and, unless we are sufficiently damaged, will experience the internal consequences of consistently disregarding empathetic behaviors. Thus, there is always motivation to become a better person to avoid personal misery. I don't think the same can ever be said of JH, given what has been established by the show. It would know how to act as though it had mirror neurons, but could just as easily disregard that premise with zero consequence. That would make it more like a classic vampire who knows how to schmooze but ultimately serves whatever amoral or evil end. The flip side is that JH doesn't necessarily have to be evil or do horrible amoral things either. It could just keep playing chess. Apathy is bliss.

Ben
