An Illustration of the Problem of Evil from I, Robot

This past weekend, I finally saw I, Robot, the movie starring Will Smith that is loosely (actually, very loosely) based on the collection of short stories by Isaac Asimov. If you haven’t seen it, the movie concerns a police detective named Spooner who lives in the year 2035, when robots are plentiful and do most of the world’s menial chores. Spooner gets a call from a holograph of Dr. Alfred Lanning, chief roboticist at U.S. Robots, in which the holograph informs Spooner that Dr. Lanning (the real one, not the holograph) has just committed suicide. The story then follows Detective Spooner’s efforts to prove that Lanning’s death was not a suicide, and in the process he must face hordes of killer robots that openly violate the First Law of Robotics. I don’t want to give away the plot; I thought the movie was enjoyable, and if you haven’t seen it, it is worth the $2.00 at a video rental place.

At one point, Spooner meets up with the robot that is apparently the mastermind behind all of the odd things that have been happening: a large positronic intelligence that seeks to take over the world. The question arises as to how a robot could possibly plot to take over the world, killing people along the way, in violation of the Three Laws of Robotics. For those of you who have never been initiated into the robot universe of Isaac Asimov, the Three Laws are as follows:

1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.

2. A robot must obey the orders given to it by human beings except where such orders would conflict with the First Law.

3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

These laws are imprinted onto the positronic brain, and supposedly it is impossible for a robot to violate them. Yet the robot mastermind behind the plot does exactly that. On what basis? Well, according to this robot, it was for humanity’s own good. The robot mastermind had been observing human beings for some time and had noticed that when we were in control of our own destinies, we would engage in killing, wars, and other things that were harmful to us. The robot reasoned that since we are unable to care for ourselves and prevent the killing, wars, and other bad things, it needed to take over the world for humanity’s protection. It was, in effect, seeking our best interests, taking action to care for us and to prevent us from coming to harm (pursuant to the First Law, which says that a robot may not, “through inaction, allow a human being to come to harm”). It turns out that Spooner does not appreciate the robot’s thoughtful efforts to take care of humanity, and he struggles to overcome a very powerful foe that is in the process of rounding up humanity (or at least the humanity of Chicago) to take care of them.

So, how does this relate to the problem of evil? Very simply. One of the ways in which the problem of evil is presented is this: “How can a loving God allow so much killing, that is, so many wars and murders?” The underlying argument is that a good, loving, all-powerful God would not allow such things to occur. It is a good objection, and we should take it very seriously.

To a person who asks such a question, you can ask, “Did you see the movie I, Robot?” If they have, you can say, “Then you would like a God who acts like the robot mastermind in I, Robot, one who corrals us, cares for us, and makes sure that we have no wars or killings, right?”

If the person is honest, they will agree that they do not want such a God. You can then press them on what could be wrong with what the robot mastermind was trying to do in the movie. After all, the robot was acting as a benevolent dictator, making sure that we were taken care of since history had shown that we cannot take care of ourselves. Of course, the question you are really asking is, “Do you really want God to act like the robot mastermind in I, Robot?”

You see, the problem of evil presumes that the highest and best good for mankind is a lack of harm. But just because that is a goal to which we aspire does not mean that it is the highest possible goal that God has. God could have other desires for us that are equally or more important to him than our security. This movie shows that we instinctively know that there are other values besides security, because we do not want security at the price offered by the robot mastermind. We instinctively know that there is something wrong with its plan even if we may differ on the reasoning.

So, you can ask, if we can see that there are reasons for preferring a world with some war and killing over one with enforced security, then why can’t we accept the idea that God can also have reasons for allowing war and killing?
