Thursday, April 21, 2016

Artificial Intelligence and Relationships

People have long explored the idea of having a romantic relationship with an artificial intelligence.  Recent films like Her and Ex Machina depict what falling in love with a robot would entail, and various Star Trek episodes have shown characters developing feelings for holograms in the holodeck.  We could see this kind of thing happening in real life very soon.  For those of you interested in reading more about lessons from fictional relationships, feel free to check out my friend Tianna’s blog on the subject.

I want to make a distinction between two kinds of love: eros and agape.  Eros is love as a noun: the feeling of infatuation and the desire to be with someone because of how that person makes you feel.  Agape is love as a verb; it is selfless and self-sacrificial, and it is the foundation of strong marriages.  There are other conceptions of love, each with its own Greek word, but I think distinguishing these two for our purposes here highlights some interesting issues about the prospect of “falling in love” with robots or any other kind of artificial intelligence.

One can love a robot in the eros, or noun, sense of the word, but not in the agape, or verb, sense.  A robot can love someone in neither sense.  When science fiction depicts humans falling in love with robots, it is only in the noun sense.  When an artificial intelligence can mimic a human well enough to pass the Turing test, and the kind of human it mimics happens to be attractive, it makes sense that one could feel an attraction to it.  It is at the verb level that a problem arises.  One cannot truly love a robot in this sense because a robot cannot receive one’s love.  You can no more act self-sacrificially for a robot than you can for a chair.  Actually receiving agape love requires a recognition that goes beyond merely acting as if there is recognition.

The Chinese Room argument, developed by the philosopher John Searle, illustrates why this is the case.  Imagine a person alone in a room receiving messages written in Chinese.  The person has an instruction manual or computer program that lets them compose a response to every conceivable message, well enough to convince anyone outside that they are corresponding with a fluent Chinese speaker.  Yet the person inside still does not understand the language, even though it appears that they do.  The broader conclusion is that, for artificial intelligence, mimicking understanding is not the same as real understanding, and mimicking the biological processes that make us sentient beings is not the same as replicating them.
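To make the thought experiment a little more concrete, here is a small sketch in Python (my own illustration, not anything from Searle): a responder that, like the person in the room, maps incoming messages to replies purely by looking them up in a rule book.  The rule table and the fallback reply are made up for the example.

# A toy "Chinese Room": the program pairs incoming messages with canned
# replies from a rule book. It never understands the symbols it handles;
# it only matches and copies them. (The entries are purely illustrative.)

RULE_BOOK = {
    "你好吗？": "我很好，谢谢。",      # "How are you?" -> "I'm fine, thanks."
    "你会说中文吗？": "当然会。",      # "Do you speak Chinese?" -> "Of course."
    "你爱我吗？": "我也爱你。",        # "Do you love me?" -> "I love you too."
}

def respond(message: str) -> str:
    """Return the scripted reply, or a stock fallback for unknown input.

    Nothing here models meaning: the function would work just as well
    if every string were replaced by an arbitrary token.
    """
    return RULE_BOOK.get(message, "请再说一遍。")  # "Please say that again."

if __name__ == "__main__":
    # From the outside the exchange can look like fluent conversation,
    # yet the program "knows" Chinese no more than a filing cabinet does.
    for question in ["你好吗？", "你爱我吗？"]:
        print(question, "->", respond(question))

Scaling the rule book up until it could pass a Turing test would change the size of the table, not the nature of what the program is doing.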

This brings us to a deeper philosophical question, because the argument presupposes that we have souls.  I am aware that if one accepts the paradigm that our brain chemistry is all we are, then who is to say that an artificial intelligence cannot be sentient?  On that view the brain is merely one kind of computer among many.  For those who accept the soul paradigm, however, we should remember that agape love is the backbone of relationships, and practicing it shapes so many other areas of life.  A relationship with an artificial intelligence does not allow for that possibility, so we should probably discourage it.
