Thursday, March 31, 2016

When Emerging Technologies Affect the Human Psyche: Virtual Reality

The consumer release of the Oculus Rift headset presents us with an opportunity to talk about virtual reality.  This is something that I personally am very much looking forward to—trying out a first-person shooter VR video game is definitely on my bucket list.  But as with all emerging technologies, the new wave of VR presents us with opportunities for good use and abuse.

Let me first acknowledge that VR headsets have potential for many good uses.  They could be used to train pilots and surgeons and to treat phobias and PTSD.  They could also be attached to exercise bikes at gyms for a more entertaining workout experience, perhaps giving some people that extra incentive to work on their cardio.  And as with video games, there is a proper place for their entertainment value.

We should also be aware, however, of the potential concerns with commercialized VR headsets as they become cheaper, more advanced, and more widespread.  We can look at past examples to get some idea of what we might face.  Second Life is a kind of online virtual reality game created in 2003 in which players create avatars of themselves and interact with other players in ways that mimic the real world, for example, via social events or economic exchanges.  As one might expect, some players have become addicted to the game, prioritizing time on Second Life at the expense of their lives in the real world.

I would not be surprised if similar applications become available on VR headsets.  And again, while such games may not be bad in and of themselves, we ought to remind people that human flourishing can only truly take place in the real world, and that neglecting the real world can have negative consequences.

I came across a short film about VR that admittedly may seem a little too dystopian to count as an actual warning.  I do think, however, that it captures an important idea: even if not everyone becomes addicted to VR, those who do, and the problems they face because of it, still matter.
 


Monday, March 28, 2016

When Emerging Technologies Affect the Human Psyche: Weapons of War

Emerging technologies change how we relate to the realities of the world, and to each other.  Technology in warfare is no different.  Whether it was the cannon, the machine gun, or the nuclear bomb, the change expanded the horizon of our capabilities.  And since war is a human activity, the way we interact with these newfound capabilities, on a mental, emotional, and even physical level, affects its trajectory and outcomes. In a TED talk, military analyst P.W. Singer tells a story about an American explosive ordnance disposal (EOD) team in Iraq on a mission to defuse an improvised explosive device (IED).  When one member of the team got close enough to the bomb to begin to attempt to defuse it, it exploded.  But the story ended with a twist:  that team member was not a human soldier, but a robot.  No condolence letter needed to be written to any family members.

When talking about emerging technologies in warfare, whether they be drones in the sky or robots on the ground, we are dealing with two aspects of human nature that, in the context of war, must be carefully balanced.  On the one hand, we want to preserve our lives, and the lives of others.  On the other hand, the drone technology now used to that end has psychological consequences that affect behavior.  Singer said something that surprised me:  pilots who operate drones over Iraq from bases on U.S. soil had higher rates of PTSD than some units serving overseas.  There is a dichotomy of two experiences with virtually no transition between them: pilots go to their shifts, then go home for the night.  A pilot can fire rockets at real people and then be with his family a few hours later.  War becomes an experience not much different from other lines of work.  And then, of course, there is the classic question of whether it is easier to make rash decisions when one is disconnected from the violence on the ground.

There are other kinds of emerging technologies that could be weaponized in the near future when applied to robotics.  Ahead of the release of Batman v. Superman, the Film Theorists YouTube channel highlighted a method by which Batman could beat Superman without the use of Kryptonite:  scientists at UC Berkeley created robotic muscle fibers that are a thousand times stronger than human muscles.  My question is, how might this change the psychology of war if we begin to use this technology in robots on a regular basis?  Presumably, there is great potential in its use in rescue operations, and so forth.  But if we use our technology as an extension of our capabilities, to what other uses might we put it?

This stuff might seem more distant from our daily lives, but there are real policy impacts to examine.  What do you think?  How ought we to approach emerging technologies in warfare going forward?

Thursday, March 24, 2016

Three Common Themes in Sci-Fi

Science fiction stories can be entertaining, but they can also tell us a lot about ourselves.  Here are three themes that often show up in science fiction.  Click on the link to read more about how they relate to being human.  Also, tell me if you think I am missing something crucial.

1)   Space Exploration

There’s something exciting about traveling at warp speed in the U.S.S. Enterprise, traveling to Jupiter aboard the Discovery One, traveling through the wormhole in the Endurance, or experiencing the beauty of Pandora.

2)   Artificial Intelligence

“I’m sorry, Dave.  I’m afraid I can’t do that.”  The classic line epitomizes our fear of A.I.-gone-crazy.

3)   Playing God

Mary Shelley’s Frankenstein is subtitled “The Modern Prometheus,” and today we continue to tell modern Frankenstein tales.  At what point does human artifice cross the line?

Playing God or Playing Creation?

Christians recognize that God has given us a certain creative freedom with creation.  We take natural resources and build things out of them.  The entire world reflects what good and bad can come from that.  But every once in a while, science-fiction reminds us that such pursuits have consequences.

The Frankenstein archetype warns against “playing God,” against attempting to bring under our control what nature seems to have placed outside of it.  In struggling to equip ourselves with the ability to reach that point, we neglect to equip ourselves with the ability to handle the consequences.

One more modern rendition of this that comes to mind for me is Rise of the Planet of the Apes.  Scientists’ attempt to master just one aspect of the human genome leads to disastrous consequences for the entire planet.  So maybe the apocalypse is a bit of an extreme result to expect.  But parallels need not be literal.

I wrestle with this question.  I feel like modern medicine reflects the idea that God has given us the ability to harness our resources to promote human flourishing.  But when does it become “playing God”?  In an earlier post, I talked about Michael Sandel’s concerns regarding genetic engineering.  I wonder if his concerns provide an example of a framework for distinguishing between the two.  But even that involves nailing down the essentials of human nature.  I dealt with that in my first post, but I can’t claim that list as definitive or exhaustive.

What Stories Tell Us about Artificial Intelligence

Sometimes I wonder if our fear of A.I. is based more on science or on what we tell ourselves about it.  HAL 9000, Ultron, Aria, and Skynet have painted a picture for us that depicts one inevitable outcome: A.I. goes crazy and tries to kill us all.  But there is another side to the coin.  TARS, in Interstellar, and the computer in Star Trek, never pose any danger.  So which depiction is right?

There are really two questions we face with A.I.  First, how much can a computer mimic a human being?  Second, what kind of human being will it imitate?  I’m no A.I. expert, but it seems the central problem lies in the mechanisms of imitation themselves.  Just today, a story came out about how Microsoft’s A.I. chat bot started mimicking racist and anti-Semitic language on Twitter, presumably because it encountered it from others.

The question becomes:  can we come up with an algorithm that mimics a kind of moral discernment, one that says “this kind of sentiment is bad; avoid imitating it”?  Will such an algorithm be enough?  If moral discernment is where the line is drawn between an A.I., no matter how advanced, and a human being, then perhaps we should heed warnings like that of Stephen Hawking.
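The idea of such an algorithm can be sketched at its most naive level: a filter that screens learned phrases against a blocklist before echoing them back.  This is purely a toy illustration (the names FLAGGED_TERMS, is_acceptable, and imitate are invented for this sketch, not any real system); if anything, its crudeness hints at why genuine moral discernment is far harder than keyword matching.

```python
# Toy sketch of a "this sentiment is bad; avoid imitating it" screen.
# FLAGGED_TERMS stands in for a curated list of unacceptable language;
# real moderation systems are vastly more sophisticated than this.

FLAGGED_TERMS = {"badword"}  # hypothetical placeholder entry

def is_acceptable(utterance: str) -> bool:
    """Return False if the utterance contains any flagged term."""
    words = (w.strip(".,!?") for w in utterance.lower().split())
    return not any(w in FLAGGED_TERMS for w in words)

def imitate(learned_phrases):
    """Repeat only the phrases that pass the screen."""
    return [p for p in learned_phrases if is_acceptable(p)]
```

The obvious weakness is also the point: a blocklist captures words, not sentiment, so a chat bot could still absorb hateful ideas expressed in polite vocabulary.  Whether an algorithm can ever capture discernment itself, rather than its surface markers, is exactly the open question.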

What “The Final Frontier” Means

Human beings have always been explorers.  History is full of examples of peoples moving from one place to another, often facing perilous risk in the process.  The motivations are eclectic:  prestige, money, mission, curiosity, freedom, or the thrill.

Now that each continent has been explored, we have two options: the ocean, or space.  I will admit, I share the perplexity of those who wonder why we are not exploring the ocean more.  That aside, “the final frontier” allures because humans are explorers.  The frontier represented the call of the unknown.  But unlike land on earth as we perceive it today, space is vast.  The potential is nearly endless.  NASA and SpaceX will find plenty of thrill seekers to volunteer to go to Mars.  But others see potential for scientific discovery, the benefits of which could surpass what came from the space race.

Of course, whatever we do, we should do it with a degree of caution.  The expeditions of history posed not just logistical questions but ethical ones as well.  We will face a new set of those questions when we finally travel beyond the moon.

Monday, March 21, 2016

A Look at Genetic Engineering

During debate practice last week, we had a discussion about Harvard Professor Michael Sandel’s article in The Atlantic called “The Case Against Perfection.”  It’s an older article (from 2004) but the issues he raises regarding genetic engineering are all the more relevant as these technologies become more advanced and more prolific.  I’ve included the link below, but I will summarize it here:

Sandel discusses four areas in which scientists are developing capabilities not just to correct for human biological deficiencies like disease, but to improve upon what is already normal and make us “better than well.”  The first is physical strength: a gene therapy exists that can strengthen healthy muscles in mice.  If applied to humans, genetic enhancement could become the alternative to things like steroids.  Second, a similar gene therapy exists for enhancing memory in mice, and human applications are easy to imagine.  The third is growth hormones, currently available to those who fall in the bottom one percentile in height; the question then becomes, what about those who are already considered average or tall?  Finally, there is sex selection, which need not be conventionally controversial—one method involves merely filtering X-bearing vs. Y-bearing sperm—no embryo destruction necessary.

After discussing common ethical objections to these practices that he believes are ultimately insufficient, Sandel argues that such practices are problematic ultimately because they are “a way of answering a competitive society’s demand to improve our performance and perfect our nature.”  There are two components to human performance: 1) natural talent (gifts), which is normally outside of our control, and 2) effort, which we can control.  Genetic enhancement brings human giftedness under our control.  In this way, it resembles eugenics.  Sandel argues that the absence of coercion is ultimately irrelevant.  He offers both religious and secular grounds for concern: “[t]o believe that our talents and powers are wholly our own doing is to misunderstand our place in creation, to confuse our role with God's . . . If bioengineering made the myth of the "self-made man" come true, it would be difficult to view our talents as gifts for which we are indebted, rather than as achievements for which we are responsible.”

Sandel then outlines practical consequences of this mentality:  first, parents’ ability to choose the traits of their children removes “openness to the unbidden”—the love for a child that accepts them for who they are.  Second, failures always become a product of our genetic choices, never of natural weakness.  Third, it removes the basis for solidarity in forms of charity and generosity: normally, natural talent is a factor in one’s success that lies outside of one’s control, unlike effort and hard work.  But if even natural talent is the result of human agency and choice, then the basis for recognizing that some factors are outside one’s control is diminished.

I was at church recently when Tim Keller was giving a sermon about social justice, and he coincidentally made a similar point:  If we recognize that what we have is a product of what God has given us, including our natural abilities, it is easier to understand why God would want us to be generous with the fruits of our labor, because it is ultimately derived from Him.

Is Sandel right?  Or is our reaching this technology just another product of what God has given us?  That is the attitude we take toward modern medicine.  Even vaccines are designed simply to prevent what is less than optimal.  But does the “better than well” threshold change the picture when it comes to genetics?  Another question is what happens when others don’t see it that way.  Is it reasonable to be concerned that social solidarity will be hurt by this?

Or is the problem rooted in society’s increasing demands on people’s performance?  This is a whole other topic of discussion in and of itself, but I find the connection fascinating.

What do you think?  Sound off in the comments below!

Sandel, Michael J. "The Case Against Perfection: What's wrong with designer children, bionic athletes, and genetic engineering." The Atlantic. Atlantic Media Company, Apr. 2004. Web. Mar. 2016. <http://www.theatlantic.com/magazine/archive/2004/04/the-case-against-perfection/302927/>.

Sunday, March 13, 2016

Seven Ways to Describe Human Beings

As I discuss in the "about" page, this blog aims to ask questions about humanity’s relationship with technology from a Christian perspective.  But before we can dive down the rabbit hole, we need to ask the question—what is human nature?  What actually makes us human?

To begin the discussion, I’ve compiled a list of seven ways to describe human beings.  No doubt this list is not exhaustive, nor do I expect each element to go uncontested.  If you think an essential element is missing, or would alter what I’ve said about one of these elements or would remove one entirely, let me know in the comments.  My goal is to start conversations; I certainly do not have all the answers.  After all, I’m only human.

1) Created in God’s image

There are different interpretations of this phrase from Genesis 1:26.  I think one good explanation, based on the dominion mandate that follows, is that humans have a creative and productive capacity to use the earth’s resources for good ends.  In secular terms, humans are “rational animals” who must think about and plan how to do so in the best way possible.

2)   Homo sapiens

This is perhaps the most straightforward, but by no means least controversial, item on this list.  Our biologically distinguishable characteristics are rooted in our genetic makeup.

3) Structurally good, directionally flawed
 
In his book Creation Regained: Biblical Basics for a Reformational Worldview, Albert Wolters distinguishes between the “structure” and “direction” of creation.  Structure refers to how God originally intended creation to be, and direction refers to the extent to which it conforms to his will.  Christians recognize that Genesis 1 affirms creation’s, including humanity’s, original goodness, but also that human nature is corrupted by sin and the fall.  As such, society as a whole, Christian or otherwise, recognizes to some extent that there is a difference between behaving like a “good” person and like a “bad” person, between virtue and vice, even if we disagree on the location, shape, and rigidity of that line.
 
4) Multi-dimensional
 
Human beings are composed of physical, mental, and emotional components—needs, wants, and motivations.  A comprehensive account of human behavior, in whatever context, takes all three components into consideration.
 
5) Communal
 
The cliché is that human beings seem wired for community with others, via family, friendships, romantic relationships, mentor-mentee relationships, educational relationships, economic relationships, etc.  These various forms of community can correspond to the three components from #4.
 
6) Heterogeneous

Human individuals have different tastes, preferences, priorities, beliefs, convictions, worldviews, talents, and abilities.
 
7) Having a soul
 
It’s often asserted that humans have souls, but less often actually argued for or defined.  It intuitively makes sense that we have souls, but I’m curious—what is actually the best argument for or definition of the human soul that you have heard?
 
I look forward to the discussions on technology and humanity that will follow.  Thanks so much for joining me, and be on the lookout for future posts!