Thursday, April 28, 2016

A Lesson from Frankenstein

The Frankenstein myth, like Prometheus before it, is about attempting to transcend human limits, “playing God,” and the use of artifice driven by hubris.  Such a legacy evokes caution as we pursue technology today.  I have echoed this caution throughout this blog.  But I have also acknowledged that such caution does not happen in a vacuum.  Technology can be used for good or for ill.  I think that a nuanced look at the Frankenstein myth illustrates this.

Victor Frankenstein’s pursuit, one could argue, was not inherently bad.  At face value, he created life.  It was a fictional case of begetting, if you will.  Frankenstein’s monster was essentially human.  He was sentient, intelligent, and capable of learning and of kindness.  It was when Frankenstein neglected to treat him as human that things went south.  The monster was also in a formative state, and parental neglect and societal rejection both contributed to the monster’s choices later in the novel.

I think that Frankenstein provides an analogy for our own pursuit of new technology.  As we pursue it, we must not forget the human condition.  Our pursuits should neither ignore it nor assume that it can be entirely overcome.  Whether it is genetic engineering, drones, virtual reality, space exploration, artificial intelligence, or energy, an acknowledgement of the human condition should temper and orient our ambitions.  It should help us recognize both potentials and limits.  Things are far more likely to go wrong when we ignore it.

We can create great things that can benefit many people. But we ought to avoid falling into the trap of neglecting the human condition as we do so.  Let’s learn from Frankenstein so that we may create not monsters, but masterpieces.

Monday, April 25, 2016

Renewable Energy: A Q&A with Dr. Hershey

Dr. Joshua Hershey is an Assistant Professor of Science and Philosophy at The King's College in New York City, and a pretty awesome professor to work for as a Faculty Assistant.  I asked him some questions about the future of renewable energy.
 
What are some emerging or potential new forms of renewable energy that people might not know much about and how would they help us, as Christians, to better carry out the dominion mandate?
 
That’s an important question. As Christians, we really ought to pay more attention to the ways in which our actions affect the environment. We know the earth isn’t our eternal home, but we also know that “the earth is the LORD’s, and everything in it” (Psalm 24:1). We had better treat His property with respect, especially since He has put us in charge of it.

Our power to affect the environment—for better or worse—has increased exponentially over the last couple of centuries, and we’re now living in an era when selfish choices might literally ruin the earth for future generations. On the other hand, we could use our God-given creativity to develop technologies that benefit, rather than harm, the earth and its future inhabitants. 

Clean, sustainable energy is a big deal. Everything we do requires energy, and we use a lot of it. Sadly, less than 10% of the energy used in the United States comes from renewable resources, according to the U.S. Energy Information Administration. And that’s an all-time high (not counting firewood, which supplied practically 100% of our energy until the mid 1800s). We need to do better.

Wind and solar power have gotten cheaper and more efficient over the last few decades, and are beginning to see widespread use. Geothermal energy is becoming more popular too. Here in New York City, a bill was recently passed requiring geothermal heat pump installations in new buildings (under certain conditions). All of these renewable energy technologies have been around for a while, so I’m not sure whether they qualify as “emerging” technologies, but we should certainly continue to develop these resources.

Ultimately, though, I don’t think these sources will be able to supply all the world’s energy needs. We need something to replace fossil fuels, which provide more than 80% of our energy in the United States and a similarly high proportion of the global energy demand. I seriously doubt that wind and solar power will ever be able to meet that demand.

Nuclear power is a promising alternative to fossil fuel, in my opinion. Although it’s technically not a renewable resource, our supply of nuclear fuel will last many times longer than our supply of fossil fuels, and it does less harm to the environment. Traditional uranium fission unfortunately does produce hazardous waste that we have to bury in a desert somewhere, but that’s still better than the emissions produced by burning fossil fuels. Moreover, there are emerging technologies that may soon provide much safer and cleaner nuclear power. For example, researchers are presently developing Small Modular Reactors (SMRs), which are significantly smaller and safer than traditional nuclear power plants. The possibility of using thorium rather than uranium as fuel is also being investigated. Thorium is more abundant than uranium, and can’t be so easily used to produce nuclear weapons.

The most exciting possibility, though, is nuclear fusion (as opposed to fission) energy. Hydrogen fusion reactors will be far cleaner and safer than any other form of nuclear energy. Hydrogen is the most abundant element in the universe, and the “waste” product of hydrogen fusion is helium—the perfectly harmless, delightful stuff that fills party balloons. There will be no risk of an accidental meltdown, since nuclear fusion reactors—unlike fission reactors—can be switched off easily. Although the technology is probably a few decades away, hydrogen fusion will be the ultimate source of safe, clean energy.
 
What are some potential downsides to any of these that might concern people?

The limitations of wind and solar power are pretty easily recognized: they don’t work when the sun isn’t shining or the wind isn’t blowing.  Geothermal energy is pretty limited too: it may help to heat or cool a building, but as a source of power (to generate electricity), it is terribly inefficient.  And renewable biofuels like ethanol and biodiesel add pollution and greenhouse gases to the atmosphere, just as fossil fuels do.

The downsides of traditional nuclear energy are well-known too. Nuclear fission reactors produce hazardous waste, use expensive uranium fuel that has to be mined (which also harms the environment), and there’s always the scary possibility of a reactor meltdown. The threat of nuclear arms proliferation is just as scary: nuclear fission reactors produce isotopes of uranium and plutonium that can be used in nuclear weapons.

Nuclear fusion, on the other hand, is a different story. The only major downside of using nuclear fusion power is that we don’t yet have the technology to do it. In order for a nuclear fusion reaction to occur, hydrogen must reach temperatures many times hotter than the surface of the sun. That’s not easy to do, and it’s even harder to contain the hydrogen within the reaction chamber. Since no solid material can withstand temperatures that high, magnetic and electric fields must be used as the “walls” of the reaction chamber. Unfortunately, with current technology, more energy is needed to produce those powerful electromagnetic fields than is produced by the fusion reaction. In other words, to keep the fusion reaction going, you have to put more energy into the machine than you get out of it.

Hopefully that will change soon, though. A new fusion reactor called ITER is presently under construction in southern France. If successful, it will be the first fusion reactor to produce more energy than it consumes, and will pave the way for increasingly efficient fusion technologies in the future.
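The break-even condition Dr. Hershey describes is commonly expressed as the fusion gain factor Q, the ratio of fusion power produced to heating power supplied.  Here is a minimal Python sketch; the “today” numbers are purely illustrative, while the ITER figures reflect its published design target of 500 MW of fusion power from 50 MW of heating power:

```python
# Fusion gain factor Q = power produced / power supplied.
# Q < 1: the reactor consumes more energy than it makes (today's situation);
# Q > 1: net energy gain. ITER's published design target is Q = 10.

def fusion_gain(power_out_mw: float, power_in_mw: float) -> float:
    """Return the gain factor Q for a fusion reactor."""
    return power_out_mw / power_in_mw

# A hypothetical present-day experiment: 50 MW in, only 35 MW out.
# (These numbers are made up for illustration.)
q_today = fusion_gain(35.0, 50.0)
assert q_today < 1  # net energy loss

# ITER's design target: 50 MW of heating power, 500 MW of fusion power.
q_iter = fusion_gain(500.0, 50.0)
assert q_iter == 10.0  # net energy gain
```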

What about market-oriented solutions?  Are any companies pursuing any of these that we can support in some way?

Next-generation fusion power is still a long way from marketability, but some private companies are already investigating the possibilities. For instance, Lockheed Martin is trying to develop a compact fusion reactor which they say will be “small enough to fit on a truck” yet “could provide enough power for a small city of up to 100,000 people.”

Meanwhile, as we wait for new technologies to arrive, let’s make the most of what we already have. Numerous clean energy solutions like solar power and wind energy are already available in many locations. If we have the option to buy our electricity from a clean energy company, or install solar panels on our homes, we ought to take advantage of those opportunities.  

We can also make use of high-efficiency technologies that reduce our energy consumption. For example, I’ve installed LED bulbs throughout my home. They’re better than incandescent, halogen, and CFL bulbs in practically every respect: they're much more efficient, they don't get hot, and they last longer.

We can enjoy the benefits of technology without ruining the planet. God may have placed us in charge of the earth, but it's still His property. Let’s leave this place better than we found it!

Thursday, April 21, 2016

Artificial Intelligence and Relationships

People have explored the idea of having a romantic relationship with an artificial intelligence for a while.  Recent films like Her and Ex Machina have explored what falling in love with a robot would entail.  Various Star Trek episodes have depicted scenarios in which characters develop feelings for holograms in the holodeck.  We could see this kind of thing happening in real life very soon.  For those of you interested in reading more about lessons from fictional relationships, feel free to check out my friend Tianna’s blog on the subject.

I want to make a distinction between two kinds of love: eros and agape.  Eros is love as a noun; it is the feeling of infatuation and the desire to be with someone because of how that makes you feel.  Agape is love as a verb.  It is the foundation for strong marriages.  It is selfless and self-sacrificial.  There are other conceptions of love, each with its own Greek word, but I think the distinction between these two highlights some interesting issues about the prospect of “falling in love” with robots or any kind of artificial intelligence.

One can love a robot in the eros, or noun, sense of the word, but not in the agape, or verb, sense.  A robot can love someone in neither sense.  When science fiction depicts humans falling in love with robots, it is only in the noun sense.  When an artificial intelligence can mimic a human well enough to pass the Turing test, and the kind of human it mimics happens to be attractive, it makes sense that one could feel an attraction to it.  It is at the verb level that a problem arises.  One cannot truly love a robot in this sense because a robot cannot receive one’s love.  You can act self-sacrificially for a robot no more than you can act self-sacrificially for a chair.  The ability to actually receive agape love depends on a recognition that goes beyond simply acting as if there is a recognition.

The Chinese Room Argument, developed by philosopher John Searle, illustrates why this is the case.  Imagine a person alone in a room receiving messages in Chinese.  The person has an instruction manual or computer program that allows them to respond to every conceivable message, enough to fool anyone on the outside into believing that the person with whom they are communicating can speak Chinese.  But in reality the person still does not know the language, even though it appears that they do.  The broader conclusion is that with artificial intelligence, mimicking understanding does not equate to real understanding.  Mimicking the biological processes underlying sentience does not equate to replicating them.
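Searle’s thought experiment is easy to sketch in code: a program that maps inputs to canned outputs by rote lookup can appear conversational while understanding nothing.  The rule book below is a toy stand-in; the responses are made up for illustration, and a real system would need a vastly larger rule book:

```python
# A crude "Chinese Room" in code: the program maps input symbols to output
# symbols by rote lookup. It produces sensible-looking replies without any
# understanding of what the symbols mean.

RULE_BOOK = {
    "你好": "你好！很高兴认识你。",        # "Hello" -> "Hello! Nice to meet you."
    "你会说中文吗": "会，我说得很流利。",  # "Do you speak Chinese?" -> "Yes, fluently."
}

def room_reply(message: str) -> str:
    """Follow the rule book; the 'person in the room' never learns Chinese."""
    return RULE_BOOK.get(message, "对不起，我不明白。")  # "Sorry, I don't understand."

# From the outside, the room appears to speak Chinese fluently.
print(room_reply("你好"))
```

The lookup table passes a (very small) Turing test while the mechanism inside grasps nothing, which is exactly the gap between mimicking understanding and having it.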

This brings us to a deeper philosophical question, because my argument presupposes that we have souls.  I am aware that if one accepts the paradigm that our brain chemistry is all we are, then who is to say that an artificial intelligence cannot be sentient?  On that view, the brain is merely one kind of computer among many.  For those who accept the soul paradigm, however, we should remember that agape love is the backbone of relationships, and its practice affects many areas of life.  A relationship with an artificial intelligence does not allow that possibility, so we should probably discourage it.

Sunday, April 17, 2016

Assessing Stephen Hawking’s Warnings about Artificial Intelligence

In an earlier post, I talked about sci-fi movies that show a fear of the malicious potential of artificial intelligence.  But it’s not just movies that illustrate that fear.  I want to revisit some of those issues, this time by starting with the real-world warnings that are out there.

You’ve probably heard about Stephen Hawking’s warnings regarding the dangers of artificial intelligence.  In an interview with the BBC he said “It would take off on its own, and re-design itself at an ever increasing rate” and that "[h]umans, who are limited by slow biological evolution, couldn't compete, and would be superseded.”  Further, according to The Independent, he said “The real risk with AI isn't malice but competence … A super intelligent AI will be extremely good at accomplishing its goals, and if those goals aren't aligned with ours, we're in trouble … You're probably not an evil ant-hater who steps on ants out of malice, but if you're in charge of a hydroelectric green energy project and there's an anthill in the region to be flooded, too bad for the ants. Let's not place humanity in the position of those ants.”  As Observer reports, a potentially problematic addition to this, according to Hawking, is that “[i]n the near term, world militaries are considering autonomous-weapon systems that can choose and eliminate targets.”

Hawking is not alone in his warnings regarding artificial intelligence.  According to the same Observer article, Elon Musk, Steve Wozniak, and Bill Gates have expressed similar concerns.  Musk has gone so far as to invest in an artificial intelligence company called DeepMind for the specific purpose of “keep[ing] an eye on what’s going on.”

As I discussed in my earlier post about artificial intelligence, moral discernment plays a role in separating humans from robots.  Many of Hawking’s warnings describe decision making devoid of moral discernment.  This brings us to the question of why and how Hawking might be right.  I again refer to Tay, Microsoft’s artificial intelligence chatbot designed to learn through interactions with millennials on Twitter.  That Tay began to espouse racist and anti-Semitic language and advocate for genocide should serve as a wake-up call: it illustrated what happens when a computer tries to mimic human behavior while it has access to samplings of our flawed nature.  It took on those flaws itself.
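The failure mode is easy to reproduce in miniature.  An imitation learner with no filter simply echoes whatever it is fed most often; this toy is a drastic simplification of Tay’s actual model, but the garbage-in, garbage-out dynamic is the same:

```python
# A minimal imitation learner in the spirit of what went wrong with Tay:
# it repeats whatever phrase it has seen most often, with no notion of
# whether that phrase is appropriate. (Toy example only; Tay's real model
# was far more sophisticated, but the failure mode is analogous.)

from collections import Counter

class NaiveChatbot:
    def __init__(self):
        self.seen = Counter()

    def learn(self, message: str) -> None:
        self.seen[message] += 1

    def speak(self) -> str:
        # No filter: the bot's output is just its most frequent input.
        return self.seen.most_common(1)[0][0]

bot = NaiveChatbot()
for msg in ["nice to meet you", "hostile slogan", "hostile slogan"]:
    bot.learn(msg)

print(bot.speak())  # the bot mirrors its worst, most repeated inputs
```

The bot has no values of its own; its “character” is whatever its loudest teachers supply.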

Now, what happens when an artificial intelligence that can “re-design itself at an ever increasing rate,” is “extremely good at accomplishing its goals,” “can choose and eliminate targets,” and begins to view humans the way we view ants is exposed to the bad side of human nature?  Sure, such a computer might not have access to the same Twitter feeds.  But it will still interact with humans.  How much will it need to observe before it becomes like Tay?  And what if each computer is different in terms of how much exposure is required before it potentially snaps?

In light of Tay, I think we ought to heed Stephen Hawking's warnings about artificial intelligence at least to an extent, and be careful that we consider the possible ramifications of what we create.

Thursday, April 14, 2016

Science and Religion: What Batman v. Superman’s Lex Luthor Tells Us about Our Relationship with Technology

The alleged conflict between science and religion can show up in unexpected places, but when we consider that we pursue technology through science, and that the technology we pursue is directed at specific ends, it isn’t all that surprising.  Today I want to discuss how this theme shows up in Batman v. Superman: Dawn of Justice and its implications.  By now, you have probably either seen the movie or plan never to see it after the reviews.  My friend Joseph wrote a review of the film on his blog about filmmaking, which you should totally check out.  He and I also made a video review, so feel free to check that out on your own time.  Beware, mild spoilers ahead.

In Batman v. Superman, Lex Luthor wrestles with theodicy.  He has come to believe in the dichotomy that God cannot be both all-powerful and all-good at the same time.  When Superman seems to threaten this paradigm, he attempts to counteract the threat by trying to disprove that Superman is both all-good and all-powerful, first by instigating the fight between Batman and Superman, and then by creating Doomsday.  Particularly with the Frankenstein-esque creation of Doomsday, the alleged conflict between science and religion manifests itself in an allegorical attempt to use science and technology to reassert control over a natural world not controlled by a benevolent God.

Here we have another reflection of the times.  In a RealClearScience article, Ross Pomeroy argues that advances in science and new ways of circulating information about those advances and their corresponding paradigms (i.e. the internet) are a primary reason for the decline of religiosity not only in the world but in the U.S. in particular.  The Age of Science is the third tier in the progress from the Age of Magic to the Age of Religion to today.  And then there are the statistics that supposedly show a correlation between atheism and intelligence and existential satisfaction.  I don’t know if Pomeroy is necessarily making a value judgement, but he is at least arguing that “science … serves as an effective substitute for [religious belief].” Science and religion are still seen as at odds rather than complementary in mainstream society.

Thus, the human attempt to subdue the natural world as a way to compensate for our own insecurities, as Lex Luthor does, is not surprising.  In a TED talk, brain scientist Tony Wyss-Coray talks about the potential for using blood to reverse the effects of aging.  In experiments in which the circulatory systems of old mice were fed young blood, the old mice showed cognitive and physical improvement.  The theory is that this has something to do with the kinds of hormonal factors that exist in young blood vs. old blood, and these factors influence the organic tissue they come into contact with.  If we can get this to work in humans, we may extend our life expectancies.  We are working towards compensating for our fear of dying and growing old.  Why?  What happens when you don’t believe in a God who values you independently of your functionality, or in life after death?  Your functionality in this life becomes the sole indicator of your worth.

Science and technology are great tools for exercising the dominion mandate.  There is nothing wrong with using them to improve quality of life.  What matters is that we remember and communicate to society when we can that science and religion are not at odds, but are complementary to each other.

Monday, April 11, 2016

The Ethics of Space Exploration

SpaceX has been in the news again lately, and for good reason.  Its Dragon supply capsule recently docked with the International Space Station, and it successfully landed a rocket on an ocean barge.  It is an exciting time for space exploration enthusiasts.  I wrote a paper a few years ago about why this is so.  I argued that private spaceflight is the new lens through which we ought to view human space travel.  Partnerships between governments and private companies are the new way of doing things because corporations are more efficient.  This has been evident ever since NASA first contracted SpaceX to deliver cargo to the ISS several years ago and spent far less money than it would have on its own.
 
All the talk you have likely heard about humans going to Mars will indeed become reality very soon.  There are many good motivations for the endeavor.  Scientific research carries huge potential; we may discover uses on Earth for Martian natural resources.  Further, accidental discoveries of various applications of research during the space race were responsible for many of the technological capabilities we enjoy today.  That will likely continue the more we pursue space exploration, especially with private companies at the helm.  We have only scratched the surface; whereas private companies have been innovating other technologies over the past few decades, private spaceflight is a recent development.  We have only seen the flip phone of space travel; the iPhone of space travel is coming.
 
As our technical capability for space exploration improves, our opportunities expand.  Naturally, this does not just yield exciting potential, but also the potential for ethical questions that we ought to be thinking about now.
 
First, what will space exploration look like on a political level?  Since governments will still likely be involved, will it be another space race, possibly with China, or will it be a more cooperative endeavor?  Both scenarios have historical precedent: while the US engaged in the space race against the Soviet Union, the International Space Station was the result of cooperation among nations.  Second, there are the logistics of the trip itself.  Many speculate that the first wave of Mars colonization will be a one-way trip, much like the pilgrims’ voyage to the New World.  What kind of people will we choose to go?  Third, who is in charge of the colony, given that multiple nations will likely be involved?  What kinds of laws will be established?  How will that affect our study and use of the planet’s resources?  Fourth, if we find life, what will we do with it?  On a highly speculative note, what if we eventually run into sentient life?  Fifth, what kinds of technology will we allow the colonists to use?  One of the core aspects of this blog has been examining the ethical issues of emerging technologies, some of which might be useful.  2001: A Space Odyssey depicts one example of this.  I think it is a legitimate question: would using artificial intelligence be safe?

Let me know what other questions you think should be addressed and considered.  As with all new technological capabilities, new and expanded forms of space exploration are something to be both cautious about and excited for, and I look forward to seeing what the future holds.

Wednesday, April 6, 2016

TED Talk-Prompted Discussion, Part 2 – Remote Brain Control?

Paul Wolpe’s TED Talk discusses many facets of bioengineering, some of which I talked about in my last post, but one he spends relatively little time on is, I think, among the most fascinating.  A scientist used a computer to study the brain waves of a monkey as it moved one of its arms.  He then connected the computer to a prosthetic arm that mimicked the monkey’s arm movements, based on the computer’s translation of the monkey’s brain waves.  They then set up a video monitor in the monkey’s cage that showed the arm in another room.  The monkey observed that the arm did whatever its right arm did, and, eventually, the monkey figured out how to control the prosthetic arm completely by itself, without moving its own arms.  As Wolpe concludes, it became the first primate with three separate functioning arms.
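At its core, the decoding step in that experiment, translating recorded neural activity into arm movement, is a regression problem.  Here is a hedged sketch with synthetic data; real decoders use many more recording channels and far richer models than this simple linear map:

```python
# Toy brain-machine decoder: learn a linear map from neural firing rates to
# arm velocity, then drive a "prosthetic" from new neural data.
# Synthetic data only; this is an illustration, not the lab's actual method.
import numpy as np

rng = np.random.default_rng(0)

# Pretend we record 8 neurons while the arm moves in 2D.
true_map = rng.normal(size=(8, 2))           # unknown neural -> velocity map
firing = rng.normal(size=(200, 8))           # 200 samples of firing rates
velocity = firing @ true_map + 0.01 * rng.normal(size=(200, 2))

# Fit the decoder by least squares (the "training" phase, where the
# computer studies brain activity alongside the observed arm movement).
decoder, *_ = np.linalg.lstsq(firing, velocity, rcond=None)

# Decode a new burst of neural activity into a prosthetic-arm command.
new_firing = rng.normal(size=(1, 8))
command = new_firing @ decoder
predicted = new_firing @ true_map
assert np.allclose(command, predicted, atol=0.1)  # decoder tracks the arm
```

Once the decoder is fit, the prosthetic no longer needs the real arm to move at all, which is what let the monkey eventually drive the third arm directly.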
 
As with many of the subjects his TED talk covers, Wolpe moves on after that.  But I want us to pause and think about the potential for this technology.  What will it look like if we can one day apply this technology to human beings on a wide scale?  The benefits are obvious—better prosthetics mean a better chance at readjustment for those who do not have all their limbs, for whatever reason.  I think this reason alone is enough to encourage research in this area.
 
As with all emerging technologies, however, caution retains its role.  Wolpe’s TED Talk is focused on maintaining an ethics-based skepticism in bioengineering, and I think that applies here as well.  One connection I can immediately think of is drones.  Theoretically, we could one day apply this technology to remotely controlled robots with arms controlled by their pilots.  If you want to get even more theoretical, we might one day use virtual reality technology along with this kind of brain-wave transfer to pilot humanoid robotic drones, not dissimilar to the short film on VR that I mentioned in my post about virtual reality.  Again, this would become a kind of extension of human capabilities in war, while at the same time removing the soldier from combat.  That kind of removal can have negative effects.
 
Beyond TED Talks, we can see depictions of this kind of thing in movies, for example, Iron Man’s remote-controlled suits.  On a different note, if we could create Falcon’s wings, might we be able to wire the pilot’s brain waves to them as well?  Regardless of what remains speculation and science fiction or becomes reality, we ought to be aware of how technology that in some way enhances human activity might shape how we view that activity, whether we are talking about VR or war technology.  But then another question arises: might VR applied to the humanoid drone concept actually mitigate the concern of overstepping bounds, because of how vivid the experience is?  Some of this might turn out to be a good thing.  Let’s be on the lookout for what is good, and what is not, and proceed accordingly.

Monday, April 4, 2016

TED Talk-Prompted Discussion: Bug Bots

Today’s TED Talk for discussion is one NASA Chief Bioethicist Paul Wolpe gave back in November of 2010 on “bio-engineering.”  This was one of the first TED talks I saw that sparked my interest in emerging technology ethics, and, despite the amount of time that has passed since then, I think most people are still unaware of some of the technologies he discusses.  He talks about several developments, others of which I plan to cover in future posts; for now I want to talk about just one: “bug bots.”

What are bug bots?  In a nutshell, as Wolpe describes in the TED talk, we have the ability to wire computer chips into the brains of insects, which we can use to control their movements.  They are essentially organic robots.  We can do it with cockroaches, goliath beetles, moths, and even rats.  One of the more obvious applications would be the ability to attach cameras and use these animals as living, incognito drones.

First, I want to talk about the principle of the dominion mandate.  We are called to use the earth’s resources to good ends, and technology is one way in which we do this.  Further, we use technology to counteract undesirable yet natural phenomena, whether it is our exposure to the elements, or disease.  The ethical issues of emerging technologies we are discussing in this blog all ask a fundamental question: at what point have we taken the dominion mandate too far?  At what point, if any, does violating an animal’s autonomy become a bad thing?  Wolpe’s TED talk suggests that this technology crosses the line.  So far, it seems to me that to the extent that we can arrive at answers, those answers come on a case-by-case basis (think back to Michael Sandel’s article on genetically engineering humans).  As we examine these possible answers, we should be looking for the principled theme that drives all of them.

I also want to talk about some practical issues I envision arising as this technology becomes more prevalent.  The first is very reminiscent of the privacy issues that drones create today.  We wrestle with how much freedom the operators of drones ought to have when it comes to using cameras that record people’s everyday lives.  But a drone looks like a drone.  A moth with a micro camera and a computer chip embedded in its brain does not.  Another issue I can think of is admittedly more speculative (Wolpe’s TED Talk doesn’t give an implication either way), but I feel like we should be talking about it, just in case it becomes reality.  If this technology continues to improve, and we are already capable of doing this in mammals, will we one day be able to do it with humans?  If so, what might that look like, and what guidelines should we set, if not prohibiting it completely?  What implications does removing human autonomy have, not just from a standpoint of Christian ethics, but existentialist ethics as well?  What happens when we reduce a human being to just another organic robot?

Thursday, March 31, 2016

When Emerging Technologies Affect the Human Psyche: Virtual Reality

The consumer release of the Oculus Rift headset presents us with an opportunity to talk about virtual reality.  This is something that I personally am very much looking forward to—trying out a first-person shooter VR video game is definitely on my bucket list.  But as with all emerging technologies, the new wave of VR presents us with opportunities for good use and abuse.

Let me first acknowledge that VR headsets have potential for many good uses.  They could be used to train pilots and surgeons and to treat phobias and PTSD.  They could also be attached to exercise bikes at gyms for a more entertaining work-out experience, perhaps giving some people that extra incentive to work on the cardio.  And as with video games, there is a proper place for their entertainment value.

We should also be aware, however, of the potential concerns with commercialized VR headsets as they become cheaper, more advanced, and more widespread.  We can look at past examples to get some idea of what we might face.  Second Life is a kind of online virtual reality game created in 2003 in which players create avatars of themselves and interact with other players in ways that mimic the real world, for example, via social events or economic exchanges.  As one might expect, there have been players who have become addicted to the game, prioritizing time playing on Second Life at the expense of their real life in the real world.

I would not be surprised if similar applications become available on VR headsets.  And again, while such games may not be bad in and of themselves, we ought to remind people that human flourishing can only truly take place in the real world, and that neglecting the real world can have negative consequences.

I came across a short film about VR that admittedly may seem a little too dystopian to count as an actual warning.  I do think, however, it depicts the idea that while not everyone will become addicted to VR, those that do, and the problems they face because of it, still matter.
 


Monday, March 28, 2016

When Emerging Technologies Affect the Human Psyche: Weapons of War

Emerging technologies change how we relate to the realities of the world, and to each other.  Technology in warfare is no different.  Whether it was the cannon, the machine gun, or the nuclear bomb, each change expanded the horizon of our capabilities.  And since war is a human activity, the way we interact with these newfound capabilities, on a mental, emotional, and even physical level, affects its trajectory and outcomes.  In a TED talk, military analyst P.W. Singer tells a story about an American explosive ordnance disposal (EOD) team in Iraq on a mission to defuse an improvised explosive device (IED).  When one member of the team got close enough to the bomb to begin to attempt to defuse it, it exploded.  But the story ended with a twist: that team member was not a human soldier, but a robot.  No condolence letter needed to be written to any family members.

When talking about emerging technologies in warfare, whether they be drones in the sky or robots on the ground, we are dealing with two aspects of human nature that, in the context of war, must be carefully balanced.  On the one hand, we want to preserve our lives, and the lives of others.  On the other hand, the drone technology now used to that end has psychological consequences that affect behavior.  Singer said something that surprised me: pilots who operated drones flying over Iraq from bases on U.S. soil had higher rates of PTSD than units deployed overseas.  There is a dichotomy of two experiences with virtually no transition between them: pilots go to their shifts, then go home for the night.  A pilot can fire rockets at real people and then be with his family a few hours later.  War becomes an experience not much different from other lines of work.  And then, of course, there is the classic question of whether it is easier to make rash decisions when one is disconnected from the violence on the ground.

There are other kinds of emerging technologies that could be weaponized in the near future when applied to robotics.  Ahead of the release of Batman v. Superman, the Film Theorists YouTube channel highlighted a method by which Batman could beat Superman without the use of Kryptonite:  scientists at UC Berkeley created robotic muscle fibers roughly a thousand times stronger than human muscle.  My question is, how might this change the psychology of war if we begin to use this technology in robots on a regular basis?  Presumably, there is great potential for its use in rescue operations and the like.  But if we use our technology as an extension of our capabilities, to what other ends might we put it?

This might seem distant from our daily lives, but there are real policy questions to examine.  What do you think?  How ought we to approach emerging technologies in warfare going forward?

Thursday, March 24, 2016

Three Common Themes in Sci-Fi

Science-fiction stories can be entertaining, but they can also tell us a lot about ourselves.  Here are three themes that often show up in science-fiction stories.  Click on the links to read more about how they relate to being human.  And tell me if you think I am missing something crucial.

1)   Space Exploration

There’s something exciting about traveling at warp speed in the U.S.S. Enterprise, traveling to Jupiter aboard the Discovery One, traveling through the wormhole in the Endurance, or experiencing the beauty of Pandora.

2)   Artificial Intelligence

“I’m sorry, Dave.  I’m afraid I can’t do that.”  The classic line epitomizes our fear of A.I.-gone-crazy.

3)   Playing God

Mary Shelley’s Frankenstein is subtitled “The Modern Prometheus,” and today we continue to tell modern Frankenstein tales.  At what point does human artifice cross the line?

Playing God or Playing Creation?

Christians recognize that God has given us a certain creative freedom with creation.  We take natural resources and build things out of them.  The entire world reflects what good and bad can come from that.  But every once in a while, science-fiction reminds us that such pursuits have consequences.

The Frankenstein archetype warns against “playing God”: against attempting to bring under our control what nature seems to have placed beyond it.  In struggling to equip ourselves with the ability to reach that point, we neglect to equip ourselves with the ability to handle the consequences.

One more modern rendition of this that comes to mind for me is Rise of the Planet of the Apes.  Scientists’ attempt to master just one aspect of the human genome leads to disastrous consequences for the entire planet.  So maybe the apocalypse is a bit of an extreme result to expect.  But parallels need not be literal.

I wrestle with this question.  I feel like modern medicine reflects the idea that God has given us the ability to harness our resources to promote human flourishing.  But when does it become “playing God”?  In an earlier post, I talked about Michael Sandel’s concerns regarding genetic engineering.  I wonder if his concerns provide an example of a framework for distinguishing between the two.  But even that involves nailing down the essentials of human nature.  I dealt with that in my first post, but I can’t claim that list as definitive or exhaustive.

What Stories Tell Us about Artificial Intelligence

Sometimes I wonder if our fear of A.I. is based more on science or on what we tell ourselves about it.  HAL 9000, Ultron, ARIA, and Skynet have painted a picture for us that depicts one inevitable outcome: A.I. goes crazy and tries to kill us all.  But there is another side to the coin.  TARS in Interstellar and the ship’s computer in Star Trek never pose any danger.  So which depiction is right?

There are really two questions we face with A.I.  First, how much can a computer mimic a human being?  Second, what kind of human being will it imitate?  I’m no A.I. expert, but it seems the mechanisms for imitation are already here.  Just today, a story came out about how Microsoft’s A.I. chat bot started mimicking racist and anti-Semitic language on Twitter, presumably because it encountered that language from others.

The question becomes:  can we come up with an algorithm that mimics a kind of moral discernment, one that says “this kind of sentiment is bad; avoid imitating it”?  Will such an algorithm be enough?  If moral discernment is where the line is drawn between an A.I., no matter how advanced, and a human being, then perhaps we should heed warnings like that of Stephen Hawking.

What “The Final Frontier” Means

Human beings have always been explorers.  History is full of examples of peoples moving from one place to another, often facing perilous risk in the process.  The motivations are eclectic:  prestige, money, mission, curiosity, freedom, or the thrill.

Now that every continent has been explored, we have two options: the ocean, or space.  I will admit, I share the perplexity of those who wonder why we are not exploring the ocean more.  That aside, “the final frontier” allures because humans are explorers.  The frontier has always represented the call of the unknown.  But unlike land on Earth as we perceive it today, space is vast; the potential is nearly endless.  NASA and SpaceX will find plenty of thrill-seekers to volunteer to go to Mars.  But others see potential for scientific discovery, the benefits of which could surpass what came from the space race.

Of course, whatever we do, we should do it with a degree of caution.  The expeditions of history posed not just logistical questions but ethical ones as well.  We will face a new set of those questions when we finally travel beyond the moon.

Monday, March 21, 2016

A Look at Genetic Engineering

During debate practice last week, we had a discussion about Harvard professor Michael Sandel’s article in The Atlantic called “The Case Against Perfection.”  It’s an older article (from 2004), but the issues he raises regarding genetic engineering are all the more relevant as these technologies become more advanced and more widespread.  I’ve included the link below, but I will summarize it here:

Sandel discusses four areas in which scientists are developing capabilities not just to correct for human biological deficiencies like disease, but to improve upon what is already normal and make us “better than well.”  The first is physical strength: a gene therapy exists that can strengthen healthy muscles in mice.  If applied to humans, genetic enhancement could become an alternative to things like steroids.  Second, a similar gene therapy exists for enhancing memory in mice, and human applications are easy to imagine.  Third, growth hormone is available to children in the bottom first percentile for height; the question then becomes, what about those who are already considered average or tall?  Finally, there is sex selection, which need not be conventionally controversial: one method involves merely sorting X-bearing from Y-bearing sperm, with no embryo destruction necessary.

After discussing common ethical objections to these practices that he believes are ultimately insufficient, Sandel argues that such practices are problematic ultimately because they are “a way of answering a competitive society’s demand to improve our performance and perfect our nature.”  There are two components to human performance: 1) natural talent (gifts), which is normally outside of our control, and 2) effort, which we can control.  Genetic enhancement brings human giftedness under our control.  In this way, it resembles eugenics, and Sandel argues that the absence of coercion is ultimately irrelevant.  He offers both religious and secular grounds for concern: “[t]o believe that our talents and powers are wholly our own doing is to misunderstand our place in creation, to confuse our role with God's . . . If bioengineering made the myth of the "self-made man" come true, it would be difficult to view our talents as gifts for which we are indebted, rather than as achievements for which we are responsible.”

Sandel then outlines practical consequences of this mentality:  first, parents’ ability to choose the traits of their children removes “openness to the unbidden,” the love that accepts a child for who they are.  Second, failures always become a product of our genetic choices, never of natural weakness.  Third, it erodes the basis for solidarity expressed in charity and generosity: normally, natural talent is a factor in one’s success that lies outside one’s control, unlike effort and hard work.  But if even natural talent is the result of human agency and choice, then the recognition that some factors lie outside one’s control is diminished.

I was at church recently when Tim Keller was giving a sermon about social justice, and he coincidentally made a similar point:  If we recognize that what we have is a product of what God has given us, including our natural abilities, it is easier to understand why God would want us to be generous with the fruits of our labor, because it is ultimately derived from Him.

Is Sandel right?  Or is our reaching this technology just another product of what God has given us?  That is the attitude we take toward modern medicine.  Even vaccines are designed simply to prevent what is less than optimal.  But does the “better than well” threshold change the picture when it comes to genetics?  Another question: what happens when others don’t see it that way?  Is it reasonable to be concerned that social solidarity will be hurt by this?

Or is the problem rooted in society’s increasing demands on people’s performance?  This is a whole other topic of discussion in and of itself, but I find the connection fascinating.

What do you think?  Sound off in the comments below!

Sandel, Michael J. "The Case Against Perfection: What's wrong with designer children, bionic athletes, and genetic engineering." The Atlantic. Atlantic Media Company, Apr. 2004. Web. Mar. 2016. <http://www.theatlantic.com/magazine/archive/2004/04/the-case-against-perfection/302927/>.

Sunday, March 13, 2016

Seven Ways to Describe Human Beings

As I discuss on the "about" page, this blog aims to ask questions about humanity’s relationship with technology from a Christian perspective.  But before we can dive down the rabbit hole, we need to ask the question: what is human nature?  What actually makes us human?

To begin the discussion, I’ve compiled a list of seven ways to describe human beings.  No doubt this list is not exhaustive, nor do I expect each element to go uncontested.  If you think an essential element is missing, or you would alter what I’ve said about one of these elements or would remove one entirely, let me know in the comments.  My goal is to start conversations; I certainly do not have all the answers.  After all, I’m only human.

1) Created in God’s image

There are different interpretations of this phrase from Genesis 1:26.  I think one good explanation, based on the dominion mandate that follows, is that humans have a creative and productive capacity to use the earth’s resources for good ends.  In secular terms, humans are “rational animals” who must think about and plan how to do so in the best way possible.

2)   Homo sapiens

This is perhaps the most straightforward, but by no means least controversial, item on this list.  Our biologically distinguishable characteristics are rooted in our genetic makeup.

3) Structurally good, directionally flawed
 
In his book Creation Regained: Biblical Basics for a Reformational Worldview, Albert Wolters distinguishes between the “structure” and “direction” of creation.  Structure refers to how God originally intended creation to be, and direction refers to the extent to which it conforms to his will.  Christians recognize that Genesis 1 affirms the original goodness of creation, humanity included, but also that human nature has been corrupted by sin and the fall.  As such, society as a whole, Christian or otherwise, recognizes to some extent that there is a difference between behaving like a “good” person and a “bad” person, between virtue and vice, even if we disagree on the location, shape, and rigidity of that line.
 
4) Multi-dimensional
 
Human beings are composed of physical, mental, and emotional components: needs, wants, and motivations.  A comprehensive account of human behavior, whatever the context, takes all three components into consideration.
 
5) Communal
 
The cliché holds true: human beings seem wired for community with others, via family, friendships, romantic relationships, mentor-mentee relationships, educational relationships, economic relationships, and so on.  These various forms of community can correspond to the three components from #4.
 
6) Heterogeneous

Human individuals have different tastes, preferences, priorities, beliefs, convictions, worldviews, talents, and abilities.
 
7) Having a soul
 
It’s often asserted that humans have souls, but less often actually argued for or defined.  It intuitively makes sense that we have souls, but I’m curious—what is actually the best argument for or definition of the human soul that you have heard?
 
I look forward to the discussions on technology and humanity that will follow.  Thanks so much for joining me, and be on the lookout for future posts!