Thursday, April 28, 2016

A Lesson from Frankenstein

The Frankenstein myth, like Prometheus before it, is about attempting to transcend human limits, “playing God,” and the use of artifice driven by hubris.  Such a legacy evokes caution as we pursue technology today.  I have echoed this caution throughout this blog.  But I have also acknowledged that such caution does not happen in a vacuum.  Technology can be used for good or for ill.  I think that a nuanced look at the Frankenstein myth illustrates this.

Victor Frankenstein’s pursuit, one could argue, was not inherently bad.  At face value, initially, he created life.  It was a fictional case of begetting, if you will.  Frankenstein’s monster was essentially human.  He was sentient, intelligent, and capable of learning and of kindness.  It was when Frankenstein neglected to treat him as human that things went south.  The monster was also in a formative state, and parental neglect and societal rejection all contributed to the monster’s choices later in the novel.

I think that Frankenstein provides an analogy for our own pursuit of new technology.  As we pursue it, we must not forget the human condition.  Our pursuits should neither ignore it nor assume that it can be entirely overcome.  Whether it is genetic engineering, drones, virtual reality, space exploration, artificial intelligence, or energy, an acknowledgement of the human condition should temper and orient our ambitions.  It should help us recognize both potentials and limits.  Things are far more likely to go wrong if we ignore it.

We can create great things that can benefit many people. But we ought to avoid falling into the trap of neglecting the human condition as we do so.  Let’s learn from Frankenstein so that we may create not monsters, but masterpieces.

Monday, April 25, 2016

Renewable Energy: A Q&A with Dr. Hershey

Dr. Joshua Hershey is an Assistant Professor of Science and Philosophy at The King's College in New York City, and a pretty awesome professor to work for as a Faculty Assistant.  I asked him some questions about the future of renewable energy.
 
What are some emerging or potential new forms of renewable energy that people might not know much about and how would they help us, as Christians, to better carry out the dominion mandate?
 
That’s an important question. As Christians, we really ought to pay more attention to the ways in which our actions affect the environment. We know the earth isn’t our eternal home, but we also know that “the earth is the LORD’s, and everything in it” (Psalm 24:1). We had better treat His property with respect, especially since He has put us in charge of it.

Our power to affect the environment—for better or worse—has increased exponentially over the last couple of centuries, and we’re now living in an era when selfish choices might literally ruin the earth for future generations. On the other hand, we could use our God-given creativity to develop technologies that benefit, rather than harm, the earth and its future inhabitants. 

Clean, sustainable energy is a big deal. Everything we do requires energy, and we use a lot of it. Sadly, less than 10% of the energy used in the United States comes from renewable resources, according to the U.S. Energy Information Administration. And that’s an all-time high (not counting firewood, which supplied practically 100% of our energy until the mid 1800s). We need to do better.

Wind and solar power have gotten cheaper and more efficient over the last few decades, and are beginning to see widespread use. Geothermal energy is becoming more popular too. Here in New York City, a bill was recently passed requiring geothermal heat pump installations in new buildings (under certain conditions). All of these renewable energy technologies have been around for a while, so I’m not sure whether they qualify as “emerging” technologies, but we should certainly continue to develop these resources.

Ultimately, though, I don’t think these sources will be able to supply all the world’s energy needs. We need something to replace fossil fuels, which provide more than 80% of our energy in the United States and a similarly high proportion of the global energy demand. I seriously doubt that wind and solar power will ever be able to meet that demand.

Nuclear power is a promising alternative to fossil fuel, in my opinion. Although it’s technically not a renewable resource, our supply of nuclear fuel will last many times longer than our supply of fossil fuels, and it does less harm to the environment. Traditional uranium fission unfortunately does produce hazardous waste that we have to bury in a desert somewhere, but that’s still better than the emissions produced by burning fossil fuels. Moreover, there are emerging technologies that may soon provide much safer and cleaner nuclear power. For example, researchers are presently developing Small Modular Reactors (SMRs), which are significantly smaller and safer than traditional nuclear power plants. The possibility of using thorium rather than uranium as fuel is also being investigated. Thorium is more abundant than uranium, and can’t be so easily used to produce nuclear weapons.

The most exciting possibility, though, is nuclear fusion (as opposed to fission) energy. Hydrogen fusion reactors will be far cleaner and safer than any other form of nuclear energy. Hydrogen is the most abundant element in the universe, and the “waste” product of hydrogen fusion is helium—the perfectly harmless, delightful stuff that fills party balloons. There will be no risk of an accidental meltdown, since nuclear fusion reactors—unlike fission reactors—can be switched off easily. Although the technology is probably a few decades away, hydrogen fusion will be the ultimate source of safe, clean energy.
 
What are some potential downsides to any of these that might concern people?

The limitations of wind and solar power are easily recognized: they don’t work when the sun isn’t shining or the wind isn’t blowing.  Geothermal energy is pretty limited too: it may help to heat or cool a building, but as a source of power (to generate electricity), it is terribly inefficient.  And renewable biofuels like ethanol and biodiesel add pollution and greenhouse gases to the atmosphere, just as fossil fuels do.

The downsides of traditional nuclear energy are well-known too. Nuclear fission reactors produce hazardous waste, use expensive uranium fuel that has to be mined (which also harms the environment), and there’s always the scary possibility of a reactor meltdown. The threat of nuclear arms proliferation is just as scary: nuclear fission reactors produce isotopes of uranium and plutonium that can be used in nuclear weapons.

Nuclear fusion, on the other hand, is a different story. The only major downside of using nuclear fusion power is that we don’t yet have the technology to do it. In order for a nuclear fusion reaction to occur, hydrogen must reach temperatures many times hotter than the surface of the sun. That’s not easy to do, and it’s even harder to contain the hydrogen within the reaction chamber. Since no solid material can withstand temperatures that high, magnetic and electric fields must be used as the “walls” of the reaction chamber. Unfortunately, with current technology, more energy is needed to produce those powerful electromagnetic fields than is produced by the fusion reaction. In other words, to keep the fusion reaction going, you have to put more energy into the machine than you get out of it.

Hopefully that will change soon, though. A new fusion reactor called ITER is presently under construction in southern France. If successful, it will be the first fusion reactor to produce more energy than it consumes, and will pave the way for increasingly efficient fusion technologies in the future.

What about market-oriented solutions?  Are any companies pursuing any of these that we can support in some way?

Next-generation fusion power is still a long way from marketability, but some private companies are already investigating the possibilities. For instance, Lockheed Martin is trying to develop a compact fusion reactor which they say will be “small enough to fit on a truck” yet “could provide enough power for a small city of up to 100,000 people.”

Meanwhile, as we wait for new technologies to arrive, let’s make the most of what we already have. Numerous clean energy solutions like solar power and wind energy are already available in many locations. If we have the option to buy our electricity from a clean energy company, or install solar panels on our homes, we ought to take advantage of those opportunities.  

We can also make use of high-efficiency technologies that reduce our energy consumption. For example, I’ve installed LED bulbs throughout my home. They’re better than incandescent, halogen, and CFL bulbs in practically every respect: they're much more efficient, they don't get hot, and they last longer.
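The efficiency claim about LEDs is easy to quantify with some back-of-the-envelope arithmetic.  The sketch below uses assumed typical wattages (roughly 60 W for an incandescent bulb and 9 W for its LED equivalent) and assumed usage hours, not measured figures from any particular product:

```python
# Rough annual energy comparison per bulb, using assumed values:
# ~60 W incandescent vs ~9 W LED equivalent, 3 hours of use per day.
HOURS_PER_DAY = 3
DAYS_PER_YEAR = 365

def annual_kwh(watts: float) -> float:
    """Annual energy use in kilowatt-hours for a bulb of the given wattage."""
    return watts * HOURS_PER_DAY * DAYS_PER_YEAR / 1000.0

incandescent = annual_kwh(60)  # 65.7 kWh/year
led = annual_kwh(9)            # ~9.9 kWh/year
print(f"Savings per bulb: {incandescent - led:.1f} kWh/year")  # 55.8 kWh/year
```

Multiply that by every socket in a home and the case for swapping bulbs makes itself, even before counting the longer lifespan.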

We can enjoy the benefits of technology without ruining the planet. God may have placed us in charge of the earth, but it's still His property. Let’s leave this place better than we found it!

Thursday, April 21, 2016

Artificial Intelligence and Relationships

People have explored the idea of having a romantic relationship with an artificial intelligence for a while.  Recent films like Her and Ex Machina have explored what falling in love with a robot would entail.  Various Star Trek episodes have depicted scenarios in which characters develop feelings for holograms in the holodeck.  We could see this kind of thing happening in real life very soon.  For those of you interested in reading more about lessons from fictional relationships, feel free to check out my friend Tianna’s blog on the subject.

I want to make a distinction between two kinds of love: eros and agape.  Eros is love as a noun; it is the feelings of infatuation and desire to be with someone because of how that makes you feel.  Agape is love as a verb.  It is the foundation for strong marriages.  It is selfless and self-sacrificial.  There are other conceptions for love, with their own Greek words as well, but I think making the distinction between these two kinds for our purposes here highlights some interesting issues about the prospect of “falling in love” with robots or any kind of artificial intelligence.

One can love a robot in the eros, or noun, sense of the word, but not in the agape, or verb, sense.  A robot can love someone in neither sense.  When science fiction depicts humans’ falling in love with robots, it is only in the noun sense.  When an artificial intelligence can mimic a human enough to pass the Turing test, and the kind of human it mimics happens to be attractive, it makes sense that one could feel an attraction to it.  It is at the verb level that a problem arises.  One cannot truly love a robot in this sense because a robot cannot receive one’s love.  You can act self-sacrificially for a robot no more than you can act self-sacrificially for a chair.  The ability to actually receive agape love depends on a recognition that goes beyond simply acting as if there is a recognition.

The Chinese Room Argument, developed by philosopher John Searle, illustrates why this is the case.  Imagine a person alone in a room receiving messages in Chinese.  The person has an instruction manual or computer program that allows them to respond to every conceivable message, enough to fool anyone outside into believing that the person with whom they are communicating can speak Chinese.  But in reality the person still does not know the language, even though it appears that they do.  The broader conclusion is that with artificial intelligence, mimicking understanding does not equate to real understanding.  Mimicking the biological processes that make us sentient beings does not equate to replicating them.
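A toy version of Searle’s thought experiment can be written in a few lines of code: a lookup table maps incoming messages to canned replies, so the program can appear conversational while understanding nothing.  (The phrases below are invented examples for illustration, not from Searle’s paper.)

```python
# A toy "Chinese Room": scripted replies with zero understanding.
# The rule book plays the role of Searle's instruction manual.
RULE_BOOK = {
    "你好": "你好！很高兴认识你。",          # "Hello" -> "Hello! Nice to meet you."
    "你会说中文吗": "当然，我说得很流利。",  # "Do you speak Chinese?" -> "Of course, fluently."
}

def respond(message: str) -> str:
    # Pure symbol manipulation: match the input, emit the scripted output.
    return RULE_BOOK.get(message, "对不起，我不明白。")  # "Sorry, I don't understand."

print(respond("你好"))  # looks fluent, but nothing here "knows" Chinese
```

The program passes a crude conversational test for the handful of inputs it anticipates, yet at no point does anything in it grasp what the symbols mean.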

This brings us to a deeper philosophical question, because my argument presupposes that we have souls.  I am aware that if one accepts the paradigm that our brain chemistry is all we are, then who is to say that an artificial intelligence cannot be sentient?  On that view, the brain is merely one kind of computer among many.  For those who accept the soul paradigm, however, we should remember that agape love is the backbone of relationships, and its practice affects many areas of life.  A relationship with an artificial intelligence does not allow that possibility, so we should probably discourage it.

Sunday, April 17, 2016

Assessing Stephen Hawking’s Warnings about Artificial Intelligence

In an earlier post, I talked about sci-fi movies that show a fear of the malicious potential for artificial intelligence.  But it’s not just movies that illustrate that fear.  I want to revisit some of those issues, this time by starting with the real-world warnings that are out there.

You’ve probably heard about Stephen Hawking’s warnings regarding the dangers of artificial intelligence.  In an interview with the BBC he said “It would take off on its own, and re-design itself at an ever increasing rate” and that "[h]umans, who are limited by slow biological evolution, couldn't compete, and would be superseded.”  Further, according to The Independent, he said “The real risk with AI isn't malice but competence … A super intelligent AI will be extremely good at accomplishing its goals, and if those goals aren't aligned with ours, we're in trouble … You're probably not an evil ant-hater who steps on ants out of malice, but if you're in charge of a hydroelectric green energy project and there's an anthill in the region to be flooded, too bad for the ants. Let's not place humanity in the position of those ants.”  As Observer reports, a potentially problematic addition to this, according to Hawking, is that “[i]n the near term, world militaries are considering autonomous-weapon systems that can choose and eliminate targets.”

Hawking is not alone in his warnings regarding artificial intelligence.  According to the same Observer article, Elon Musk, Steve Wozniak, and Bill Gates have expressed similar concerns.  Musk has gone so far as to invest in an artificial intelligence company called DeepMind for the specific purpose of “keep[ing] an eye on what’s going on.”

As I discussed in my earlier post about artificial intelligence, moral discernment has a role in separating humans from robots.  Many of Hawking’s warnings describe a lack of moral discernment in decision-making.  This brings us to the question of why and how Hawking might be right.  I again refer to Tay, Microsoft’s artificial intelligence chatbot designed to learn through interactions with millennials on Twitter.  That Tay began to espouse racist and anti-Semitic language and advocate for genocide should serve as a wake-up call, because it illustrated what happens when a computer tries to mimic human behavior and has access to samplings of our flawed nature.  It took on those flaws itself.

Now, what happens when an artificial intelligence that can “re-design itself at an ever increasing rate,” is “extremely good at accomplishing its goals,” “can choose and eliminate targets,” and begins to view humans the way we view ants is exposed to the bad side of human nature?  Sure, such a computer might not have access to the same Twitter feeds.  But it will still interact with humans.  How much will it need to observe before it becomes like Tay?  And what if each computer is different in terms of how much exposure is required before it potentially snaps?

In light of Tay, I think we ought to heed Stephen Hawking's warnings about artificial intelligence at least to an extent, and be careful that we consider the possible ramifications of what we create.

Thursday, April 14, 2016

Science and Religion: What Batman v. Superman’s Lex Luthor Tells Us about Our Relationship with Technology

The alleged conflict between science and religion can show up in unexpected places, but when we consider that we pursue technology through science, and the technology we pursue is directed at specific ends, it isn’t all that surprising.  Today I want to discuss how this theme shows up in Batman v. Superman: Dawn of Justice and its implications.  By now, you have probably either seen the movie or plan to never see it after the reviews.  My friend Joseph wrote a review for the film on his blog about filmmaking, which you should totally check out.  He and I also made a video review together as well, so feel free to check that out on your own time.  Beware, mild spoilers ahead.

In Batman v. Superman, Lex Luthor wrestles with theodicy.  He has come to believe in the dichotomy that God cannot be both all-powerful and all good at the same time.  When Superman seems to threaten this paradigm, he attempts to counteract the threat by trying to disprove that Superman is both all good and all powerful, first by instigating the fight between Batman and Superman, and then by creating Doomsday.  Particularly with the Frankenstein-esque creation of Doomsday, the alleged conflict between science and religion manifests itself in an allegorical attempt to use science and technology to reassert control over a natural world not controlled by a benevolent God.

Here we have another reflection of the times.  In a RealClearScience article, Ross Pomeroy argues that advances in science and new ways of circulating information about those advances and their corresponding paradigms (i.e. the internet) are a primary reason for the decline of religiosity not only in the world but in the U.S. in particular.  The Age of Science is the third tier in the progress from the Age of Magic to the Age of Religion to today.  And then there are the statistics that supposedly show a correlation between atheism and intelligence and existential satisfaction.  I don’t know if Pomeroy is necessarily making a value judgement, but he is at least arguing that “science … serves as an effective substitute for [religious belief].” Science and religion are still seen as at odds rather than complementary in mainstream society.

Thus, the human attempt to subdue the natural world as a way to compensate for our own insecurities, as Lex Luthor does, is not surprising.  In a TED talk, brain scientist Tony Wyss-Coray talks about the potential for using blood to reverse the effects of aging.  In experiments in which the circulatory systems of old mice were fed young blood, the old mice showed cognitive and physical improvement.  The theory is that this has something to do with the kinds of hormonal factors that exist in young blood vs. old blood, and these factors influence the organic tissue they come into contact with.  If we can get this to work in humans, we may extend our life expectancies.  We are working towards compensating for our fear of dying and growing old.  Why?  What happens when you don’t believe in a God who values you independently of your functionality, or life after death?  Your functionality in this life becomes the sole indicator of your worth.

Science and technology are great tools for exercising the dominion mandate.  There is nothing wrong with using them to improve quality of life.  What matters is that we remember and communicate to society when we can that science and religion are not at odds, but are complementary to each other.

Monday, April 11, 2016

The Ethics of Space Exploration

SpaceX has been in the news again lately, and for good reason.  Its Dragon supply capsule recently docked with the International Space Station, and it successfully landed a rocket on an ocean barge.  It is an exciting time for space exploration enthusiasts.  I wrote a paper a few years ago about why this is so.  I argued that private spaceflight is the new lens through which we ought to view human space travel.  Partnerships between governments and private companies are the new way of doing things because corporations are more efficient.  This has been evident ever since NASA first contracted SpaceX to deliver cargo to the ISS several years ago and spent far less money than it would have on its own.
 
All the talk you have likely heard about humans going to Mars will indeed become reality very soon.  There are many good motivations for the endeavor.  Scientific research carries huge potential; we may possibly discover a use for natural resources from Mars on Earth.  Further, accidental discoveries of various applications of research during the space race were responsible for many of the technological capabilities we enjoy today.  That will likely continue the more we pursue space exploration, especially with private companies at the helm.  We have only scratched the surface; whereas private companies have been innovating other technologies over the past few decades, private spaceflight is a recent development.  We have only seen the flip phone of space travel; the iPhone of space travel is coming.
 
As our technical capability for space exploration improves, our opportunities expand.  Naturally, this does not just yield exciting potential, but also the potential for ethical questions that we ought to be thinking about now.
 
First, what will space exploration look like on a political level?  Since governments will still likely be involved, will it be another space race, possibly with China, or will it be a more cooperative endeavor?  Both scenarios have historical precedent: while the US engaged in the space race against the Soviet Union, the International Space Station was the result of cooperation among nations.  Second, there are the logistics of the trip itself.  Many speculate that the first wave of Mars colonists will be a one-way trip, much like the pilgrims in the New World.  What kind of people will we choose to go?  Third, who is in charge of the colony, given that multiple nations will likely be involved?  What kinds of laws will be established?  How will that affect our study and use of the planet’s resources?  Fourth, if we find life, what will we do with it?  On a highly speculative note, what if we eventually run into sentient life?  Fifth, what kinds of technology will we allow the colonists to use?  One of the core aspects of this blog has been examining the ethical issues of emerging technologies, some of which might be useful.  2001: A Space Odyssey depicts one example of this.  I think it is a legitimate question: would using artificial intelligence be safe?

Let me know what other questions you think should be addressed and considered.  As with all new technological capabilities, innovated and expanded forms of space exploration are something both to be cautious about and to get excited about, and I look forward to seeing what the future holds.

Wednesday, April 6, 2016

TED Talk-Prompted Discussion, Part 2 – Remote Brain Control?

Paul Wolpe’s TED Talk discusses many facets of bioengineering, some of which I talked about in my last post, but one he spends relatively little time on is, I think, among the most fascinating.  A scientist used a computer to study the brain waves of a monkey when it moved one of its arms.  He then connected the computer to a prosthetic arm that eventually mimicked the monkey’s arm movements, based on the information from the computer as it translated the monkey’s brain waves.  They then set up a video monitor in the monkey’s cage that showed the arm in another room.  The monkey observed that the arm did whatever its right arm did, and, eventually, the monkey figured out how to control the prosthetic arm completely by itself, without moving its own arms.  As Wolpe concludes, it became the first primate with three separate functioning arms.
 
As with many of the subjects his TED talk covers, Wolpe moves on after that.  But I want us to pause and think about the potential for this technology.  What will it look like if we can one day apply this technology to human beings on a wide scale?  The benefits are obvious—better prosthetics mean a better chance at readjustment for those who do not have all their limbs, for whatever reason.  I think this reason alone is enough to encourage research in this area.
 
As with all emerging technologies, however, caution retains its role.  Wolpe’s TED Talk is focused on maintaining an ethic-based skepticism in bioengineering, and I think that applies here as well.  One connection I can immediately think of is drones.  Theoretically, we could one day apply this technology to remotely-controlled robots with arms controlled by their pilots.  If you want to get even more theoretical, we might one day use virtual reality technology along with this kind of brain wave transferring to pilot humanoid robotic drones, not dissimilar to the short film on VR that I mentioned in my post about virtual reality.  Again, this would become a kind of extension of human capabilities in war, while at the same time removing the soldier from combat.  That kind of removal can have negative effects.
 
Beyond TED Talks, we can see depictions of this kind of thing in movies.  For example, Iron Man’s remote-controlled suits.  On a different note, if we could create Falcon’s wings, might we be able to wire the pilot’s brain waves to them as well?  Regardless of what remains speculation and science fiction or becomes reality, we ought to be aware of how technology that in some way enhances human activity might shape how we view the activity, whether we are talking about VR or war technology.  But then another question arises—might VR applied to the humanoid drone concept actually mitigate the concern of overstepping bounds because of how vivid the experience is?  Some of this might turn out to be a good thing.  Let’s be on the lookout for what is good, and what is not, and proceed accordingly.