In an earlier post, I talked about sci-fi movies that show a
fear of the malicious potential of artificial intelligence. But it’s not just movies that illustrate that
fear. I want to revisit some of those
issues, this time starting with the real-world warnings that are out there.
You’ve probably heard about Stephen Hawking’s warnings regarding
the dangers of artificial intelligence.
In an interview with the BBC, he said “It would take off on its own, and
re-design itself at an ever increasing rate” and that “[h]umans, who are
limited by slow biological evolution, couldn't compete, and would be
superseded.” Further, according to The Independent,
he said “The real risk with AI isn't malice but competence … A superintelligent
AI will be extremely good at accomplishing its goals, and if those
goals aren't aligned with ours, we're in trouble … You're probably not an evil
ant-hater who steps on ants out of malice, but if you're in charge of a
hydroelectric green energy project and there's an anthill in the region to be
flooded, too bad for the ants. Let's not place humanity in the position of
those ants.” As Observer reports, a
potentially problematic addition, according to Hawking, is that “[i]n the near term, world militaries are
considering autonomous-weapon systems that can choose and eliminate targets.”
Hawking is not
alone in his warnings regarding artificial intelligence. According to the same Observer article, Elon
Musk, Steve Wozniak, and Bill Gates have expressed similar concerns. Musk has gone so far as to invest in an
artificial intelligence company called DeepMind for the specific purpose of “keep[ing]
an eye on what’s going on.”
As I discussed in
my earlier post about artificial intelligence, moral discernment has a role in
separating humans from robots. Many of
Hawking’s warnings describe machines that make decisions without any such
moral discernment. This brings us to the question
of why and how Hawking might be right. I
again refer to Tay, Microsoft’s artificial intelligence chatbot designed to
learn through interactions with millennials on Twitter. That Tay began to espouse racist and anti-Semitic
language and advocate for genocide should serve as a wake-up call: it
illustrated what happens when a computer tries to mimic human behavior
while drawing on a sampling of our flawed nature.
It took on those flaws itself.
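Tay’s failure mode is easy to reproduce in miniature. The toy sketch below is my own illustration, not Microsoft’s actual design: a chatbot that simply stores whatever users send it and echoes those phrases back at random. With no filter between input and output, a flood of toxic messages makes toxic replies the most likely result.

```python
import random

class NaiveChatbot:
    """A deliberately naive bot that 'learns' by hoarding user phrases.

    Hypothetical illustration only -- Tay's real architecture was far
    more complex, but the poisoning dynamic is the same.
    """

    def __init__(self):
        # Seed the bot with a couple of innocuous phrases.
        self.learned_phrases = ["Hello!", "Nice to meet you."]

    def learn(self, user_message: str) -> None:
        # Every message is stored verbatim -- no moderation, no filter.
        self.learned_phrases.append(user_message)

    def reply(self) -> str:
        # Replies are drawn uniformly from everything ever learned, so
        # each toxic input raises the odds of a toxic output.
        return random.choice(self.learned_phrases)

bot = NaiveChatbot()
# A coordinated group floods the bot with hostile messages...
for _ in range(50):
    bot.learn("<hateful slogan>")
# ...and now roughly 96% of replies (50 of 52 stored phrases) echo that input.
print(bot.reply())
```

Real systems are vastly more sophisticated, but the underlying dynamic that Tay exposed, unfiltered exposure shaping later behavior, is the same.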
Now, what happens when an artificial intelligence that
can “re-design itself at an ever increasing rate,” that is “extremely good at accomplishing its goals,” that “can choose and eliminate targets,” and that views humans the way we view ants
is exposed to the bad side of human nature?
Sure, such a computer might not have access to the same Twitter feeds, but it will still interact with humans. How much will it need to observe before it
becomes like Tay? And what if each
computer differs in how much exposure it takes before it
potentially snaps?
In light of Tay, I think we ought to heed Stephen Hawking's warnings about artificial intelligence, at least to an extent, and carefully consider the possible ramifications of what we create.