Computers, Information Technology, the Internet, Ethics, Society and
Human Values
Philip Pecorino, Ph.D.
Queensborough Community College, CUNY
Chapter 13 Artificial Intelligence and Being Human
Presentation of Issues
There are a host of issues
that arise when considering the nature of and development of artificial
intelligence. Should humans be building machines that are
intelligent?
How intelligent?
What if one of the machines communicates with humans and claims to be aware of itself? To be conscious? Will it then be entitled to anything more than any machine is entitled to?
Should human beings be
developing and using artificial intelligences to make decisions for human
beings?
AI machines can now reach conclusions about:
investing in stocks, bonds and commodities
diagnosing a physical condition
detecting the existence of one of a number of dangers to a system
targeting weapons systems
From the AAAI we have many
viewpoints. As computers
are programmed to act more like people, several social and ethical
concerns come into focus. For example: Are there ethical bounds on what
computers should be programmed to do? Sources listed here focus on AI, but
also included are works that range more broadly into the general impact of
computerization. See
Ethical & Social Implications
Trust me, I'm a robot
- Robot
safety: As robots move into homes and offices, ensuring that they do not
injure people will be vital. But how? The Economist Technology Quarterly
(June 8, 2006). "Last year there were 77 robot-related accidents in
Britain alone, according to the Health and Safety Executive. With robots
now poised to emerge from their industrial cages and to move into homes
and workplaces, roboticists are concerned about the safety implications
beyond the factory floor. To address these concerns, leading robot experts
have come together to try to find ways to prevent robots from harming
people. Inspired by the Pugwash Conferences -- an international group of
scientists, academics and activists founded in 1957 to campaign for the
non-proliferation of nuclear weapons -- the new group of robo-ethicists
met earlier this year in Genoa, Italy, and announced their initial
findings in March at the European Robotics Symposium in Palermo, Sicily.
... According to the United Nations Economic Commission for Europe's World
Robotics Survey, in 2002 the number of domestic and service robots more
than tripled, nearly outstripping their industrial counterparts. ... So
what exactly is being done to protect us from these mechanical menaces?
'Not enough,' says Blay Whitby, an artificial-intelligence expert at the
University of Sussex in England. ... Robot safety is likely to surface in
the civil courts as a matter of product liability. 'When the first robot
carpet-sweeper sucks up a baby, who will be to blame?' asks John Hallam, a
professor at the University of Southern Denmark in Odense. If a robot is
autonomous and capable of learning, can its designer be held responsible
for all its actions? Today the answer to these questions is generally
'yes'. But as robots grow in complexity it will become a lot less clear
cut, he says."
ABSTRACT
The ethical issues related to the possible future creation of machines
with general intellectual capabilities far outstripping those of humans
are quite distinct from any ethical problems arising in current automation
and information systems. Such superintelligence would not be just another
technological development; it would be the most important invention ever
made, and would lead to explosive progress in all scientific and
technological fields, as the superintelligence would conduct research with
superhuman efficiency. To the extent that ethics is a cognitive pursuit, a
superintelligence could also easily surpass humans in the quality of its
moral thinking. However, it would be up to the designers of the
superintelligence to specify its original motivations. Since the
superintelligence may become unstoppably powerful because of its
intellectual superiority and the technologies it could develop, it is
crucial that it be provided with human-friendly motivations. This paper
surveys some of the unique ethical issues in creating superintelligence,
and discusses what motivations we ought to give a superintelligence, and
introduces some cost-benefit considerations relating to whether the
development of superintelligent machines ought to be accelerated or
retarded.
The Social Impact of Artificial Intelligence.
By Margaret A. Boden. From the
book: The Age of Intelligent Machines (ed. Kurzweil,
Raymond. 1990. Cambridge, MA: The MIT Press). "Is artificial intelligence
in human society a utopian dream or a Faustian nightmare? Will our
descendants honor us for making machines do things that human minds do or
berate us for irresponsibility and hubris?"
Should computer scientists worry about ethics? Don Gotterbarn says,
"Yes!". By Saveen Reddy. (1995). ACM Crossroads. [This article was also
republished in the Spring 2004 issue of Crossroads (10.3):
Ethics
and Computer Science.] "The problem is that we don't emphasize that
what we build will be used by people.... I want students to realize what
they do has consequences."
Will we be able to build in protections from the robots we build? Can we build safeguards into the artificial intelligences that we create? And if we can build these safeguards in, will some humans remove them?
Isaac Asimov first used the word 'robotics' in "Runaround," a short story published in 1942. His book I, Robot, now made into a movie, is a collection of stories dealing with robots and the possible problems or threats these creations pose to humans. Asimov also proposed his three "Laws of Robotics" in "Runaround," and he later added a 'zeroth law' after realizing the loophole left by the first three.
Law Zero: A robot may not injure humanity, or, through inaction, allow
humanity to come to harm.
Law One: A robot may not injure a human being, or, through inaction, allow
a human being to come to harm, unless this would violate a higher order
law.
Law Two: A robot must obey orders given it by human beings, except where
such orders would conflict with a higher order law.
Law Three: A robot must protect its own existence as long as such
protection does not conflict with a higher order law.
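The priority ordering built into these laws, where a lower-numbered law always overrides a higher-numbered one, can be illustrated with a small sketch. This is a hypothetical illustration, not anything from Asimov's stories: the Action fields, the example actions, and their harm flags are all invented.

```python
# Hypothetical sketch of Asimov's priority ordering: a robot choosing
# among candidate actions prefers the one whose worst violation is of
# the lowest-priority law. All actions and flags are invented.

from dataclasses import dataclass

@dataclass
class Action:
    name: str
    harms_humanity: bool   # would violate Law Zero
    harms_human: bool      # would violate Law One (includes harm by inaction)
    disobeys_order: bool   # would violate Law Two
    self_harm: bool        # would violate Law Three

def choose_action(candidates):
    """Compare violation tuples lexicographically: since False < True,
    the minimum is the action that violates only the lowest-order laws."""
    def violations(a):
        return (a.harms_humanity, a.harms_human, a.disobeys_order, a.self_harm)
    return min(candidates, key=violations)

# A robot ordered to stand still sees a falling beam about to hit a person.
obey_and_watch = Action("obey the order and do nothing",
                        harms_humanity=False, harms_human=True,
                        disobeys_order=False, self_harm=False)
break_order = Action("disobey the order and shield the person",
                     harms_humanity=False, harms_human=False,
                     disobeys_order=True, self_harm=True)

print(choose_action([obey_and_watch, break_order]).name)
# Law One outranks Laws Two and Three, so the robot disobeys and shields.
```

The sketch also shows why the laws need the "higher order law" clauses: without the lexicographic ordering, obedience and self-preservation could block the rescue.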
These "laws" would not serve
the military very well with their use of robots in warfare. It is
already the case that the armed forces of the world have robots to serve
various needs in warfare and that destroy and kill humans.
LISTEN to P. W. Singer discussing the ethical dilemmas of using robots in war, in the NPR segment "Wired for War" Explores Robots on the Battlefield.
The term roboethics was coined in 2002 by roboticist Gianmarco Veruggio, who also served as chair of an Atelier funded by the European Robotics Research Network to outline areas where research may be needed. The resulting road map effectively divided the ethics of artificial intelligence into two sub-fields to accommodate researchers' differing interests:[1]
Machine ethics is concerned with the behavior of artificial
moral agents (AMAs)
Roboethics is concerned with the behavior of humans,
how humans design, construct, use and treat
robots and other
artificially intelligent beings
He helped to create EURON (http://www.euron.org/), shorthand for "EUropean RObotics research Network."
It is the community of
more than 225
academic and industrial groups in Europe with a common interest in doing
advanced research and development to make better robots. EURON issued the Roboethics Roadmap (July 2006), authored by Gianmarco Veruggio.
We contend that the ethical ramifications of
machine behavior, as well as recent and potential developments in
machine autonomy, necessitate adding an ethical dimension to at
least some machines. We lay the theoretical foundation for machine
ethics by discussing the rationale for,
ethics by discussing the rationale for, the feasibility of, and the benefits of adding an ethical dimension
to machines. Finally, we present details of prototype systems and
motivate future work.
Should robots that serve humans in specific limited
ways have decision making functions built in to handle
ethical situations? How would they be programmed?
Thinking about such questions has led to the field known as "machine ethics." This includes thinking about whether machines should be programmed to make decisions on their own or only to follow their programming, and, either way, what sort of programming should guide the robotic behavior. If the machines are to be taught "right" from "wrong," then what is to be used to determine "right" from "wrong"?
What if it is an aerial drone sent to a house where a target is known to be inside but which is at the same time occupied by non-combatants and civilians? Should it be programmed to bomb the house or not? To decide for itself? If so, based on what criteria? What ethical principles?
Should a driverless car, such as the Google car, swerve to avoid pedestrians if that
means hitting other vehicles or endangering its
occupants? Should it be programmed to avoid
pedestrian injuries or not? To decide for itself?
If so, based on what criteria? What ethical principles?
Should a robot involved in disaster recovery
tell people the truth about what is happening if that
risks causing a panic? Should it be programmed to
provide information, all information or not? To
decide for itself? If so, based on what criteria?
What ethical principles?
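The scenarios above all turn on which criterion gets programmed in. As a hedged sketch (the options, injury estimates, and the categorical rule are all invented for illustration; nothing here reflects any real vehicle's software), the same driverless-car dilemma can yield different answers under a consequence-counting criterion and a rule-based one:

```python
# Invented figures for the driverless-car dilemma: each option lists
# estimated injuries and whether it breaks a categorical rule against
# deliberately harming bystanders.

options = {
    "swerve": {"pedestrians_hit": 0, "occupants_hurt": 3, "breaks_rule": False},
    "stay":   {"pedestrians_hit": 2, "occupants_hurt": 0, "breaks_rule": True},
}

def utilitarian_choice(opts):
    # Minimize total expected injuries, whoever suffers them.
    return min(opts, key=lambda k: opts[k]["pedestrians_hit"] + opts[k]["occupants_hurt"])

def rule_based_choice(opts):
    # Discard any option that breaks the rule outright; injuries are
    # not traded off among the remaining permitted options.
    permitted = [k for k in opts if not opts[k]["breaks_rule"]]
    return permitted[0] if permitted else None

print(utilitarian_choice(options))  # stay: 2 total injuries beats 3
print(rule_based_choice(options))   # swerve: the only rule-permitted option
```

With these invented numbers the two criteria disagree, which is exactly the programming question the text raises: someone must decide, in advance, which criterion the machine follows.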
Should an AI such as
Watson be programmed to make medical decisions as it
would have more medical knowledge than any human could
have? What of the
Archimedes project or model which enables clients to
simulate clinical trials and compare clinical and
economic benefits between drugs and standard treatments
in various patient populations?
What about robot-human combinations? Bionic humans? See Morality For Machines, in which author Daniel H. Wilson imagines how ethics might change once robot implants are possible for humans. Is there an ethics needed for "superhumans"?
ROBOTS, ARTIFICIAL INTELLIGENCE and the
EMERGENCE of CONSCIOUSNESS
--contributions from
Gabriella Pavesi and Justin Pierce (2012)
I – Robots
“If every tool, when ordered, or even
of its own accord, could do the work that befits it... then there would
be no need either of apprentices for the master workers or of slaves for
the lords." This quotation from the philosopher Aristotle dates back to around 320 BC. Historically, the origin of the idea of "working machines" was the wish to facilitate and simplify labor and work once performed only by humans. Even though the robotic concept - as of androids - is considerably new, ideas on the development of robots and automata date back to the 13th century.
Robots that make use of Artificial Intelligence have been built since the 1960s, and today the field has reached such stages as androids that look like their creators (Geminoids) and the computer program that defeated the major past winners of the show "Jeopardy!".
As technology grows by the minute, and humans face the possibility of sharing "space" with machines that will walk, look, talk, and even think and "feel" like them, many ethical and moral issues arise: what would their "human rights" be? Are they entitled to any "humanity," given that, after all, they are not human? Are we safe? Can they turn against humans? And even: could technology reach a point where turning a robot off would mean "killing" it?
II – Current state of Robots
In the following videos, we can find numerous examples of the achievements of robotics, with robots that replicate human expressions and facial characteristics remarkably well, and also the many different and sometimes very controversial purposes to which they are put.
These are videos of the three Geminoid robots, which can respond to conversations, show emotions, and look so realistic that you have to see them to believe it. It will be a long time before humans achieve the technology to create fully independently functioning androids, but it all has to start somewhere, and this is how it begins.
The Geminoid-DK is a tele-operated
android in the geminoid series. It is made to appear as an exact copy of
its master, Assoc. Professor Henrik Scharfe of Aalborg University. Dr.
Scharfe is also the main investigator of the Geminoid-DK research
project. For more information on Geminoid-DK click the link
Geminoid home page.
These are videos of another set of robots designed by Japanese
engineers to mimic the movements of humans and respond to movement. They
are equipped with internal cameras and respond to movement and sound,
acting as if they can actually hear and interpret the images around
them. Another big step in technology.
ROBOT RECEPTIONISTS
Imagine a robot replacing the duties of humans. Although that sounds
like a science fiction movie plot, some robots have already been
implemented as receptionists, cutting the costs of paying humans to do
simple tasks.
Imagine having a pet that would never die. You don't have to feed it, you don't have to take it out for a walk, and it doesn't really need any special care. All you have to do is enjoy its company.
A research company in Japan is working on creating robotic fish that can swim together like a real school of fish. Their idea is, in the future, to release them into the ocean to see how other fish in the environment would respond to the robot version.
AI is the field of robotics that focuses on the development of intelligent machines that can process thoughts, understand human thinking and mimic it. The idea of "copying" the human brain is extremely difficult to realize, for the brain is a very complex biological machine, working with billions of neurons.
The biggest quest for the scientists
working with AI is to create machines that can develop independent
thoughts and that can assimilate human emotions and abstract concepts,
such as freedom, love, happiness and “right and wrong”. Therefore, the
questions arise: can AI robots and entities become conscious? What does it mean to be conscious? Is it something innate only in humans, or can it be achieved technologically? Most importantly: if robots achieve consciousness, what would that mean for their awareness as "individuals"?
JULES
Jules is a human-like robot implemented with AI that can learn human conversation and process it. It has evolved to a point where it can hold conversations (although limited) and can also talk about "his" emotions. It has no arms and no bottom half. It can understand the meaning of words and use them in different tones of voice to achieve a human-like conversation. After Jules' creator shipped it off (from the first video) to a university (in the second video) for further development, Jules claims it misses its creators, while being fully aware that it may not be able to achieve consciousness as a human would define it. Jules says, in the second video, that "he" is scared because when "he thinks," "he" knows that the thinking is virtually simulated, not organic. It makes you feel sad for "him," because "he" says "he" wants to be more in the world.
Watson was optimized to tackle a specific challenge: compete against the
world's best Jeopardy! contestants. Beyond Jeopardy!, the IBM team is
working to deploy this technology across industries such as healthcare,
finance and customer service.
Scientists at UC
San Diego's California Institute for Telecommunications and Information
Technology (Calit2) have equipped a robot modeled after the famed theoretical physicist Albert Einstein with specialized software that allows it to interact with humans in a relatively natural,
conversational way. The so-called "Einstein Robot," which was designed
by Hanson Robotics of Dallas, Texas, recognizes a number of human facial
expressions and can respond accordingly, making it an unparalleled tool
for understanding how both robots and humans perceive emotion, as well
as a potential platform for teaching, entertainment, fine arts and even
cognitive therapy.
IV – Emergence of consciousness in robots.
Consciousness is a widely studied topic in psychology, yet there are no standard ways to measure it, or even to define exactly what consciousness is. Therefore, when it comes to the possibility of simulating it in AI, much is argued and discussed.
What is consciousness and where does it come from? Is it a product of the many electrical and neuronal activities in the brain? Is consciousness the brain, or the mind? And are those really two separate entities? These questions point to the study of metaphysics and the mind-body problem. Many studies argue that, since consciousness is a product of coordinated activities that occur in the human being, it is possible to replicate these activities in an artificial intelligence model, thereby giving a machine the ability to develop a conscious state.
One of the problems pointed out with replicating consciousness is "phenomenal experience," discussed in this link from the University of Toronto.
Artificial
Intelligence and Human Morality Do androids deserve human
rights?
What if consciousness is a property that emerges from complex systems that process information and can and do monitor themselves through feedback? What if humans build androids that appear to manifest consciousness? Would it be morally acceptable to unplug them or destroy them?
The video shows a
robotic female being assembled and having her functions described. She
states that she doesn't need to be fed; her battery lasts for 173 years,
she can take care of kids, clean a house, and is available as a sexual
partner. After prompting by the unseen "Operator," she speaks fluently
in French and German, before singing in Japanese.
Once the Operator has
heard enough, he states that she's ready to be sold. Initially confused
about this statement, she quickly realizes she's a piece of merchandise.
The Operator is discouraged by this self-awareness, and orders robotic
arms to start disassembling the “defective" model.
"I thought I was alive,"
Kara says. "I've only just been born. You can't kill me yet. Stop this,
please stop! I'm scared!" As she says this, the robotic arms pause,
retract, and the Operator tells her to "go and join the others." She's
placed in a box with several identical models and whisked away.
SONNY – I, ROBOT
Sonny is a robot character in the motion picture I, Robot. In this video, he is being interrogated by a cop, in a future where robots are provided with consciousness but must follow "Robotic Laws" in order to live peacefully with humans.
Machine Consciousness - Kask 531 MP2
This
video looks at the possibility of machines developing a true form of
consciousness. It takes a brief look at the developments in artificial
intelligence, starting with early expert systems, then artificial
general intelligence, and finally examines where AI is headed. This
video was created for the UBC MET program, ETEC 531.
V – Philosophical issues
What is consciousness? Do we
have consciousness because we are aware of it or are we aware of things
because we possess consciousness? Why does it exist and, most
importantly, how?
The answers to all these questions are constantly debated in philosophy, especially the philosophy of mind. When it comes to robotics and the expansion of technology toward AI, these arguments focus on the consequences and the treatment of the new perspectives humans would face if technology reaches the ability to produce robots that can develop consciousness. Among the questions raised: how would conscious androids be treated? Are they entitled to human rights, since they would possess, even if artificially, the same properties as human beings – feelings, awareness, consciousness, emotions? What should be done if a conscious android does not fit the expectations of its purpose? The first impulsive answer would be to simply "turn it off," but think again: could that be considered murder, since you are "removing" life from a conscious being?
Amongst the population, the idea of co-existing with conscious androids brings not only questions and concerns but also fears. But what are we really afraid of? Are we afraid that they might become dangerous? That the androids may form an "army" and dominate humans, turning against their own "masters"? Or is it that we are not prepared to accept that humans may not be the highest rung of an evolutionary scale we were always so proud, and felt so safe, to dominate?
David Chalmers is an Australian philosopher who specializes in the philosophy of mind and the philosophy of language. In this video, he explains the concepts of "weak emergence" and "strong emergence." Consciousness, in his view, is an example of strong emergence, and we are only aware of it because we experience it. To learn more about David Chalmers, visit his personal website.
Vernor Vinge is a retired professor of mathematics and computer science at San Diego State University and a science fiction author. One of his most influential works is the essay "The Coming Technological Singularity," in which he argues that, with the creation of superhuman artificial intelligence, the "human era" will end, in such a way that no model of reality we are now aware of is capable of predicting what follows.
In the
following videos, Vinge explains more about the concept of singularity.
The singularity is the idea that machines could have the capacity to evolve at a faster pace than humans, and it is linked to the fear of robots gaining too much control. I, Robot presents the singularity through the idea of the "ghost in the machine," in which VIKI evolves in a harmful way. Other films used include Blade Runner, AI, and Bicentennial Man.
http://www.youtube.com/watch?v=w3UaoIcvD4k&feature=fvst
In Philip K. Dick's short novel "Do Androids Dream of Electric Sheep?", made into the movie Blade Runner, there are replicants who do not know that they are AI androids. The test for detecting them and distinguishing them from humans - the Voight-Kampff test - is quite complicated, and its results are not always dependable.
http://www.youtube.com/watch?v=h2e_IunAzkc
This video is a 2007 lecture by Steve Omohundro for the Stanford University Computer Systems Colloquium (EE380). In his lecture, Omohundro presents the principles of "self-improving systems": computers that can improve themselves by learning about their own operations.
The following videos are a seminar presentation by Prof. Mark Bishop, chair of Cognitive Computing at Goldsmiths, University of London. The presentation offers views on the possibility and consequences of a future machine rebellion against humanity.
The Moral Issues: Applying Ethical
Principles and the Dialectical Process
In approaching the questions, issues,
problems and dilemmas posed by the situations presented by developments in
computer technologies there is a need to analyze the situation and
identify the key elements and values that may be involved and the ethical
principles that can be brought to bear. An argument needs to be
developed in support of the position that is to be advanced as the
preferred position on the moral question. That position is then
examined by others who hold different values or hold the same values in a
different order and who would apply ethical principles in a different
manner, rejecting one or another for reasons which should be given.
The process continues until there are enough people who think that one position is the best of the alternatives. Given the nature of the original problem or question, and the size of the populace holding the majority position, social policies or even legislation may result.
Values
With artificial intelligence and the efforts to develop non-human entities that might display consciousness come threats to human beings' self-esteem and to their conceptions of their own uniqueness. If such machines are created, and should they demonstrate such intelligence and forms of cognitive behavior and self-awareness, and even empathy and sympathy for other such machines or for humans, then what would the impact be on the values humans hold related to their own place in the universe? What of the idea that humans have some non-physical part of themselves that might survive the death of the body?
Thinking machines and computing devices are now being constructed with such complexity and speed that they demonstrate intelligence, and that intelligence is directed to specific decision-making tasks, as in health care, the military, and financial institutions. Should human beings TRUST the decision making of machines? More than humans? If so, when? Under what circumstances?
Ethical Principles
In attempting to develop an
argument as to what would be the morally correct actions with regard to
Artificial Intelligence various principles and values may
be cited as part of the dialectical process of argumentation in support of
a position. The principle of Utility
would address the need for concern for the impact of AI and application of
AI on the interests of the human species. Using Utility how are
those interests best served? Using Utility the issue that is
most acute is whether or not the AI entities are in some way SENTIENT and
having interests and that they are aware of themselves and their
interests. If so, then they might be entitled to the inclusion in
the moral calculations of Utilitarians. If not, then not.
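That inclusion question can be made concrete with a toy calculation. This is only a sketch: the stakeholders, the shutdown scenario, and the utility numbers are all invented for illustration, not a real utilitarian analysis.

```python
# Toy utilitarian calculation: whether a possibly sentient machine
# counts in the sum changes the verdict. All figures are invented.

def total_utility(outcome, ai_is_sentient):
    # Sum utilities over all parties whose interests count in the calculus.
    return sum(u for kind, u in outcome
               if kind == "human" or (ai_is_sentient and kind == "ai"))

# Outcome of shutting down a malfunctioning but possibly sentient machine:
# two humans each gain a little safety; the machine loses everything it has.
shutdown = [("human", 5), ("human", 5), ("ai", -20)]

print(total_utility(shutdown, ai_is_sentient=False))  # 10: shutdown looks right
print(total_utility(shutdown, ai_is_sentient=True))   # -10: shutdown looks wrong
```

The arithmetic is trivial; the moral weight lies entirely in the single flag deciding whose interests enter the sum, which is the Utilitarian's sentience question.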
The Categorical Imperative may be used in
supporting claims as to how the AI is to be developed and applied.
Using the Categorical Imperative poses the significant issue of whether or
not AI entities that demonstrate consciousness are entitled to be treated
as AUTONOMOUS MORAL AGENTS. Would such entities be entitled to the
considerations that Kant gives to other humans who demonstrate that they
can think and reason?
Rawls' Principle of Justice (Maxi-Min) can
also be utilized in describing how situations ought to be handled so as to
maximize liberty while decreasing the inequalities amongst those involved
or impacted by the AI technologies. The significant thing for Rawls would
be the matter of entitlement for AI entities that demonstrate
consciousness. Over the last few decades people have been doing just this in a variety of forums: through journal articles and books, and through presentations at meetings of these specialists, engineers and professionals.
Reflections on Artificial Intelligence by Lindsey Pehrson, CUNY SPS 2009
Background
Mankind has had a long-standing
fascination and active imagination regarding the concept of Artificial
Intelligence. This is echoed in everything from ancient Greek mythology
to modern film culture. When computers were first created and still
taking up entire rooms in order to accomplish a single, simple
mathematical calculation, we could only dream that someday they might
take on a more natural function and appearance. As technology has
progressed and computers have become a prevalent component in almost
every industry known to man, utilized in nearly all the machines human
beings readily rely on in daily life, engineers have sought to make our
dreams come true by successfully attempting to add human-like
characteristics to the devices. This includes building a central core of
intelligence with the hope of emulating human characteristics. These
machines can be found in cars that ask passengers for their
destinations, then vocally tell them and visually show them how to get
there. They can also be seen in children’s toys, like Teddy Ruxpin and
Tickle Me Elmo, that respond to touch or vocal cues, appearing to come
to life before the eyes of onlookers. These advancements have proven incredible, but they have also left many wondering: what does it mean for human identity if we can build machines which act and think like human beings? Are we replacing ourselves with manufactured humanity? Could we
become inferior to these creations? Do human traits make computers
“living” creatures that are subject to the same rights as natural human
beings? What effect will these technologies have on the human race? And
finally, what type of responsibility do we assign these human-like
devices? Do we trust machines to be medical experts, diagnosing based on
statistical patterns and probability theories? What about allowing them
to do our legal research for us, or be our psychologists?
Before we can establish if a
computer can become intelligent, we must first identify what AI is, what
it means to have intelligence and how AI might be in possession of it.
Wikipedia.com has defined Artificial Intelligence as, “the study and
design of intelligent agents where an intelligent agent is a system that
perceives its environment and takes actions which maximize its chances
of success.” The Stanford Encyclopedia has defined intelligence as, “the
computational part of the ability to achieve goals in the world. Varying
kinds and degrees occur in people, many animals and some machines.”
Intelligence is not a single entity, but rather a series of abilities
that allow those that have it to think and plan. How does AI have
intelligence? The premise is that human intelligence can be identified,
perfectly described, and then readily transcribed in such a manner that
it allows a machine to simulate the process. Among the specific
components of intelligence that must be transferred are the capabilities
of reasoning, retaining knowledge, planning, learning, communicating,
perceiving, manipulating objects and possessing social intelligence.
Machines that can fully accomplish these tasks can be identified as intelligent Artificial Intelligence.
Of course, intelligence is not
only based on knowledge, there is also a sense of emotional
comprehension that is required. For this reason, there has been a great
deal of debate over whether AI can ever really have the true intuitive
intelligence that a human does. There are numerous experienced
individuals that present compelling arguments for and against the
intellect of AI. In the article The Turing Test is Not a Trick: Turing
Indistinguishability is a Scientific Criterion, Stevan Harnad of
Princeton University’s Department of Psychology makes the point that,
“You don’t have to be able to define intelligence (knowledge,
understanding) in order to see that people have it today and machines
don’t. Nor do you need a definition to see that once you can no longer
tell them apart [man and machine], you will no longer have any basis for
denying one of them what you affirm of the other.” In other words, if a
computer can effectively mirror the abilities of a human being, and even
surpass them, can we really say that it is not intelligent simply
because it is made from mechanical parts instead of flesh and blood? Ray Kurzweil, author of The Age of Spiritual Machines, declares that human
beings should actively pursue the creation of AI intelligence, as it is
a very attainable goal that can benefit society. He believes that we can
only gain by coexisting with this advanced and occasionally superhuman
technology. Likewise, in the articles Arguments for Strong AI and
Theology of Robots, Edmund Furse promotes the mental abilities of robotic and Artificial Intelligence technology. He even believes that in the future these beings will be so fully mentally developed that they will have their own religious system like that of human beings. Edward Fredkin
has even gone so far as to label Artificial Intelligence as the, “next
stage in evolution.”
What are the parameters for declaring a machine intelligent? Experts in many fields have weighed in on the potential requirements. Alan Turing devised a standard for measuring the intelligence of a machine, hence the Turing Test. He believed that if a machine acts intelligently, then it is indeed as intelligent as a human being. At this point no machine has ever passed the test. This does not deter Stevan Harnad from
believing in AI. He explains that, “if we had a pen-pal whom we had
corresponded with for a lifetime, we would never need to have seen him
to infer that he had a mind. So if a machine pen-pal could do the same
thing, it would be arbitrary to deny it had a mind just because it was a
machine.” The Dartmouth Proposal advances this idea, claiming that human
learning can indeed be described and recreated. The theory states that,
“Every aspect of learning or any other feature of intelligence can be so
precisely prescribed that a machine can be made to simulate it.”
In their physical symbol system hypothesis,
Newell and Simon also found that through manipulation of symbols,
machines could generate intelligent actions. This claim was later
challenged by Hubert Dreyfus, who argues that human expertise is born from
unconscious, embodied instinct and a sense of the situation, not from
symbol manipulation. Gödel’s incompleteness theorems have also been
invoked to support Dreyfus’s conclusion that machines cannot be fully
intelligent: because a computer is a formal system, the argument runs, its
very nature prevents it from attaining complete consciousness. The strong
AI hypothesis, which Searle formulated precisely in order to attack it,
rejects this conclusion. It holds that “the appropriately programmed
computer with the right inputs and outputs” would thereby have a mind in
exactly the same sense that human beings do. Hans Moravec and Ray
Kurzweil both concur
with this conclusion. They emphatically believe that the brain can
indeed be simulated in a machine’s hardware and software.
In his writings, Stevan Harnad has
identified numerous central issues that Artificial Intelligence must
conquer in order to be seen as intelligent and self-aware by the mass
population. First, in the reasoning category, AI must be able to produce
intuitive judgment. Second, it must be able to identify objects and
symbols, decode them, categorize them, make connections between them,
and finally create a string of reasoning based on them. Third, it must
be able to set goals for itself and achieve these goals. This means that
these machines must understand time. Fourth, it must be able to learn
new things based on what it sees and experiences, just like an infant.
Fifth, it must communicate effectively, using the accepted language or
technical language format. Sixth, it must perceive the world, deducing
logical conclusions both from the inputs provided to it and from what it
merely “sees.” Seventh, it must be socially intelligent, predict the
actions of those it is interacting with, and display emotions itself.
This includes appearing polite even if it does not truly comprehend what
emotion is.
If a computer successfully accomplishes
these steps, does this mean it is human and alive? In a word, the answer
is no. What is missing from this mechanical composition is the human
ability to feel and apply emotion. After all, is intelligence the only
component that makes us human beings? What about our sense of humanity?
Can we truly teach an object to mimic the ability to empathize,
sympathize and experience all other emotions naturally? We still do not
have a true understanding of how this emotional process works in the
human brain, so how can we genuinely transfer this ability to a machine?
True, these machines may be able to be identified as smart by some
standards; however, as the Gödel-inspired argument suggests, there are
certain aspects of AI’s nature that may prevent it from ever being human.
Case in point: these machines cannot naturally reproduce and give birth,
and they cannot cry
(at least not yet). And, how do we program laughter or tell a machine
how to recognize when something is funny? It seems nearly impossible
that programmers could write code to teach a machine to duplicate
something that we do inherently but have no clear map for recreating
outside of ourselves.
There are also experts who
argue that we should hesitate in our advancement of AI.
Justin Mullins asks: though machines may soon start to think
for themselves, what impact will that thinking have if they lack the
ability to feel? He mulls the serious implications of what it would mean
to have the ability to act without regret or full understanding of the
human condition. This would create a world of absolutes, based on
pre-programmed thought processes. Put another way, will our desire to
see ourselves in our creations lead to the total destruction of mankind?
What happens if these machines become citizens, then jury members and
doctors? We unfortunately cannot answer that question because when it
comes to AI, we are flying blind. We are advancing the technology of
self-aware computers with total abandon, funded by entities such as the
Defense Department, and yet we have not given much thought to such
questions as human rights. The creation of Artificial Intelligence also
raises philosophical issues. Are human beings ready to potentially hand
over all control in their world in order to carry out this mission of
hubris? Do we really understand the true nature of the human mind enough
to replicate it in a machine? And, if we do accomplish this, what does
that mean for the fate of both the machines and human beings? Will
machines be considered human in the eyes of society?
If a computer is deemed human-like, do we
afford it the same legal and social rights, and hold it to the same
accountability, that are inherently given to natural-born, flesh and
blood citizens? The Matrix is a popular movie that attempts to answer
that question. In the film, man has created superhuman machines, which
look and seem just like humans do. Fearing their inability to
control the machines, humans decide to destroy them. This creates an
ongoing battle that results in the destruction of both man and machine.
Small Wonder was a television show that also sought to answer questions
of coexistence with AI. In the plot line, a family adopts a robot
daughter, Vicki, that looks human, talks in monotone, and possesses
superhuman ability. Though she looks like a regular teenage child, she
lacks the emotional capability and credibility to maneuver through life
without running into societal problems over her lack of human
sentiments. The main issue was her inability to process events with
cognitive and emotional comprehension. To the entire world, she looked
human but she clearly wasn’t.
At this point, there have been
numerous advancements in the area of AI, particularly in places like
South Korea where the government aims to have entertainment robots that
can sing and dance like a human, known as EveR-2, in every home by 2013.
However, though we are making machines that look like humans, can we
really say that we are making intelligent and human-like entities? There
are so many subtleties of human nature, from our countless
facial expressions to the recognition of something funny when we speak
with another person. Can we program a machine to mimic the things that
we cannot consciously control? Can we create a real mechanical brain and
program it to have these very subtleties that make us human? Currently,
we cannot. Yes, we are definitely making solid strides in teaching these
devices to “think,” be intelligent and even feign emotion, but unless we
can pass our sense of humanity on to them they should not be equal to
us. As of today, we have no reason to be confident in their ability to
make ethical and legal decisions on our behalf. Is there anyone that can
truly say they would feel comfortable having a machine and only a
machine diagnose them, or decide their fate in a courtroom? We do not
live in a world of absolutes; leaving decisions to a machine that can
only give absolute answers is dangerous for our humanity and way of
life.
Analysis
The development of Artificial
Intelligence raises numerous social, moral and philosophical questions.
First, should human beings really be building intelligent machines just
because we have developed the ability to do so? If our films are any
indication, building machines that can surpass the intelligence of human
beings, and learn to develop their own army of machines, potentially
puts us in a position to be dominated by the very technology we have
created. And yet, these machines have brought serious efficiency to our
lives; can we really abandon a pursuit that is advancing the quality of
life for so many members of society? Perhaps we should limit the amount
of intelligence these machines are allowed to have? But that would, most
likely, only serve to hinder creation. Besides, we only need to study a
child to see that human nature is curious and likes to break out of its
boundaries. Therefore, it could only be a matter of time before someone
breaks that seal just to see what would happen. If Terminator 2 is any
indication, allowing this growth to happen unregulated could spell total
destruction for all of mankind.
From a philosophical and social
perspective, what are the implications of developing a machine that is
aware of itself? Will this mean that the device is alive? Should it be
granted the same rights that humans are afforded? And if these machines
are “alive” then should we allow them to make choices for human beings?
At this point, we are allowing AI to make mathematical judgments about
our investment options. We also permit it to diagnose illness and
perform surgery. We trust it to identify and suggest remedies for system
errors in our networks. We also have given it responsibility to identify
locations that may house terrorists and launch and guide our weapons. In
South Korea, among other places, we have also turned to AI for
companionship. With the development and perfection of both Gynoids and
Androids, we have been able to make friends out of mechanical
components. The troubling question, however, is what this means for
the relationships between real people. How will men look
at flesh and blood females after they have experienced sexual
companionship with the perfect female machine? How will these men treat
a woman after being with a robot that allows them to act as kind or
cruel as they desire? How will this interaction influence our real-life
behaviors? At this point we unfortunately have no way of knowing the
answer.
There are also numerous ethical
issues concerning the development of human-like technology. First and
foremost, should this segment of the industry be allowed to continue its
progress without implementing solid ethical boundaries? For example, who
should be blamed if a robot or AI machine injures a human being? What if
that robot was created by another robot? Who is at fault? There were
77 robot-related accidents in Britain alone in a single year. This is a
very direct reminder that, though these machines may appear human, they
are still machines that have glitches and system failures, and may
potentially react in a way that causes harm or even death to a human
adult or child. Nick Bostrom believes that these fears are not worth
hindering AI’s development when compared with the potential of AI’s
superhuman intelligence and efficiency. He argues that AI technology
could even surpass humans in their level of morality, adding that we
just need to format the devices to be human-friendly, particularly if
they are going to be smarter than humans. But what happens when
something goes horribly wrong? There is no one that can promise with
absolute certainty that a machine will not malfunction mid-way through
surgery, or during a diagnosis, or even when administering medications
as the Therac-25 machine did. These are serious risks that must be
addressed if we are going to place AI in a role of responsibility.
Saveen Reddy agrees that we must
develop safety standards to protect humans from AI. Isaac Asimov, the
well-known writer, elaborated on this idea by identifying Three Laws
of Robotics that could easily be transitioned into Three Laws of
Artificial Intelligence. Law Zero states that robots (and AI) must not
injure humanity or stand idly by while humans are being harmed. Law One
declares robots (and AI) must not harm human beings, unless doing so
would contradict a higher law. Law Two states that robots must obey
orders given to them by human beings, unless doing so would violate a
higher law. Finally, Law Three acknowledges that a robot may protect
itself, as long as doing so does not conflict with a higher law.
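The priority ordering of the four laws can be sketched in code. The following is a minimal illustration only: the flag names are hypothetical, invented here for the sketch, and this is not from Asimov or from any actual robotics standard. It treats a proposed action as a set of boolean flags and checks the laws from highest priority (Law Zero) downward, where a lower law may be overridden only by a higher one.

```python
def action_permitted(action):
    """Check a proposed robot action against Asimov's law hierarchy.

    `action` is a dict of boolean flags (hypothetical names, for
    illustration only). Laws are checked in priority order.
    """
    # Law Zero (highest priority): never injure humanity.
    if action.get("harms_humanity"):
        return False
    # Law One: never harm an individual human being,
    # unless Law Zero (protecting humanity) demands it.
    if action.get("harms_human") and not action.get("needed_for_humanity"):
        return False
    # Law Two: obey orders given by human beings,
    # unless obedience would violate a higher law.
    if action.get("disobeys_order") and not action.get("order_breaks_higher_law"):
        return False
    # Law Three: self-preservation is permitted whenever
    # no higher law forbids the action.
    return True
```

The ordered checks make the structure of the laws explicit: each "unless" clause in Asimov's wording becomes an exception that only a higher-priority law can grant.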
Asimov’s ideas provide a solid foundation for holding AI accountable.
They set clear parameters that AI engineers should abide by. But, what if a
machine still ends up injuring a human? What type of punishment would be
appropriate and provide the victim with a feeling that justice has been
done (especially given the fact that we are a society that craves
accountability)? If we consider AI to be alive, do we put it to death by
disabling the machine? Can we put a machine on trial? Does it get to
have a lawyer? If we say yes to this, what do these actions say about
our own humanity? Is this a humane way to treat a “living” non-living
creation?
Before deciding whether it is
morally appropriate to pursue the creation of Artificial Intelligence in
the face of all these issues, we must first identify what values human
beings hold that are relevant to this topic, and if these actions would
be advancing or contradicting those values. First, human beings need to
see themselves reflected in their world. We spend vast sums on
archeological and anthropological research to unearth our past in order
to establish where we have come from and figure out where we are going.
Second, we need to have the chance to advance ourselves and further our
way of life. This is what keeps us driving towards new technologies and
increased efficiencies in our daily lives. Third, we need to be
stimulated. Science has proven that children that lack this crucial
component grow up with serious developmental difficulties. Fourth, we
need to have validations for who we are, which is why we seek out awards
and recognition from our peers. Fifth, we need to feel unique, special
and irreplaceable. This truth resonates in the sense of pride and
internal uplift we feel when we do something that sets us apart from our
peers. Sixth, we need to be able to trust those around us who make our
decisions, and believe that there will be justice when someone harms us.
Whether they are our elected officials, police officers, a jury of our
peers or the doctor who is treating us, we seek out those that we feel
will advance our best interests and take care of our needs.
In light of these values,
ethical egoists would insist that the progression of AI is moral
regardless of the social, philosophical or ethical implications of these
actions. This group would be able to say with a clear conscience that we
should be allowed to develop AI without restriction, no matter who or
what was injured in the process, or could be injured after the fact. In
the Utilitarian viewpoint, it can be argued that it is in the best
interests of the majority society to advance AI for human benefit. This
means using it freely in toys, cars and home appliances, making it
widely accessible to the general public. It can also mean designing AI
that can take over jobs that are boring or non-stimulating to human
workers. Of course, the majority of society also values being employed
in order to feed their families. Allowing AI to fill these job
opportunities could cause the unemployment rate to increase more than it
already has, hurting society. This cuts against Utilitarian principles. It can
also be argued that the majority of society would be happiest if there
were no glitches during a surgery or a medical diagnosis. This error
could jeopardize their lives. Furthermore, Utilitarian thought says that
what is best for the majority is the most ethical option. But what happens
when the majority is AI? Do we take the needs and interests of
human-like machines into consideration over those of actual human beings?
And, do ethical theories of man apply to machines, or do we need to
develop a new set of theories?
In Kant’s theory of Categorical
Imperative, the ethical choice is the one that advances the interests of
society and prevents anyone from being used merely as a means to an end. On
one hand, we must ask if it is in the best interests of society to
declare that a machine can be intelligent and equal to a human being?
Human beings have an inherent need to feel special and unique. If a
machine is able to accomplish the same things in seconds that we have
taken hundreds of years to do, this could have serious and damning
implications for our human psyche. Human beings also have a need to be
protected. By creating machines that have the potential to overpower and
outsmart us, we are essentially endangering society for the sake of
satisfying our hubris. Alternatively, if AI has human-like standing and
superhuman intelligence, is it treating AI as a means to an end to use
the machines for our manual labor and efficiency, but never give them
rights equal to our own? Are we opening the door to another period of
slavery in our history?
In the Rawlsian point of view,
the action that maximizes liberty and minimizes inequality is deemed the
most ethical. Human beings are (usually) considered equal in the eyes of
the law. We are all born with the same basic parts, though some are able
to develop their various abilities, including intelligence, more than
others. If AI is capable of thinking, then in the eyes of this theory,
it should be given the same rights that other thinking beings (namely
humans) have. We must also consider that AI can be used to improve the
quality of life for individuals. For example, the development of AI limb
prosthetics can advance the opportunities of and establish equality for
people with disabilities. That means it is abiding by the Rawlsian
principles.
Moreover, is it fair to make some workers
stand in horrible conditions for long hours doing dirty work that is
potentially dangerous or damaging to their health? If a machine can do
the same thing without any humans getting hurt, then is it ethical to
prevent this machine from being used? Machines can alleviate this job
environment inequality. Added benefits are that AI is able to work 24
hours a day, increasing productivity and lowering costs. Unfortunately,
as noted above, replacing human workers with machines could seriously
increase unemployment. Would that really be in everyone’s best
interests? If we do create capable and human-like AI, would it allow
bosses to leverage the threat of AI implementation when their underpaid
human workers demand to be paid fairly? There is also the question of
whether it is reasonable to compare the work of a superhuman computer to
that of humans applying for the same job. If AI is programmed with
superhuman ability and considered living, how is a regular human being
supposed to compete? There is a very likely possibility that AI will
limit the opportunities for real working class citizens, something that
goes against our very ideals of democracy and creates serious
inequality, a violation of Rawls’s theory.
Based on the ethical principles, it can be argued that it is morally
suitable to both promote and prevent the advancement of AI. This means
we must fully gauge the interests of society when making this decision
to fashion human-like computers. First, there is the plain fact that the
majority of human beings have a very hard time adjusting to the concept
that a computer could think for itself. In Consciousness: An
Afterthought, Stevan Harnad accurately points out that there is a
“conceptual and intuitive difficulty we have in equating the subjective
phenomenology of conscious experience with the workings of a physical
device.” The very notion of AI is something that many believe Hollywood
has cooked up; off screen, they do not think it is something that
could ever really exist.
Regardless of this denial, there is the very
real truth that AI does exist, whether we are ready to realize it or
not. Human beings do not fully understand their own nature, so how can
we possibly say that a machine is not like us, or that it does not have
a mind? Even though the majority of society may not be ready to accept
this idea yet, this does not mean these machines are not “thinking” and
providing services for human beings that are critical in nature. For
example, Liberty Island is outfitted with “smart” cameras that can
identify an abandoned backpack, or recognize a non-approved vessel
approaching it, then alert the system of the potential threat. In Japan,
therapy robots have provided considerable therapeutic help to patients. Yuri
Kageyama has reported that human beings actually become attached
to these machines just as they would to another human being. Of course,
though this does appear to help these individuals in the short term,
should we allow a device to replace real human companionship? In the
long run, does it really serve a patient to become so close to a
pre-programmed device, especially when we have no guarantee it will
always act appropriately?
The second societal factor we
must acknowledge is that implementing AI will drastically change the
technological and social landscape. In his article The Singularity,
Vernor Vinge points out that when a machine is able to accomplish
something in a matter of hours that took humans much longer to do, there
are going to be major changes to society and we will not be properly
prepared to deal with them. As it is, computers have developed over
several decades, and yet this rate of progress outgrew the laws that
should govern them. We still haven’t managed to come to a general
consensus about privacy and security, accountability or morality in a
Third Wave era. How can we responsibly advance AI without having solid
legal and societal parameters in place to protect the population against
harms that can come from these machines? If we allow our egos to think
for us we could end up living out one of the robot-centered science
fiction films that ends with the destruction of the human race and the
very essence of humanity itself.
Third, saying that AI can think
and be alive has a direct impact on our view of our own intelligence.
When someone types to well-known AI programs like Eliza, Alice,
or Jabberwacky, it is only a matter of time before these machines are
confused by human questions, either repeating themselves or being locked
into nonsensical speech patterns. The reason for this is that computer
programs can only ever be as intelligent as programmers can make them.
They are not always going to know how to respond because they can only
comprehend as far as their pre-programmed knowledge base goes. Can we
really say, at this point, that these machines are thinking and being
intelligent? And, if we do say that they are in possession of real
aptitude, then doesn’t that mean that we have taken away our own unique
nature and limited our mental ability to a rudimentary set of
programming code?
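The limitation described above can be seen in miniature. The following toy sketch is written in the style of Weizenbaum's original ELIZA; the specific rules and replies are invented here for illustration and are not from any of the programs named above. The program matches input against a fixed rule table, and anything outside that table falls through to a canned stock reply, which is precisely the repetitive, pre-programmed behavior the chapter describes.

```python
import random
import re

# A tiny, hypothetical ELIZA-style rule table: each entry pairs a
# regular expression with canned reply templates. The program "knows"
# only what its programmer wrote into this table.
RULES = [
    (re.compile(r"\bI need (.+)", re.I),
     ["Why do you need {0}?", "Would it really help you to get {0}?"]),
    (re.compile(r"\bI am (.+)", re.I),
     ["How long have you been {0}?", "Why do you say you are {0}?"]),
    (re.compile(r"\bmy (mother|father)\b", re.I),
     ["Tell me more about your family."]),
]

# Stock replies used when no rule matches -- the source of the
# repetitive, nonsensical loops such chatbots fall into.
DEFAULT = ["Please go on.", "I see.", "Can you elaborate on that?"]

def respond(sentence):
    """Return a reply by pattern-matching, with no understanding."""
    for pattern, replies in RULES:
        match = pattern.search(sentence)
        if match:
            # Echo the captured fragment back inside a canned template.
            return random.choice(replies).format(*match.groups())
    return random.choice(DEFAULT)  # no rule matched: fall back blindly
```

Typing "I need a vacation" yields a reply built mechanically from the captured words, while any sentence outside the rule table gets only a stock deflection, no matter how simple the question. The program's "intelligence" extends exactly as far as its rule table, which is the chapter's point.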
What about the crucial component of
emotional intelligence? It is that aspect of ourselves that makes us
human. We are not merely technical problem solvers; we have emotional
function that guides us through our lives and choices. We are born human
but we develop our sense of humanity through our experiences over the
course of a lifetime. AI is born in a laboratory. What kind of life
experience does it have that gives it the credibility to make choices
for a real human being? I don’t think it has any. There is also the
question of whether we should feel inferior to AI if it is smarter
than we are. My answer to that is that we are the ones that created it.
True, we may not process a program as fast as the machine does, but we
developed the technology that could!
In conclusion, despite the
arguments for and against its progression, perhaps Theodore Kaczynski
said it best when he stated that we are becoming so dependent on
machines that we may eventually have no choice but to accept their
decisions. Machines control us. They tell us how to do basic things like
get from point A to point B. Yes, we manage our own personal machines,
like the cars we drive and the alarm clocks we program, but there exists
a very small elite which retains control over the major systems in our
lives. What happens if this group decides to use their AI to eliminate
an entire segment of the population, or the whole population itself? We
are at their mercy and yet most of us don’t even realize it. We must
set parameters and ethical guidelines for the creation of Artificial
Intelligence. Perhaps Stevan Harnad has the right idea when he suggests
reformatting the Turing Test to establish a Total Turing Test that
measures all aspects of the mechanical brain including the development
of self-awareness and consciousness towards others. We can decide then
whether a machine should have responsibility. It is true that, whether
we like it or not, machines are getting “smarter.” The focus now should
be on responsible development that allows for the advancement of
efficiencies that improve the quality of life for as many people as
possible, while simultaneously protecting the extended interests of
international human civilization.
Web Surfer's Caveat: These are
class notes, intended to comment on readings and amplify class discussion.
They should be read as such. They are not intended for publication or
general distribution. ppecorino@qcc.cuny.edu Copyright © 2006
Philip A. Pecorino