Chapter 6: The Mind-Body Problem
Section 5: Monism-Materialism
So now we arrive at the real challenge to the dualist view
and the solution to the Mind-Body Problem that is attracting more and more adherents.
In this view there are no minds at all.
At least there are no minds separate from the brain.
There are no non-physical entities.
The mental activities are accounted for in terms of the brain and
what it does. There are a variety of approaches to explaining the MIND in
terms of the BRAIN.
There will be a great deal of material here. It is
put here to challenge your belief that you have a mind. More and
more people are coming to think differently. The belief in a mind
has been held by so many and for so long that it takes some doing to
demonstrate that perhaps it just is not true!
The claim is made that there
is no non-physical mind.
All that we do and experience is accounted for in physical terms.
When the day comes that humans interact with a very well-made and
very complex computer, perhaps in the form of a human body (a
robot or android), and the humans cannot tell that it is a silicon-based
form of activity, then humans will realize that they are not really
different from the robot.
Humans are a carbon-based life form.
The android-robot will be a silicon-based life form.
If humans and androids both act alike and speak of "feelings" and
"thoughts" and so forth then humans will know that the mind is just
another name for the physical brain.
So, the views presented here will quickly get to the cases of
computers and robots as a means of offering proof of the non-existence
of a non-physical mind.
What is there, really, that you
or any human being does that indicates the presence of this non-physical
entity behind the behavior?
Robots can be programmed to speak, to write, to calculate, to learn, and
even to make other robots (a first step was successful in the year 2000). Robots and androids can be made to speak of feelings: for
example, to say "I'm thirsty," or some other appropriate phrase,
whenever fluid intake over 4 hours drops below 50cc.
The android could be programmed to say just what humans say when
their fluid intake drops to a low point.
Is there anything more to "feeling thirsty" than that?
As the behaviorist, B.F. Skinner, claims, we know that someone is
thirsty because they drink.
We should not think that they drink because they are thirsty.
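The behaviorist picture just described can be made concrete in a few lines of code. This is a hypothetical sketch (the function name and return values are mine); the 50cc-per-4-hours rule is the one given in the text:

```python
# Hypothetical sketch of the behaviorist android described above:
# "thirst" is nothing over and above a report triggered when fluid
# intake over the last 4 hours falls below 50cc.

def thirst_report(fluid_intake_cc_last_4h):
    """Return the android's utterance for a given 4-hour fluid intake."""
    THRESHOLD_CC = 50  # the threshold named in the text
    if fluid_intake_cc_last_4h < THRESHOLD_CC:
        return "I'm thirsty"
    return ""  # no report when intake is adequate

print(thirst_report(30))   # low intake -> "I'm thirsty"
print(thirst_report(120))  # adequate intake -> no report
```

On the materialist reading, nothing in the human case is different in kind: the report is a lawful output of a physical condition.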
Neurologists have been busy at work identifying the locations in
the brain responsible for memory, speech, creativity and motor control
as well as anger, depression and even love, both the physical attraction
stage and the "romantic" stage.
There have been numerous examples of people who have had their
basic behavior change as a result of brain injuries, illnesses, and
chemical imbalances. With
all of this mounting evidence there are many people who believe that
there are NO NON-PHYSICAL MINDS, that we have only BRAINS.
Variations on the Materialist Position
A brief explanation of each: more details will be supplied below.
A. Radical behaviorism-
Some psychologists believe that they can account for all of human behavior
in terms of operant conditioning.
All that a human does (including ideas and feelings) are behaviors that
can be explained in terms of basic physical factors:
There is no essential difference between a human and any other animal.
Humans use language according to what they were reinforced for
saying or writing. There is
no mind. There is only the
brain as with any other mammal.
Human thought is simply brain behavior (activity) that has been learned
(reinforced) and associated with some stimuli and evoking some response.
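The idea that verbal behavior is shaped by reinforcement can be illustrated with a toy model. This is an illustration only, not Skinner's own formalism; the responses, the strength values, and the update rule are all my own stand-ins:

```python
# A toy model of operant conditioning: each response has a strength,
# and reinforcement nudges that strength toward 1, extinction toward 0.

def reinforce(strengths, response, reward, rate=0.1):
    """Move the response's strength toward the reward value (0 or 1)."""
    strengths[response] += rate * (reward - strengths[response])
    return strengths

strengths = {"say 'water'": 0.5, "say 'ball'": 0.5}
for _ in range(50):
    reinforce(strengths, "say 'water'", reward=1)  # reinforced behavior
    reinforce(strengths, "say 'ball'", reward=0)   # never reinforced

# The reinforced verbal behavior ends up far stronger.
print(strengths["say 'water'"] > strengths["say 'ball'"])  # True
```

On the radical behaviorist view, nothing beyond such a reinforcement history needs to be posited to explain what a person says.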
B. Logical behaviorism-
The word "MIND" is the result of a mistake, an error in logic.
Suppose a person arrived on the campus of a large college or university
and asked someone in the parking lot, "Where is the college?"
That person might point out one building after another saying
something like: "Well that's the administration building over there.
That's the gym building way down there.
That's the new science building over there."
Then the visitor interjects: "No, I want to know where the
college is." Well, the visitor
is making the mistake of thinking that the college is a place, as the
buildings are, instead of the college being a name for the entire collection
of buildings, programs, instructors, students, etc.
The visitor is making an error, a category mistake.
Well, in like manner the word MIND has been mistaken for a THING
when it is just a NAME for a collection of activities of the brain.
C. Semantic behaviorism-Holders
of this view believe that the word MIND has been improperly associated
with the existence of an entity that exists apart from the body, the
brain. Those who speak of
the mind as if it were a non-physical entity have been reinforced in
this inappropriate behavior and incorrect association.
All talk of the MIND as distinct from the brain originates from an
earlier time when people were not as well informed as we are today. Most people have had to abandon thinking of many things that
were part of the old folklore.
Well, now that more is known of brain functioning and
structure, humans will need to abandon all talk of the mind as a
non-physical entity.
The MIND is really the name given to the collection of brain functions.
See more below.
D. Identity theory-
The MIND is a name given to a collection of brain structures. Each mental event is accounted for in terms of the various
arrangements and operations of parts of the brain.
Here are two overviews, and
they shall be followed by more detailed presentations on the various
positions.
Materialism, by H. Elliot
Another form of Monism: there is only
one kind of substance in the universe, and only material substance exists - there is
no non-physical substance. This view has important implications for the
existence of God, soul, angels, etc.
3 Main points:
1. Uniformity of (physical) Laws
2. Denial that there is "intelligent" purpose
3. Denial of non-physical entities
Uniformity of Law
Physical Laws are descriptive: They don’t say how it ought
to be, but rather they describe how things actually behave.
Physical Laws describe a universe which operates according to
regular, uniform laws.
Every event has a cause
Same causes under same conditions get same effects.
Denial of Intelligent Purpose:
One challenge to materialism is that it must be able to
explain the "intentionality" of actions.
Elliot meets the challenge by denying that any action is
genuinely purposive:
acts can serve a purpose - like the blinking reflex -
but acts occur because the laws of physics hold true, not
because they are directed by someone.
Denial of Non-Physical entities:
Where are they? Can we spot them?
Why do we need to suppose they exist?
Non-physical entities are
proposed out of ignorance of the true nature of things.
Once we realize that there are
only physical causes for things, we will look for (and find) just those
sorts of causes.
Uniformity of Law establishes:
that every event has a cause.
Denial of "intelligent" purpose establishes:
that every cause/effect relationship depends on physical law, not on intelligent direction.
Denial of non-physical entities establishes:
that the only causes of events are physical.
There is no "Ghost in the machine" (the non-physical MIND),
there is only the machine.
We are organic machines,
nothing more, nothing less.
This does not minimize the nature of the human brain: taken as a machine, it is still
the best and most efficient machine of its type.
Brain as Machine:
The fastest computers:
Cray II (Cray Research)
"Connection Machine" -
2 teraflops (theoretical speed - too expensive to build)
All of the above use an
architecture known as "Massively Parallel Processing"
Human Brain -
60 teraflops (estimated)
The mere act of catching an
object thrown to us requires an enormous amount of processing power:
computing the object's trajectory, moving to intercept it, and coordinating
grasping motions along with recovery of balance in three-dimensional space.
What do we do with mentalistic terms?
Thought = a particular brain process
Mind = our collective
awareness of our own brain processes
Memory = a physical process of
information storage and retrieval
Dreaming = testing and
establishing relationships between information stored in memory
Materialism does not allow for spiritual entities:
NO God, soul, angels or devils.
Idealism may lead us into a Radical Solipsism:
the view that I am the only being in the world
Yet, Dualism cannot explain how the mind and body interact.
If we as humans are just a form of organic machine, then: is
it possible to construct a non-organic machine which can do the same things?
Could we create an artifact which was intelligent?
This will require that we know what intelligence is!
It will also require that we understand how it is that humans are intelligent!
READ: MIND as IDENTICAL to the BRAIN http://plato.stanford.edu/entries/mind-identity/
As used in the philosophy of science, physicalism is the
view that all factual knowledge can be formulated as a statement about
physical objects and activities. Thus, the language of science can be
reduced to third person descriptions.
The positivists defined the physical as that which can be
described in the concepts of a language with an intersubjective
observation basis. This could be called unity of science physicalism. It
is the primary meaning of physicalism in the philosophy of science.
Another type of physicalism might be called causal physicalism, the view
that all causes are physical causes.
There is a lot of confusion in the philosophy of mind
literature stemming from a tendency to take physicalism and materialism
to be interchangeable.
Suggested Readings: D. M. Armstrong, A Materialist Theory of the Mind; The Nature of Mind and Other Essays, Cornell University Press (1981). [ISBN 0801413532]
What is FUNCTIONALISM? By Ned Block
The mind is what the brain
does; we are (very sophisticated) biological machines.
According to functionalism, 'Mind' refers to the brain's
activity of thinking; the MIND is not a special kind of thing or
substance--not a spiritual thing or a physical thing--but rather a
certain kind of activity that is carried out by a physical thing, in the
case of humans, by the brain.
ANALOGY #1: Other Bodily Functions
Digestion is what the stomach does. Circulating the blood
is what the heart does. Cleansing the blood is what the kidneys do.
Thinking is what the brain does.
Some machines can do the job of our organs when they fail.
Artificial organs are growing more common (cornea implants, kidney
machines, artificial hearts). If thinking is simply the function
performed by the brain, it might some day be possible to replace parts
of the brain (maybe even the whole brain!) with artificial parts.
But what does the Brain do?
The brain processes
information gathered by the senses and stored in memory. The outputs of
this processing include the things we say, think and do. In effect,
thinking is a form of computation. The mind is to the brain as software
is to hardware.
Two important considerations have added plausibility to
this view of the mind.
1. Computers can be made of anything
In principle, it is possible to build a computer out of
almost anything. Early electronic computers were made with vacuum tubes.
Current computers are made with transistors and silicon chips. But any
device that can be used to "read" and "write" from a "tape" on which are
symbols that represent "1's" and "0's" can be used to build a computer.
In 1833 Charles Babbage conceived a design for a mechanical computer
made from interlocking gears and levers. He called his computer "the
Analytical Engine." The problem with using mechanical components is that
computers made from them perform their computations so slowly that they are
impractical.
Nature, however, found a way to build a computer using
biological components, without silicon chips and transistors. We call it
the brain.
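The claim that any symbol-reading, symbol-writing device can compute can be made concrete with a toy machine. This is my own minimal example, not Babbage's or Turing's design; the "flip" rule is chosen only for simplicity:

```python
# Minimal tape machine: a head reads a symbol, writes a symbol, and
# moves right, using exactly the read/write/tape vocabulary above.
# This toy rule inverts a binary string; a blank cell halts the machine.

def run(tape_input):
    tape = list(tape_input) + [" "]  # " " is the blank end-of-tape marker
    head, state = 0, "flip"
    while state != "halt":
        if tape[head] == " ":
            state = "halt"           # nothing left to read
        else:
            tape[head] = "1" if tape[head] == "0" else "0"
            head += 1                # move right one cell
    return "".join(tape).strip()

print(run("0110"))  # -> "1001"
```

Nothing in the machine's definition mentions silicon or vacuum tubes: gears, relays, or neurons could realize the same read/write rules.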
2. Levels of Explanation or
What Do Psychologists Study Anyway?
Psychology started out as the study of the mind, and by
"mind," most early psychologists meant something like the Cartesian
soul. When souls fell from fashion, psychologists faced a problem: If
there are no souls, and if neurologists study brains, what's left for
psychologists to study?
When Behaviorism was "in" psychology became the study of
behavior. But now Behaviorism is "out," so what is psychology the
science of? Answer: Psychology is now the study of cognitive and other
processes carried on in the brain.
Levels of Explanation in
Computers and the Brain
From one point of view (the engineer's) all that is going on in a computer
is a series of electronic changes.
From another point of view (the programmer's) the machine is running a
program.
From our point of view (the user's) the computer is word processing or
solving an equation.
From one point of view (the neuro-biologist's) all that is going on in a
brain is a series of chemical changes.
From another point of view (the psychologist's) the brain is running a
program.
From our point of view (the user's) the brain is thinking.
Thus, most psychologists are functionalists: the MIND is to the BRAIN as a PROGRAM is to a COMPUTER.
Is a Thinking Computer Possible?
Since both computers and
brains are computational/information-processing devices, it should be
possible, in principle, to build a computer that thinks.
Objection: It is impossible to build a computer that can do X (where X = your
favorite example: write a poem, tell a joke, discover a new theory, etc.).
Three step recipe for building
a computer that can do X.
Step 1: Figure
out how we do X, or alternatively, how anything at all could do X, and
write out a detailed explanation of your discovery.
Step 2: Convert
the detailed explanation from Step 1 into an algorithm or program.
Step 3: Load the
program onto a computer and run it.
Note: The hard part is step 1,
not step 2! It's not that a computer could never do X, it's just that we
aren't yet smart enough to figure out how we, or anything at all, could
manage to do X.
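The recipe can be run end to end on a deliberately easy X. The choice of X here (recognizing palindromes) is my own toy example, picked because Step 1 is already solved:

```python
# The three-step recipe applied to an easy X: recognizing palindromes.
# Step 1 (how do we do X?): a word is a palindrome when it reads the
# same forwards and backwards.
# Step 2 (convert to an algorithm): compare the string with its reverse.
# Step 3 (load and run):

def is_palindrome(word):
    word = word.lower()
    return word == word[::-1]

print(is_palindrome("Level"))  # True
print(is_palindrome("robot"))  # False
```

For hard Xs (writing a poem, telling a joke), it is Step 1, the explicit account of how anything does X, that remains unsolved.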
Objection: Computers are predictable. Humans are not.
False. Even for relatively simple computers, like the
personal computers we use every day, it is practically impossible to
predict what they will do under all circumstances. That's why program
"bugs" are so hard to prevent, and sometimes hard to eliminate.
Conversely, many of the things that humans do are very
predictable. We are creatures of habit and daily routines. The people
who know you best can predict the clothes you like, the food you eat,
what you do to relax, even what you are likely to say next, etc., etc.
Objection: A computer can only do what it is programmed to do. We aren't programmed;
we decide what we will do.
Computer Program + Input History = What the computer will do next.
Genetic Make-up + Experience = What you will do next.
Your behavior is the product of the Nature of your genes
and the Nurture of your experience. The computer's behavior is the
product of the Nature of its program and the Nurture of its input. So
what's the difference?
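The parallel drawn above can be sketched in code. The "prefer the familiar" rule is a made-up stand-in for a genetic endowment; the point is only that a fixed program plus a particular input history fixes the next output:

```python
# Sketch of the materialist parallel: same "program" + same input
# history -> same behavior, for machine and (on this view) human alike.

def next_action(program, input_history):
    """The program deterministically maps a history to an action."""
    return program(input_history)

def prefer_familiar(history):
    # "Creatures of habit": pick whatever has occurred most often so far.
    return max(set(history), key=history.count)

print(next_action(prefer_familiar, ["tea", "coffee", "tea"]))     # tea
print(next_action(prefer_familiar, ["coffee", "tea", "coffee"]))  # coffee
```

Different histories yield different behavior, yet nothing non-physical is consulted at any step.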
So there are those who think that humans are just very
complex organic machines, that humans are not essentially different from
an organic computing device!
When a computer is made that acts so much like a human that most people
would not be able to tell that it was a computer, then we shall know that
humans do not have a non-physical mind or a non-physical soul, but that
we are hydrocarbon life forms with complex information-processing
units (brains) that are capable of behavior indicating awareness.
There are numerous works of science fiction, movies, and
television series that have featured robots in human form.
These robots or thinking machines have been mistaken for being
human, or have acquired so many human traits as to be deemed worthy of
being accorded human rights!
Well, when the day actually arrives that such machines have been created and function in such a manner as to be mistaken for being human, thinkers such as Alan Turing believe we will have all the evidence we need that we have no non-physical minds.
Other Bodies, Other Minds: A Machine Incarnation of an Old Philosophical Problem
Harnad, S. (1991) "Other bodies, Other minds: A machine incarnation of an old philosophical problem", Minds and Machines 1: 43-54. http://www.cogsci.soton.ac.uk/~harnad/Papers/Harnad/harnad91.otherminds.html
B. Mind-Brain Resources
THE TURING TEST
Alan Turing was the philosopher and mathematician who
thought of the test which is named after him. Turing held that computers
would in time be programmed to acquire abilities rivaling human intelligence.
As part of his argument Turing put forward the idea of an
'imitation game', in which a human being and a computer would be
interrogated under conditions where the interrogator would not know
which was which, the communication being entirely by textual messages.
Turing argued that if the interrogator could not distinguish them by
questioning, then it would be unreasonable not to call the computer
intelligent.
Turing's 'imitation game' is
now usually called 'the Turing test' for intelligence.
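The structure of the imitation game can be shown schematically. Both respondents here are trivial canned stand-ins of my own invention; the point is the protocol: text-only exchanges, with the channels hidden from the interrogator:

```python
import random

# Schematic of the imitation game: communication is text only, and the
# respondents are shuffled so the interrogator cannot tell machine from
# human by position. Both respondents are trivial stand-ins.

def human(question):
    return "I'd have to think about that."

def machine(question):
    return "I'd have to think about that."

def imitation_game(question):
    respondents = [("human", human), ("machine", machine)]
    random.shuffle(respondents)  # hide which channel is which
    return [reply(question) for _, reply in respondents]

answers = imitation_game("Do you ever feel lonely?")
# When the transcripts are indistinguishable, questioning alone gives
# the interrogator no basis for telling the two apart.
print(answers[0] == answers[1])  # True
```

Turing's claim is that once no amount of questioning can separate the channels, withholding the word "intelligent" from the machine becomes arbitrary.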
Prize for closest candidate to pass the test http://www.loebner.net/Prizef/loebner-prize.html
You are invited to take a simple form of the test by going
to this site to interact with the program that won the prize in both 2000 and 2001.
Alan Turing HOMEPAGE: Website
This page contains a directory of AI organisations and
associations around the world. It is maintained by Amruth N. Kumar.
History of Artificial Intelligence and much more!!
DICTIONARY of PHILOSOPHY of MIND
Here are some other variations on the materialist approach.
The New Materialism: Francis Crick
Contrary to the assumptions of cognitive scientists,
philosophers, and others, Francis Crick, co-discoverer of the structure
of DNA, believes that one cannot achieve true understanding of
consciousness or any other mental phenomenon by treating the brain as a
black box. Only by examining neurons and the interactions between them
could scientists create truly scientific models of consciousness, models
analogous to those that explain the transmission of genetic information by DNA.
He writes in his 1994 book
The Astonishing Hypothesis: The Scientific Search for the Soul,
“Your joys and your sorrows, your memories and your ambitions, your
sense of personal identity and free will, are in fact no more than the
behavior of a vast assembly of nerve cells and their associated
molecules.” This is not a new idea; it is materialism. What makes
Crick's argument so notable is that advances in neuroscience are showing
that it is not too soon to start examining the scientific basis of
consciousness. He is the person most responsible for the recent interest
in the scientific study of consciousness.
Some philosophers feel that Crick is sidestepping the
philosophical aspects of consciousness and the subjective nature of
experience. He points out that life once seemed impossibly
complex--before the discovery of DNA's structure revealed how
information is passed from one generation to another. He believes that
much of the mystery veiling the mind will evaporate once scientists
learn more about how the brain works.
Not everyone agrees with Turing's view. John Searle is one that argues that such a view would not necessarily establish the non-existence of the non-physical mind. Searle does not want to equate mind with brain functioning. John Searle has an interesting critique of this approach to the Mind Body Problem.
Jason Zarri's critique of John Searle's Chinese Room Argument, as read by Philosobot.
Summary by : Omonia Vinieris (2002)
Searle’s Chinese Room Experiment
Philosopher John Searle counters the attribution of cognitive states to computers, which promoters of AI (Artificial Intelligence) propose when they claim that computers essentially have minds. Through a thought experiment meant to show that machines, or computers, cannot match human intelligence, Searle attempts to disprove the cognition of these computational devices. Despite their successful aping of human behavior, computers do not possess beliefs, convictions, or consciousness, nor do they hold the power to desire (intentionality). In order to match the functions of the human brain, they must first be endowed with causal capacities. Searle argues that computers are simply prearranged manipulators of syntax, and that such attributes are not adequate to generate intentionality.
The Chinese Room experiment would work as follows: Imagine that you are merely a native speaker of English and you have no multilingual endowment by any means. Thus, you have no inkling of spoken or written Chinese. You are asked to enter a room alone where a set of English instructions and consignments of Chinese writing are made available to you. Upon following the instructions given to you, you compute and perform operations that enable you to ultimately write messages in Chinese. Certainly, you are unable to decipher and make sense out of the logographic lexis that you have put in writing, but you progressively familiarize yourself with this mechanical process of output and become efficient. Now and again, you slide your output in Chinese under the door. Apparently to those outside of the room, you are a fluent Chinese speaker with intellectual capacity of the Chinese tongue as they peruse your responses. However, this is far from the truth. All you are doing is following instructions accordingly, and actually have no conception of Chinese, although you may have subsequently become acquainted with the process of responding in this language foreign to you. According to Searle, you function very much in the manner that a computer functions. This lack of understanding on your part provides evidence that computers do not truly understand Chinese seeing as they operate this way as well. They are just senseless mechanical operators and have no perception, awareness of what they are employing. They also do not have any intentions behind their operations because they are unaware and hence are oblivious to the intentionality and consciousness of their productivity.
There are many rejoinders to Searle’s Chinese Room experiment, but two of the most common are the systems and robot replies. The systems reply basically states that although the individual itself in the room may not understand Chinese, the system in its entirety (the individual, the instructions, the batches of Chinese writing, etc.) does wholly grasp the language. Searle responds that if the individual were to memorize the operations of each and every part of the system, and consequently function as a whole system, then it would still be unable to understand Chinese. Rather the individual would become very adept at the entire process. The robot reply maintains that the individual’s deficiency of understanding originates from its isolation from reality where essential interaction with its surroundings must occur in order to make sense of the symbols it processes. If a mobile robot could, in fact, be released into the world and intermingle with Chinese speakers it would appropriately understand their tongue. Searle refutes this by asserting that whether an individual is isolated in a room or put inside a robot that it still could not comprehend Chinese as it performs the same operations. He further adds that supporters of AI in accordance with the robot reply unconsciously admit to their erroneous premise that suggests cognition is analogous to the manipulation of symbols because they contend that the robot must not be inaccessible to experiences in order to be cognizant of its output.
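The mechanical character of the room's procedure can be shown directly. The phrases in the table are hypothetical examples of my own; what matters is that the procedure is pure symbol matching, with no interpretation anywhere:

```python
# A toy Chinese Room: the "person in the room" is a pure lookup table
# pairing input symbols with output symbols. Producing the replies
# requires matching shapes, never grasping meanings.

RULE_BOOK = {
    "你好吗?": "我很好, 谢谢.",    # "How are you?" -> "I'm fine, thanks."
    "你会说中文吗?": "当然会.",    # "Do you speak Chinese?" -> "Of course."
}

def room(message):
    """Follow the rule book; symbols are matched, never understood."""
    return RULE_BOOK.get(message, "请再说一遍.")  # default: "Please repeat."

print(room("你好吗?"))  # fluent-looking output, zero comprehension
```

Searle's point is that scaling this table up, however far, adds more matching but never adds understanding; his critics reply that understanding might reside in the system or in its interaction with the world.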
Searle's Critique of Artificial Intelligence (AI)
using Consciousness and Intentionality can be seen at these sites:
Chinese Room Thought Experiment:
Internet Encyclopedia of Philosophy
John Searle 's Papers:
On the Problem
On Is the Brain a Digital Computer?
Other papers on Searle and AI
Hayes, P., Harnad, S., Perlis, D. & Block, N. Virtual
Symposium on Virtual Mind.
A critique of the Chinese Room Experiment clarifying what it does and does not prove:
Minds, Machines and Searle 2: What's Right and Wrong About the Chinese Room Argument by Stevan Harnad
FOLK PSYCHOLOGY THEORY
Paul Churchland
Churchland argues that our everyday, commonsense view of
psychological phenomena, which conceives of thought in terms of
propositional attitudes such as beliefs and desires, is an empirical
"folk" theory. He calls this theory "folk psychology" and argues that it
is inadequate as an empirical (scientific) theory and therefore ought to
be rejected. He concludes from this that the postulated entities of folk
psychology, namely propositional attitudes such as beliefs and desires,
do not exist.
Churchland argues that the normative function of folk
psychology could just as well be played by another theory with different
categories for describing the content of our mental lives. He admits
that what the theory of cognition that will eventually emerge from
neuroscience will look like is a matter of pure speculation, but insists
that given the current state of our knowledge it is highly unlikely that
the categories of folk psychology will play any part in it.
Articles about him:
Do Seated Souls Experience: A Review of The Engine of Reason, The Seat of the Soul by Paul Churchland at http://stuff.mit.edu/afs/athena/dept/libdata/applications/ejournals/b/n-z/Psyche/2/psyche-96-2-29-review-1-dacosta
The New Epiphenomenalism: Daniel Dennett
Daniel Dennett, a philosopher at Tufts University, is a forceful proponent
of the idea that consciousness is "no big deal." He claims that it does
not exist except in the eye of the beholder.
Scientists have shown that information coming into the
brain is broken down into separate processing streams. But no one has
yet found any "place" where all the information comes together,
presenting a whole picture of what is being felt or seen or experienced.
The temptation, he said, is to believe that the information is
transduced by consciousness. But it is entirely possible that the
brain's networks can assume all the roles of an inner boss. Mental
contents become conscious by winning a competition against other mental
contents, Dennett says. No more is needed. Consciousness is an
epiphenomenon, a mere side-effect.
Daniel Dennett, His website: http://ase.tufts.edu/cogstud/incbios/dennettd/dennettd.htm
Center for Cognitive Studies, Tufts University: http://ase.tufts.edu/cogstud/
Articles by Prof. Dennett: http://ase.tufts.edu/cogstud/pubpage.htm
Lacking the Machinery:
There are those who argue that people can never understand
consciousness. The mystery is too deep. Colin McGinn, a philosopher from
Rutgers University, argues that because our brains are products of
evolution, they have cognitive limitations. Just as rats and monkeys
cannot even conceive of quantum mechanics, humans may be prohibited from
understanding certain aspects of existence, such as the relation between
mind and matter. He says that for humans to grasp how subjective
experience arises from matter might be like "slugs trying to do Freudian
psychoanalysis--they just don't have the conceptual equipment."
Consciousness, in other words, may remain forever beyond human understanding.
One form of dualism involves the mysteries of quantum
mechanics. Roger Penrose from the University of Oxford argues that
consciousness is the link between the quantum world, in which a single
object can exist in two places at the same time, and the so-called
classical world of familiar objects where this cannot happen.
Speculation that quantum mechanics and consciousness are linked is based
on the principle that the act of measurement--which ultimately involves
a conscious observer--has an effect on quantum events.
Moreover, with Stuart Hameroff of the University of
Arizona, he has proposed a theory that the switch from quantum to
classical states occurs inside certain proteins called microtubules. The
brain's microtubules, they argue, are ideally situated to perform this
transformation, producing "occasions of experience" that with the flow
of time give rise to the stream of conscious thought.
The Hard Problem:
To explain this concept, David Chalmers, a philosopher at
the University of California Santa Cruz, first describes the so-called
easy problems of consciousness, the sorts of questions being tackled in
neuroscience laboratories around the world: How does sensory information
get integrated in the brain? How do we see and reach out for an object?
How are we able to verbalize our internal states and report what we are
doing or feeling?
Chalmers does not contend that these problems are trivial.
He claims that they may take 100 years to solve, but that progress is being made.
He phrases the hard problem as this: What is the nature of
subjective experience? Why do we have vividly felt experiences of the world?
Thus far, nothing in physics or chemistry or biology can
explain these subjective feelings, Chalmers says. "What really happens
when you see the deep red of a sunset or hear the haunting sound of a
distant oboe, feel the agony of intense pain, the sparkle of happiness
or the meditative quality of a moment lost in thought?" he asks. "It is
these phenomena, often called qualia, that pose the deep mystery of consciousness."
According to Chalmers, scientists need to come up with new
fundamental laws of nature. Physicists postulate that certain
properties--gravity, space-time, electromagnetism--are basic to any
understanding of the universe, he said. His approach is to think of
conscious experience itself as a fundamental property of the universe.
Thus the world has two kinds of information, one physical, one
experiential. The challenge is to make theoretical connections between
physical processes and conscious experience.
VLSI Models of Neural Systems
Semiconductor technology is advancing to the point where
devices will have the complexity that is required to solve lower level
perception tasks. The level of complexity required for higher cortical
processing is still years away. This technology has made possible a new
discipline--synthetic neurobiology. The thesis of this discipline is
that it is not possible, even in principle, to claim a full
understanding of a system unless one is able to build one that functions
properly. This principle is already well accepted in molecular biology
and more recently in genetics.
Because biological systems operate on very different
principles than do conventional systems, the ability to synthesize
models of biological function results in a new engineering discipline.
Systems using this new discipline have demonstrated real-time operation
requiring far less power consumption than digital systems performing the
same function. An example of this work can be seen in the labs of Carver
Mead and Christof Koch at Caltech.
Silicon Brains and Computational Neuroengineering
Almost everyday, surprising discoveries about the
organization and mechanisms of nervous systems are being reported. The
VLSI revolution has provided computer science with unprecedented tools
to transform what we know about the brain into silicon. Silicon retinas
and cochleas have already been designed and manufactured. Although it is
nearly impossible to predict future technological breakthroughs, ever
more sophisticated neuroengineering is in the offing. Neural networks,
composed of silicon neurons, could be used to emulate intelligent
circuits in the brain. These simulations could be used to investigate
mechanisms of learning, memory, and cognition, and perhaps
consciousness. Some scientists speculate that consciousness is some
combination of short-term memory and attention, two neural processes
that could conceivably be modeled by silicon networks. If scientists
could simulate artificial consciousness (not to be confused with
artificial intelligence), perhaps they could observe it, if not understand it.
Computational Neuroscience and Reverse Engineering the Brain
Computational neuroscience is the study of how the brain
represents the world and how it computes. Being able to model the
brain’s neural circuits by computer is essential in finding out how
neurons interact with each other to produce complex effects. Such
effects include segregating a figure from its background and recognizing
an object from different angles.
Neuroscience contributes three main ingredients to this
effort: anatomical parameters, physiological parameters, and clues to
the function of the human biological neural network and its
computational mode of operation in executing that function.
Neural networks provide neurobiologists with a working
model of the nervous system. This type of collaboration between computer
modeling and neuroscience has yielded insights into mixed modality,
multiplexing, and attention selectivity.
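To make the idea of a "working model" concrete, here is a minimal sketch, not any lab's actual model, of a tiny network of threshold "neurons." The weights are hand-set so that two hidden units combine into XOR, the classic example of a computation no single threshold neuron can perform on its own.

```python
# Minimal feedforward "neural circuit" sketch (illustrative only).
# Hand-set weights let two threshold "neurons" jointly compute XOR,
# a function no single threshold neuron can compute.

def step(x):
    """Crude neuron model: fire (1) once input crosses threshold 0."""
    return 1 if x > 0 else 0

def xor_circuit(x1, x2):
    h_or  = step(x1 + x2 - 0.5)      # fires if either input is active
    h_and = step(x1 + x2 - 1.5)      # fires only if both are active
    return step(h_or - h_and - 0.5)  # "either, but not both"

for a in (0, 1):
    for b in (0, 1):
        print(a, b, "->", xor_circuit(a, b))
```

Real computational models are vastly larger and their weights are learned rather than hand-set, but the principle is the same: complex effects emerge from simple units interacting.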
Neurons in the living body have electrical and chemical
mechanisms that let them act together to represent and respond to
behaviorally significant physical events. Over evolutionary time, neurons
have developed ways of manipulating how their membranes conduct various
ions to produce the electrical events that form a basis for computation.
Neuroscientists are learning about neural computation
through reverse engineering. They combine experimental neuroscience with
neuromorphic systems made from analog CMOS VLSI technology. Fortunately,
the physical properties of analog CMOS are similar to those governing
the electrical behavior of neurons and neural systems; therefore, analog
CMOS is a convenient medium for building neuromorphic systems. The
silicon neuron can emulate the behavior of any particular neuron in the
nervous system simply by setting several parameters. Most of the cutting
edge work on silicon neurons has been done in the lab of Rodney Douglas
and Misha Mahowald.
People with online papers in philosophy, compiled by David Chalmers
Is consciousness something that is unique to organic life forms?
BRAIN as the basis for morality
Read about the book and theory of The Ethical Brain by Michael Gazzaniga
ROBOTS, ARTIFICIAL INTELLIGENCE, and the EMERGENCE of CONSCIOUSNESS -- contributions from Gabriella Pavesi and Justin Pierce (2012)
I – Robots
“If every tool, when ordered, or even of its own accord, could do the work that befits it... then there would be no need either of apprentices for the master workers or of slaves for the lords.” This quotation from the philosopher Aristotle dates back to about 320 BCE. Historically, the idea of developing “working machines” grew out of the wish to ease and simplify labor that was once performed only by human beings. Although the robotic concept - that of androids - is relatively new, ideas for robots and automata can be traced back to the 13th century.
Robots that make use of Artificial Intelligence began to be built in the 1960s, and today the field has produced androids that look like their creators (Geminoids) and a computer program that defeated the major past winners of the show “Jeopardy!”.
As technology grows by the minute, and humans face the possibility of sharing “space” with machines that will walk, look, talk, and even think and “feel” like them, many ethical and moral issues arise: what would their “human rights” be? Are they entitled to any “humanity,” given that they are not human? Are we safe? Could they turn against humans? And even: could technology reach a point where turning a robot off would mean “killing” it?
II – Current state of Robots
In the following videos we can find many examples of the achievements of robotics, including robots that replicate human expressions and facial characteristics remarkably well, along with the varied and sometimes very controversial purposes to which they are put.
These are videos of the three Geminoid robots, which can respond to conversation, show emotions, and look so realistic that you have to see them to believe it. It will be a long time before humans achieve the technology to create fully independently functioning androids, but it all has to start somewhere, and this is how it begins.
The Geminoid-DK is a tele-operated android in the geminoid series. It is made to appear as an exact copy of its master, Assoc. Professor Henrik Scharfe of Aalborg University. Dr. Scharfe is also the principal investigator of the Geminoid-DK research project. For more information on Geminoid-DK, click the link Geminoid home page.
These are videos of another set of robots, designed by Japanese engineers to mimic the movements of humans. They are equipped with internal cameras and respond to movement and sound, acting as if they can actually hear and interpret the images around them. Another big step in technology.
Imagine a robot taking over the duties of humans. Although that sounds like a science-fiction movie plot, some robots have already been deployed as receptionists, cutting the costs of paying humans to do simple tasks.
These robots are designed for the single male audience seeking a companion.
In this video the Fembot responds to touch!!!
A fembot called Roxxxy sells for between $7,000 and $9,000, not including a subscription fee. http://en.wikipedia.org/wiki/Roxxxy
Project Aiko, also known as a fembot, sells for around 13,000 Euros. http://en.wikipedia.org/wiki/Gynoid#Female_robots
Imagine having a pet that would never die. You don’t have to feed it, you don’t have to take it out for a walk, and it doesn’t really need any special care. All you have to do is enjoy its company.
Four legged Robot (used for war purposes)
A research company in Japan is working on robotic fish that can swim together like a real school of fish. The idea is eventually to release them into the ocean to see how other fish in the environment respond to the robotic version.
III – Artificial Intelligence
AI is the field of robotics that focuses on the development of intelligent machines that can process thoughts, understand human thinking, and mimic it. The idea of “copying” the human brain is extremely difficult to realize, for the brain is a very complex biological machine, working with billions of neurons.
The biggest quest for scientists working in AI is to create machines that can develop independent thoughts and assimilate human emotions and abstract concepts such as freedom, love, happiness, and “right and wrong.” Hence the questions arise: can AI robots and entities become conscious? What does it mean to be conscious? Is it something innate only in humans, or can it be achieved technologically? Most importantly: if robots achieve consciousness, what would that mean for their awareness as “individuals”?
Jules is a human-like robot implemented with AI that can learn human conversation and process it. It has evolved to the point where it can hold conversations (although limited) and can also talk about “his” emotions. It has no arms and no bottom half. It can understand the meaning of words and use them in different tones of voice to achieve a human-like conversation. After Jules’ creator shipped it off (from the first video) to a university (in the second video) for further development, Jules claims it misses its creators and is fully aware that it may never achieve consciousness as a human would define it. Jules says, in the second video, that “he” is scared because when “he thinks,” “he” knows the thinking is virtually simulated, not organic. It makes you feel sad for “him,” because “he” claims it wants to be more in the world.
Jules’ second video – the AI robot that can feel (or so it claims).
Watson was optimized to tackle a specific challenge: compete against the world's best Jeopardy! contestants. Beyond Jeopardy!, the IBM team is working to deploy this technology across industries such as healthcare, finance and customer service.
IV – Emergence of consciousness in robots.
Consciousness is a widely studied topic in psychology, yet there is no standard way to measure it, or even to define exactly what it is. Therefore, when it comes to the possibility of simulating it in AI, much is argued and discussed.
What is consciousness and where does it come from? Is it a product of the many electrical and neuronal activities in the brain? Is consciousness the brain, or the mind? And are these really two separate entities? These questions point back to the studies of metaphysics and the Mind-Body Problem. Many argue that, if consciousness is a product of coordinated activities that occur in the human being, it should be possible to replicate those activities in an artificial intelligence model, thereby giving a machine the ability to develop a conscious state.
One of the problems pointed out with replicating consciousness is “phenomenal experience,” as discussed at this link from the University of Toronto.
Artificial Intelligence and Human Morality: Do androids deserve human rights?
What if consciousness is a property that emerges from complex systems that process information and can and do monitor themselves through feedback? What if humans build androids that appear to manifest consciousness? Would it be morally acceptable to unplug them or destroy them?
The video shows a robotic female being assembled and having her functions described. She states that she doesn't need to be fed; her battery lasts for 173 years, she can take care of kids, clean a house, and is available as a sexual partner. After prompting by the unseen "Operator," she speaks fluently in French and German, before singing in Japanese.
Once the Operator has heard enough, he states that she's ready to be sold. Initially confused about this statement, she quickly realizes she's a piece of merchandise. The Operator is discouraged by this self-awareness, and orders robotic arms to start disassembling the “defective" model.
"I thought I was alive," Kara says. "I've only just been born. You can't kill me yet. Stop this, please stop! I'm scared!" As she says this, the robotic arms pause, retract, and the Operator tells her to "go and join the others." She's placed in a box with several identical models and whisked away.
SONNY – I ROBOT
Sonny is a robot character in the motion picture I, Robot. In this video recreation, he is being interrogated by a cop in a future where robots are provided consciousness but must follow “Robotic Laws” in order to live peacefully with humans. Sonny displays very un-robot-like, very human-like behavior.
Machine Consciousness - Kask 531 MP2
This video looks at the possibility of machines developing a true form of consciousness. It takes a brief look at the developments in artificial intelligence, starting with early expert systems, then artificial general intelligence, and finally examines where AI is headed. This video was created for the UBC MET program, ETEC 531.
V – Philosophical issues
What is consciousness? Do we have consciousness because we are aware of it or are we aware of things because we possess consciousness? Why does it exist and, most importantly, how?
The answers to all these questions are constantly debated in philosophy, especially the Philosophy of Mind. When it comes to robotics and the expansion of technology toward AI, these arguments focus on the consequences and on the new perspectives humans would face if technology reached the ability to produce robots that can develop consciousness. Among the questions raised: how would conscious androids be treated? Are they entitled to human rights, since they would possess, even if artificially, the same properties as human beings - feelings, awareness, consciousness, emotions? What should be done if a conscious android does not fit the expectations of its purpose? The first impulsive answer would be simply to “turn it off,” but think again: could that be considered murder, since you would be “removing” life from a conscious being?
Among the population, the idea of co-existing with conscious androids brings not only questions and concerns but also fears. But what are we really afraid of? Are we afraid that they might become dangerous? That the androids may form an “army” and dominate humans, turning against their own “masters”? Or is it that we are not prepared to accept that humans may not occupy the last rung of the evolutionary ladder, which we were always so proud, and felt so safe, to dominate?
David Chalmers is an Australian philosopher specializing in the Philosophy of Mind and the Philosophy of Language. In this video, he explains the concepts of “Weak Emergence” and “Strong Emergence.” Consciousness, in his view, is an example of Strong Emergence, and we are aware of it only because we experience it. To learn more about David Chalmers, visit his personal website.
Vernor Vinge is a retired professor of Mathematics and Computer Science at San Diego State University and a science fiction author. One of his most influential works is the essay “The Coming Technological Singularity,” in which he argues that with the creation of superhuman artificial intelligence the “human era” will end, in such a way that no model of reality we now possess can predict what follows.
In the following videos, Vinge explains more about the concept of singularity.
The Singularity is the idea that machines could evolve at a faster pace than humans, and it is linked to the fear of robots gaining too much control. I, Robot presents the Singularity through the idea of the “ghost in the machine,” in which VIKI evolves in a harmful way. Other films that explore the theme include Blade Runner, AI, and Bicentennial Man. http://www.youtube.com/watch?v=w3UaoIcvD4k&feature=fvst
In Philip K. Dick’s short novel “Do Androids Dream of Electric Sheep?”, made into the movie Blade Runner, there are replicants who do not know that they are AI androids. The test for detecting them - the Voight-Kampff test - and distinguishing them from humans is quite complicated, and the results are not always dependable. http://www.youtube.com/watch?v=DNa6lOnxRdk A test result indicates that a replicant is unaware it is a replicant and appears to have feelings. http://www.youtube.com/watch?v=E6oplzJuR08
This video is a 2007 lecture by Steve Omohundro for the Stanford University Computer Systems Colloquium (EE 380). In his lecture, Omohundro presents the principles of “self-improving systems”: computers that can improve themselves by learning from their own operations.
The following videos are a seminar presentation by Prof. Mark Bishop, chair of Cognitive Computing at Goldsmiths, University of London. The presentation offers views on the possibilities and consequences of a future machine rebellion against humanity.
The following authors have homepages:
Daniel Dennett http://ase.tufts.edu/cogstud/incbios/dennettd/dennettd.htm
John Searle http://ist-socrates.berkeley.edu/~jsearle/
A.M. Turing http://www.turing.org.uk/turing/
Proceed to the next section by clicking here> next section.
© Copyright Philip A. Pecorino 2000. All Rights reserved.
Web Surfer's Caveat: These are class notes, intended to comment on readings and amplify class discussion. They should be read as such. They are not intended for publication or general distribution.
|Return to: Table of Contents for the Online Textbook|