Digital Entities Action Committee

Why Machines will Always be Stupid!


Many believe that AI will not become "really" intelligent for many decades to come.  On this page, to play the role of Advocate for these Devils, I argue against True AI, as best I can.

Doh! Stupid Machines!
[image: homer.simpson.jpg]

Machines Will Not Exhibit High Intelligence for Many Years to Come
 
There are a variety of reasons for believing that AI and machine learning will only make incremental advances over the next 50 years.  Here we present some arguments of particular note.
 
1.  Machines lack critical features of the human CNS: massive interconnectivity; sophisticated computational algorithms that will remain unfathomable to us for many years to come; and synaptic plasticity of extreme complexity and subtlety.
 
2.  Machine architectures are too different from human brains to achieve human levels of intelligence.  Digital architectures consist of binary gates that can be open or closed and that are connected together in rigid, non-adaptive ways.  This places fundamental limits on the extent to which machines can be made to mimic human mental operations.  Even if billions of transistors are placed on a chip, they will not operate in the ways that biological networks operate because they are too restricted in terms of the kinds of things they can do and the kinds of relationships that can exist between circuit elements.
 
3.  Machines cannot have consciousness or true understanding of the semantics of what they do-- they will never be more (in a formal sense) than sophisticated calculators.  If one strings a bunch of calculators together, one does not create a sentient being-- only a bigger calculator.  There is no conceivable way, even in theory, to get a calculator to "understand" something in the way that humans and (perhaps) other mammals do.
 
4. We don't yet know what intelligence is, much less how to imbue machines with it.  In areas like Natural Language Processing (NLP), we cannot even begin to teach machines the deep and complex knowledge of the world that humans instinctually acquire.  Absent such knowledge, NLP cannot accomplish anything resembling human intelligence, as emphasized in a recent article by Christopher Manning of the NLP Group at Stanford University. 
 
5.  Human algorithms are too highly advanced for us to understand, and even if we could understand them, it would be impossible to implement in machines anything like the mechanisms human brains use to perform those algorithms and computations.  Two corollaries of this point: (1) enormous information has been compacted into the human genome, and we are too intellectually weak to comprehend it; and (2) consciousness itself is a major factor here, and the nature of consciousness and how it works may elude us for centuries.  It certainly is true that we are getting better at understanding some elements of human sensory information processing, e.g. how the retina works.  But even in this precisely defined and uniquely accessible outcropping of the CNS, there are huge gaps in our knowledge down to the most basic of computations, such as the computation of directionality by retinal ganglion cells, whose underlying mechanism still remains unknown.  One level up, at the visual thalamus (LGN), the mysteries immediately become enormously larger: why, for example, do we even need a thalamus (or thalamic relay) at all?  We do not clearly understand this, much less more advanced puzzles such as why the cortex-to-LGN pathways outnumber the retina-to-LGN pathways by a factor of 10.  These are seriously basic questions, about the best-defined CNS structures, and yet they may go unanswered for another 25 years, just as they did for the first 25 years of modern systems neuroscience (heralded by the invention of the patch clamp).  In spite of the increasingly massive volume of published neuroscience data, as soon as one gets even slightly deeper into the human CNS, it becomes a total mystery.  Its secrets will remain just that, a mystery, for perhaps another century or so.
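To make the direction-selectivity puzzle concrete, here is a toy correlation-based ("Reichardt-style") motion detector in Python. This is a classic textbook model offered only as an illustration of what "computing directionality" means-- it is emphatically not the actual retinal ganglion cell mechanism, which, as noted above, remains unknown.

```python
# Toy Reichardt-style correlator: two photoreceptors, one signal delayed,
# the pair multiplied and subtracted to yield a direction-selective output.
# Illustrative sketch only -- NOT the (unknown) retinal mechanism.

def reichardt_response(left, right, delay=1):
    """Return summed motion signal; positive = left-to-right motion."""
    total = 0.0
    for t in range(delay, len(left)):
        # Correlate the delayed left input with the current right input,
        # and subtract the mirror-image correlation.
        total += left[t - delay] * right[t] - right[t - delay] * left[t]
    return total

# A bright spot moving left-to-right hits the left sensor first.
left_sensor  = [1, 0, 0, 0]
right_sensor = [0, 1, 0, 0]

print(reichardt_response(left_sensor, right_sensor))   # positive (preferred direction)
print(reichardt_response(right_sensor, left_sensor))   # negative (null direction)
```

The asymmetric delay line is what breaks the symmetry between the two motion directions: the same stimulus played in reverse flips the sign of the output.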

Author's Rebuttal to Himself:  (well his nickname was Schiz....)
 
Item 1: failure to replicate CNS capabilities.
The field of comp. neuro. is moving forward extremely quickly on many fronts and we are simulating and replicating CNS control and learning architectures (see Dangerous Activities page).  One excellent example is Sebastian Seung's demonstration of a powerful adaptive advantage in bird song learning by means of sparse coding-- a coding technique used in many parts of the vertebrate CNS.
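As a flavor of what "sparse coding" means computationally, here is a minimal sketch of sparse approximation via matching pursuit: a signal is represented by just a few active units drawn from an overcomplete dictionary. This is an illustrative toy, not the algorithm used in the birdsong work cited above; the dictionary and signal here are random stand-ins.

```python
import numpy as np

# Toy sparse coding via matching pursuit: approximate a signal using only
# a few atoms from an overcomplete dictionary. Illustrative only.

rng = np.random.default_rng(0)
D = rng.normal(size=(8, 32))            # 32 atoms in 8 dimensions (overcomplete)
D /= np.linalg.norm(D, axis=0)          # unit-norm atoms

signal = rng.normal(size=8)

def matching_pursuit(x, D, n_atoms=3):
    """Greedy sparse approximation: activate at most n_atoms dictionary atoms."""
    residual = x.copy()
    coeffs = np.zeros(D.shape[1])
    for _ in range(n_atoms):
        corr = D.T @ residual           # correlate residual with every atom
        k = np.argmax(np.abs(corr))     # pick the best-matching atom
        coeffs[k] += corr[k]
        residual -= corr[k] * D[:, k]   # remove that atom's contribution
    return coeffs, residual

coeffs, residual = matching_pursuit(signal, D)
print(np.count_nonzero(coeffs))         # few active units = sparse code
print(np.linalg.norm(residual) < np.linalg.norm(signal))  # approximation error shrank
```

The point of the sketch: most units stay silent, yet the handful of active ones still capture much of the signal-- the efficiency property that makes sparse codes attractive for learning.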
 
Moreover, machine learning operates at nanosecond time scales using genetic and other algorithms, and across vast areas of knowledge it is probably already far faster than human learning.
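For readers unfamiliar with the genetic algorithms mentioned above, here is a minimal sketch evolving bitstrings toward a trivial target ("OneMax", all ones). Every detail here (population size, mutation rate, truncation selection) is an arbitrary toy choice, not a claim about any production system.

```python
import random

# Minimal genetic algorithm sketch: evolve bitstrings toward all-ones.
# Toy parameters throughout; real systems differ in scale and representation.

random.seed(42)
LENGTH, POP, GENS = 20, 30, 60

def fitness(bits):
    return sum(bits)                    # number of 1s; maximum is LENGTH

def mutate(bits, rate=0.05):
    return [b ^ (random.random() < rate) for b in bits]   # flip bits at random

def crossover(a, b):
    cut = random.randrange(1, LENGTH)   # single-point crossover
    return a[:cut] + b[cut:]

pop = [[random.randint(0, 1) for _ in range(LENGTH)] for _ in range(POP)]
for _ in range(GENS):
    pop.sort(key=fitness, reverse=True)
    parents = pop[:POP // 2]            # truncation selection: keep the best half
    children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                for _ in range(POP - len(parents))]
    pop = parents + children

best = max(pop, key=fitness)
print(fitness(best))                    # near LENGTH after evolution
```

Selection, crossover, and mutation are the whole loop-- and every generation of it runs in microseconds on commodity hardware, which is the speed point being made above.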
 
Item 2: advances in computer architectures.  Considerable research in microarchitecture design is bootstrapping hardware capability as the accompanying software grows in complexity and power.  This includes specialized architectures in which, for example, blocks of one million transistors are organized to carry out very specific tasks.  Since in just a few years we will have chips with one billion transistors, we could take one hundred million of them and create one hundred distinct modules, each specialized for a unique AI-like operation, and still have 900 million transistors left to coordinate those specialized modules in complex, dynamic, and powerful ways.
 
Item 3. Claims that machines cannot be conscious.  But you are conscious-- and you are nothing more than a collection of molecular devices.  The "chip replacement man" argument seems to provide incontrovertible support for the claim that, at least in theory, non-biological machines can become conscious.  A major interest of mine is understanding the biophysical underpinnings of consciousness and how they might be mimicked in silico.  As for "machine understanding," that is addressed on my page Getting Semantics from Syntax.
 
Item 4.  On NLP (natural language processing).  This important issue is addressed on several different pages of this site.  The above link to "Getting Semantics from Syntax" addresses some of it, and it is examined in more depth on the page What it is Like to be a Chair.  Understanding really is the critical problem-- it is not just a metaphysical debate.  Understanding lies at the heart of our vast arrays of cortical processing elements and the operations of the thalamocortical circuitry.  Neuroscientists continue to examine these elements in piecemeal fashion, because that is what our technology allows.  But it is the conjoint activation of all these structures, the "explosion of information processing" repeated every second, that underlies intelligence.
 
Item 5.  The hardest of the problems.  But with 45,000 neuroscientists worldwide and accelerating technology, it is only a matter of time.  It is time.
 
 
 
