Machines Will Not Exhibit High Intelligence for Many Years to Come
There are a variety of reasons for believing that AI and machine learning will only make incremental advances
over the next 50 years. Here we present some arguments of particular note.
1. Machines lack critical features of the human CNS, such as massive interconnectivity, sophisticated
computational algorithms that will remain unfathomable to us for many years to come, and forms of synaptic
plasticity that are extremely complex and subtle.
2. Machine architectures are too different from human brains to achieve human levels of intelligence.
Digital architectures consist of binary gates that are either open or closed and that are connected in rigid,
non-adaptive ways. This places fundamental limits on the extent to which machines can be made to mimic human mental
operations. Even if billions of transistors are placed on a chip, they will not operate in the ways that biological
networks operate because they are too restricted in terms of the kinds of things they can do and the kinds of relationships
that can exist between circuit elements.
3. Machines cannot have consciousness or true understanding of the semantics of what they do -- they will
never be more (in a formal sense) than sophisticated calculators. If you take a bunch of calculators and string
them together, you will not create sentient beings -- you will only create bigger calculators. There is no conceivable
way, even in theory, of getting a calculator to "understand" something in the way that humans and (perhaps) other
animals do.
4. We don't yet know what intelligence is, much less how to imbue machines with it. In areas
like Natural Language Processing (NLP), we cannot even begin to teach machines the deep and complex knowledge of the world
that humans instinctually acquire. Absent such knowledge, NLP cannot accomplish anything resembling human intelligence, as
emphasized in a recent article by Christopher Manning of the NLP Group at Stanford University.
5. Human algorithms are too highly advanced for us to understand, and even if we could understand them, it would
be impossible to implement in machines anything like the mechanisms human brains use to perform their algorithms
and computations. Some corollaries of this point are (1) that an enormous amount of information has been compacted into
the human genome, and we are too intellectually weak to comprehend it, and (2) that consciousness itself is a major factor
here, and the nature of consciousness and how it works may elude us for centuries. It certainly is true that we are getting better
at understanding some elements of human sensory information processing, e.g. how the retina works, but even in this precisely
defined and uniquely accessible outcropping of the CNS, there are huge gaps in our knowledge down to the most basic of computations,
such as the computation of directionality by retinal ganglion cells, whose underlying mechanism remains unknown.
Once you go one level up, to the visual thalamus (LGN), the mysteries immediately become enormously larger -- for example,
why we even need a thalamus (or thalamic relay) at all. We do not clearly understand this,
much less more advanced questions, such as why the cortex-to-LGN pathways outnumber the retina-to-LGN pathways by a factor of 10.
These are seriously basic questions about the best-defined CNS structures, and yet these questions may go unanswered
for another 25 years, just as they did for the first 25 years of modern systems neuroscience (heralded by the invention
of the patch clamp). In spite of the increasingly massive volume of published neuroscience data, as soon as one
gets even slightly deeper into the human CNS, it becomes a total mystery. Its secrets will remain just that -- a mystery -- for
perhaps another century or so.