It has been argued that machines just process syntax and so can never truly "understand"
things. Now, since we are machines (even if created by an Intelligent Designer), we just process syntax in our
mindless automaton states. This is true, and note that there is no place in our neural networks where "free will"
intervenes to alter synaptic connections or ion channel dynamics. We are not predictable, because
there are countless stochastic aspects to our neural development and functioning, but we most certainly are computational
devices-- ones that write poetry, make love and war, and care for our young and our planet.
Maybe the machines that we will create in our image will be "better" than
their creators. Maybe they will love and care for us. But let's not count on that quite yet, OK?
So let's consider how we will make these digital gods (and goddesses!) and
let's consider how we might rein them in.
The Semantics from Syntax debate hovers around the issue of neural
structures giving rise to linguistic meaning, and while this is a really interesting aspect of our minds and consciousness,
it misdirects the debate about how we understand the world. Other animals understand their worlds in ways that are both
different from and similar to ours, and they do so with little or no linguistic skill. Moreover, many of our deepest thoughts
and insights are non-linguistic in nature. The purpose of language is to attach labels to things and draw relationships
between them. Words are these labels and linkers. The ability to form words into sentences constitutes a
very interesting evolutionary step taken by some hominid ancestor's brain: this ability enables us to say things like E = mc^2
and allows us to investigate the role of ApoE alleles in Alzheimer's disease. This is an incredibly powerful
system. But it is also a narrow, highly specialized system, and it does not reflect our more general information-processing
capabilities. What the linguistic system is, however, is a big part of some Master Control Program
that sits on a cortical perch and sifts through our past to interpret our present and plan our future. What it is not is the answer to how we get semantics from syntax.
The semantics arises from the interrelationships of specific syntactic (and synaptic) patterns to other syntactic (and synaptic)
patterns, processed in a hierarchy like that discussed for visual cortex (see Subconscious Info Processing page). The
linguistic labels are only one kind within a larger vocabulary of "neural words", like those fuzzily defined "words" described
by Jeff Hawkins in his book "On Intelligence". It is the creation of these neural words, and the links between
them, that gives meaning to such events in the world as a doorbell ringing vs. a smoke detector beeping.
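As a minimal sketch of this idea (a toy illustration only; all the names and events below are invented for the example), a "neural word" can be pictured as a node whose "meaning" is nothing more than its acquired links to other nodes:

```python
# Toy associative network: a "neural word" is just a node, and its
# "meaning" is the set of other nodes it has become linked to
# through experience. This is an illustration, not a brain model.
class NeuralWord:
    def __init__(self, name):
        self.name = name
        self.links = set()  # associations acquired through experience

def associate(a, b):
    """Patterns that co-occur become linked (Hebbian-style)."""
    a.links.add(b)
    b.links.add(a)

def meaning(word):
    """A signal 'means' something via what it is linked to."""
    return sorted(w.name for w in word.links)

doorbell = NeuralWord("doorbell-ring")
smoke    = NeuralWord("smoke-alarm-beep")
visitor  = NeuralWord("visitor-at-door")
danger   = NeuralWord("fire-danger")

associate(doorbell, visitor)   # experience: ring, then a visitor appears
associate(smoke, danger)       # experience: beep, then smoke and alarm

print(meaning(doorbell))  # ['visitor-at-door']
print(meaning(smoke))     # ['fire-danger']
```

On this picture the two beeps "mean" different things only because experience has wired them to different consequences.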
We consider the neural basis of extracting semantics on the Subconscious Information Processing page. But the bottom line is that a signal
"means" something if there are links from that signal to other things. Our "experiencing" the meaning of objects and
signals depends on consciousness, but there is no reason for consciousness to exist-- we can, in principle, get along perfectly
well by just processing relevant information from the world, gaining experience and establishing the necessary linkages
to extract meaning/semantics from those experiences-- these meanings then become critical in helping us to make good
decisions, from a Darwinian perspective. Such decisions might well be based on Bayesian inference or other algorithms,
but the important point is that there is no step at which consciousness or free will is required. In one sense, neither
linguistics nor semantics has anything to do with consciousness. We do not require semantics or linguistics to have
consciousness. Nor do we require consciousness in order to have semantics or linguistics. Consciousness seems
to be something deeper and probably a lot older than language. This is a topic for another page, but for more on
why semantics precedes language, follow the link below to our page where we discuss how an infant gets the semantics she needs
from her emerging world.
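The Bayesian inference mentioned above can be sketched in a few lines (a toy illustration; the priors and likelihoods are invented numbers, not data):

```python
# Toy Bayesian decision: which event best explains a heard "shrill beep"?
# All probabilities here are invented for illustration.
priors = {"doorbell": 0.7, "smoke_alarm": 0.3}       # P(event)
likelihood = {"doorbell": 0.2, "smoke_alarm": 0.9}   # P(shrill beep | event)

# Bayes' rule: P(event | beep) is proportional to P(beep | event) * P(event)
unnorm = {e: likelihood[e] * priors[e] for e in priors}
total = sum(unnorm.values())
posterior = {e: p / total for e, p in unnorm.items()}

best = max(posterior, key=posterior.get)
print(best, round(posterior[best], 3))  # prints: smoke_alarm 0.659
```

Even though doorbells are the more common event here, the shrill quality of the signal shifts the posterior toward the rarer but better-fitting explanation, and a good (Darwinian) decision follows without any appeal to consciousness.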
For More on this Topic go to: Subconscious Information Processing
Letter to Christopher Manning, Asst. Prof., CS&Linguistics, Stanford University
Dear Dr. Manning,
I read with great interest your 2006 article on NLP (specifically on Logical Textual Inference/RTE1). As a systems/computational
neuroscientist, I am interested in parallels between neural and digital information processing, and am especially interested
in issues of "machine understanding". In this vein, I wholeheartedly agree that textual inference is fully intertwined with
world knowledge. Also, I was surprised and pleased to see your emphasis on evaluating KR&R in the context of "raw sensory
stimuli". This brings me to the point of my email, which is how computational devices (like us) get semantics from syntax.
"Semantics comes from experience", which is saying something more (I think) than "just take care of the syntax". Specifically,
it explains how my two small children went from being averbal to understanding things. I could go on and on about this (and have),
but the bottom line is that dogs, chimps and people all get semantics from the relationships between different syntactic streams
and synaptic events. While this point is not made **explicitly** in Jeff Hawkins's book "On Intelligence", he does emphasize
the importance of invariant representations in primate cortex, and he calls these neural representations "words", which are
both sent to higher centers and shared with neighboring cortical modules. But these are not **linguistic words**, they
are neural codes or "neural words", as I prefer to label them. The language part gets added on later (in both an evolutionary
sense and a human development sense). The smell of bacon means something, and it would mean just the same thing to me even
if a stroke rendered me wholly alinguistic (assuming a stroke could do so without disrupting my concept of "frying bacon", which
is an open question). The point here is that we should at some point be able to imbue computers with equivalent understanding
and meaning, by implementing in them the same learning algorithms. We can then give them equivalent language-processing skills,
and they can then label their "neural words" with linguistic tags. [Of course, I have just trivialized an entire field of study
here; no disrespect intended! And these things probably need to occur in parallel as well.] My interests are in
the kinds of digital architectures that would make this happen, and I am indeed concerned about machines that could learn
everything that we learn over our first 5 years of life, in perhaps their first 5 minutes of life/experiencing/learning/practice
time (this is perhaps inevitable, given that machine processors already run ~6 orders of magnitude faster than our brains).
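As a back-of-envelope check on that "5 minutes" figure (a rough calculation that ignores everything but the raw ~10^6 speed ratio):

```python
# Rough arithmetic: 5 years of human learning compressed by a machine
# running ~6 orders of magnitude faster than a brain.
minutes_per_year = 365 * 24 * 60          # 525,600 minutes
human_minutes = 5 * minutes_per_year      # ~2.6 million minutes of experience
speedup = 10**6                           # ~6 orders of magnitude

machine_minutes = human_minutes / speedup
print(round(machine_minutes, 2))          # ~2.63 machine-minutes
```

So at face value the speed ratio alone would compress five years of experience into a few minutes, though of course learning is limited by far more than processor clock rate.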
Eric Baum's book "What Is Thought?" has some interesting ideas along a slightly different tangent, and although it tells
us relatively little about thought or consciousness (as far as I am concerned), it has some REALLY interesting ideas about
how evolution created brains that learn things so "quickly" (or maybe "so easily" is a better term). I hope that these
ideas are of interest to you (whether original or not) and if you care to comment, I would be very interested in your views
and suggestions as to how I might further advance research along these trains of thought.
Best Regards, Don O'Malley, 617-373-2284