The page title is a marginal pun playing off Thomas Nagel's famous paper "What
Is It Like to Be a Bat?", referring to the flying kind, not the wooden kind.
But this page is about neither that paper nor flying mammals -- it is about chairs and intelligence, about the
issue of how you know a chair when you see it and why Stupid Machines (doh) still cannot do this trivial task as well as my
6-year-old, or even my 4-year-old (who only got to jabbering relatively recently).
The story goes (as of this writing, April 2006) that you can interface any given AI machine to a camera, let
it look into rooms, and it will have a really hard time telling
you whether there is some kind of chair in the room -- arm chair, bench, desk chair, sofa, or stool (-- yes,
the wooden kind! ...I still get to see more diapers than I would like). David Kaeli (ECE, NU) told me long
ago about work in the areas of NLP and machine vision, and about strategies; he is interested in the idea that our visual processors
are still so much better than machines' (and hopes that might be exploited in some interesting way). But the issue
here is not machine vision, it is object classification. And this
problem, well, it's a real bitch!
It would seem a trivial thing -- you tell your AI that a chair is something that
people sit on, but when you do, it concludes that cars, grass, and floors are chairs, and so it goes (or so it
would go if'n you had an AI that was smart enough to understand your first statement/directive).
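The overgeneralization above can be made concrete with a toy sketch. Everything here -- the objects, the single "people sit on it" feature, the rule itself -- is made up purely to illustrate why a one-line definition of "chair" sweeps in far too much:

```python
# Toy illustration (hypothetical objects and features) of why the rule
# "a chair is something people sit on" fails as a definition of "chair".

def is_chair_naive(obj):
    """Naive definitional rule: anything people sit on counts as a chair."""
    return obj["people_sit_on_it"]

objects = [
    {"name": "desk chair", "people_sit_on_it": True},
    {"name": "car seat",   "people_sit_on_it": True},
    {"name": "grass",      "people_sit_on_it": True},
    {"name": "floor",      "people_sit_on_it": True},
    {"name": "lamp",       "people_sit_on_it": False},
]

chairs = [o["name"] for o in objects if is_chair_naive(o)]
print(chairs)  # the rule happily classifies car seats, grass, and floors as chairs
```

The rule is not wrong about chairs; it is wrong about everything else, which is exactly the trap the text describes.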
So this stupid question embroils us in many different things: knowledge representation, NLP, machine vision, pattern
recognition, and more. And the reason this problem is so hard
is not that it is technically difficult; it is that we do not know where to start.
Ahh, but we do! We start out as infants, then toddlers, then pre-schoolers. The secrets lie in their little
minds, because they already beat AIs cold in everything from chair recognition to moving chess men around on a chess board
(the manual task, not the game-playing task). And because their minds are so small next to ours, we blindly think there
is something simple about them, in the same way we unintentionally pooh-pooh our own subconscious activities.
But here is where it all comes together: awakening
from sleep. The cortical modules all get reconnected (both "modules" and "connected"
are probably bad, misleading words). Once reconnected,
everything works: the machine vision, the pattern recognition, linking context and structures, accessing
vast experience stores, and making a decision. Likely this is all sublinguistic, and your conscious self
knows nothing about any of this (for the most part and for most of the time). If in this "explosion of computation",
in this massively parallel and massively hierarchical and recursivearchical maze of prediction, assessment, and determination,
you find something that seems like a chair but are not sure (akin to rivalrous interpretations, as they call them
in visual psychophysics), then your "consciousness" gets involved. Well, your consciousness was involved at the
outset, when you made a conscious, vertebrate decision to count chairs in the picture above (maybe; the issue
of conscious vs. subconscious decisions gets a bit fuzzy here). Just the same, you "know" what "chair" means, and you let subconscious
processes do the rest, until they came up with "four". "Chair" is just a tag for a whole plethora of your personal experiences,
and your brain quickly homed in upon the similarities between the designer chairs and perhaps simpler classic wooden
desk chairs, and it was trivial to see that there were 4.
The specific role of conscious IP (info processing) in the problem may seem
rather trivial: for example, almost at the same moment that you looked at the picture/question above (how many chairs?),
you immediately saw 4 chairs, and may then have asked yourself, "what's the catch?". THEN you might have looked
a bit more closely at some fine structural details (to see if there were smaller representations of chairs, or perhaps a small
chair hidden under one of the larger chairs), and then you probably concluded that this was a stupid question because
the answer was so obvious at the outset (which it was; I see only 4 chairs; the photo is just one picked at quasirandom; they're
some designer chairs by one R. Henry, apparently).
So the roles of consciousness to this point might seem two-fold:
(1) First you decided (probably in the briefest of instants) to count the chairs in the picture (maybe because your
subconscious had already counted them and knew it was a trivial task, which made it a trivial decision to make!).
(2) Then, suspecting trickery on my part (of which I am entirely innocent!), you may have looked again, with conscious IP
perhaps playing two roles at once: you decided to look again, and you also may have consciously focused upon the finer details of the chairs, to find possibly hidden tiny
chairs, with the subconscious motivation of a possible internal reward or splash of dopamine for having done
a superior job of counting (which reward deprivation explains your angst at there being only 4 chairs -- stupid question!).
So here we encounter again the idea that consciousness is a "vertebrate
decision making" device, possibly evolved exclusively in the vertebrate evolutionary lineage. And this may
seem off topic, because what we are interested in is how we can make machines that are slightly less stupid
than today's machines -- i.e., how can we get a machine to perform the trivial task of saying whether or not there is
a freakin chair in a scene!!!
But it is not off topic, for three reasons:
(1) Your machine has to make a decision: Yes or No! So however human consciousness might contribute to making
a decision, it is deeply relevant.
(2) We need to understand the raw material that went into making that decision: the pattern processing, experience checking,
and all that. And most importantly,
(3) How do the raw information materials (the processed visual signals, prior experiences, linguistic tags, etc.) interact
with the decision-making device that is ultimately used to make the decision? How are subconscious and conscious activities
interfaced to confirm to your satisfaction that there are 4 and only 4 chairs?
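The handoff sketched in (1)-(3) -- a fast automatic pass that counts, and a slower deliberate pass invoked only when something is ambiguous -- can be caricatured in code. This is my own toy framing, not a model anyone has proposed: the scene, the `looks_like_chair` scores, and the thresholds are all invented for illustration.

```python
# Toy caricature (hypothetical scores and thresholds) of the
# subconscious/conscious handoff: a cheap automatic pass counts the
# obvious chairs and reports a confidence; a slower "conscious" recheck
# runs only when ambiguous objects (rivalrous interpretations) remain.

def fast_subconscious_count(scene):
    """Cheap pattern-matching pass: count obvious chairs, flag ambiguity."""
    obvious = [o for o in scene if o["looks_like_chair"] > 0.9]
    ambiguous = [o for o in scene if 0.4 < o["looks_like_chair"] <= 0.9]
    confidence = 1.0 if not ambiguous else 0.5
    return len(obvious), confidence, ambiguous

def deliberate_recheck(count, ambiguous):
    """Slow 'conscious' pass: examine each ambiguous object individually."""
    return count + sum(1 for o in ambiguous if o["looks_like_chair"] > 0.6)

def count_chairs(scene, threshold=0.9):
    count, confidence, ambiguous = fast_subconscious_count(scene)
    if confidence < threshold:   # unresolved ambiguity: escalate to the slow pass
        count = deliberate_recheck(count, ambiguous)
    return count

scene = [
    {"looks_like_chair": 0.95},  # clearly a chair
    {"looks_like_chair": 0.97},
    {"looks_like_chair": 0.93},
    {"looks_like_chair": 0.70},  # ambiguous: triggers the deliberate recheck
    {"looks_like_chair": 0.10},  # clearly not a chair
]
print(count_chairs(scene))  # → 4
```

The design point is the interface question (3) raises: the expensive pass is gated by the cheap pass's own confidence report, rather than running on everything.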
In theory, we do not need consciousness to do any of this, and yes, it may all be an illusion and delusion. But epiphenomenon
or not, the neural events that occur WHEN we are conscious are terrifically important, because we make very poor decisions when we are knocked out or asleep. For example, when
you fall asleep at the wheel, your sleeping brain can make a very stupid decision to drive your car head-on into
a tree! [...ok... this argument will drive scholars of "decision making" nutso, but that is their decision to be nutso].
But if the clack-clack-clack of a highway border strip wakes you up, along with a little helpful boost of adrenaline
from some subconscious (but active only during waking?) fear-generating mechanism, then you will make a FAR
better decision: "Oh, I should probably continue driving on the road!"
Consciousness is a way of bringing together all these modules, of utilizing incomprehensibly
vast amounts of information and experience, and of deciding how to focus our neural efforts in specific ways. And in
humans it does this to make decisions in ways that ants and fishes cannot (even if they have little tiny consciousnesses).
This human mechanism requires sleep to continue operating as it does in other mammals (and
perhaps fishes). This is the challenge for neuroscientist and AI-monster creator alike: understand what this process
is and how it works. Part of me does not wish to get any closer to solving this problem, and I have probably said too
much as it is; provided too much help to those engaged (like me) in dangerous activities. Yet, there are many people
much smarter than I trying to build these monsters, so I deem it better to just slightly entice you with the prospects
of how true AIs might be built, so that you appreciate the imminence of the problem. I think we have very little
time to prepare, in spite of the "enormous obstacles" that remain: these
obstacles may be no more substantive than the perceived obstacles to "the cloning of mammals". In 1995 everyone
"in the know" knew it was impossible to clone a mammal-- then in 1996 Dolly the Sheep
was born at the hands of Ian Wilmut-- an agrarian God. Are you playing God with machines? If you are,
then Einstein was wrong: God does play Dice--and is gambling with the future of the human universe. Please don't roll