Digital Entities Action Committee

Water Molecules and Consciousness
The Philosophical Basis of Machine Consciousness

[Figure: conscious.figures.jpg]

Free DE-consult: Identify the person who said this: "...lecturing to rows of fools who couldn't BEGIN to understand my work!"  Send your answer to: zfhindbrain@msn.com
 

H2O.    Are Water Molecules required for Consciousness?

All conscious beings (that we know of) have brains made mostly of water, organized into bundles of cables (axons) and processing units (dendrites, nerve cell bodies). Within these animals, consciousness is indeed dependent upon ongoing neuro-electrical activity: the firing of action potentials. Yet just what it is about these electrical activity patterns that actually gives rise to your being a conscious being remains quite mysterious. Sejnowski, Churchland, Edelman and others can say all they want about electrical activity in neuronal ensembles being "correlated" with conscious experience, but such correlations have essentially nothing to do with the so-called "hard" problem of consciousness.

Let's be clear: we are not talking here about arousal, or attention, or being self-conscious. We are talking "dead or alive" in your mind. At night your mind is dead; when you wake up, it comes alive. Period. This is the "hard" problem of consciousness, and all the "neural correlates of consciousness" research is just beating around the bush. No matter how many electrodes you stick in a monkey's brain, and no matter how many recordings you acquire at once, you cannot explain why such electrical patterns yield the conscious experience that you are having right now. Because you are reading this, you MUST BE conscious. You are in the world of sentient consciousness, and if someone sneaks up behind you and whacks you on the head with a lead pipe, BAP, you check out of the conscious world, maybe for a short time, maybe forever; but while you are checked out, you have no way of knowing which scenario is correct. [Query: there seems to be an important link between our "sense of time passage" and consciousness, although the nature of this link is still too fuzzy.]

So what does this have to do with Digital Entities? Nothing and everything. DEs do not need to be conscious to be malevolent, all-powerful creatures. But if they are not conscious, then we will not respect them in the morning, or at any other time of day. If they are conscious (or at least can convince us that they are), then we would take them oh so much more seriously. And yes, we should take them more seriously when they tell us that they "get" the phrase "I think therefore I am". Human consciousness gives us abilities vastly beyond what unconscious humans can do. We don't quite understand why or how this is, but the neural-correlates-of-consciousness clique is doing something extremely important: they are laying the groundwork for understanding, in biological systems, the architectural infrastructure from which consciousness seems to emerge. They will help us define the threshold level of complexity, both organizational and operational, that one must attain to enter the conscious world. There is no reason to assume that molecules of H2O (or tubulin) are somehow necessary for the generation of conscious experience. Electrical activity does seem to be quite important, and computers have lots of that. The biggest ones run at 100 teraflops-- a speed that would melt our brains if we performed floating-point operations at that rate. Our neurons fire 0.1 action potentials per second, on average, working at sub-teraflops speeds, I'd guess. [Note: this is wild conjecture at this point, since I haven't yet figured out how to relate neural computations and action potential firings to flops, but it might be correct if, e.g., each 1 msec bin of time represented a bit of information in which a given neuron signaled either one or zero, depending on whether or not it fired an action potential in that time bin.]
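
      To make the bracketed note concrete, here is a minimal back-of-the-envelope sketch in Python. The neuron count, the one-bit-per-1-msec-bin bookkeeping, and the notion that one bit (or one spike) is loosely comparable to one machine operation are all assumptions made purely for illustration; as the note says, no principled mapping between neural events and flops has been worked out.

    # Back-of-the-envelope arithmetic for the bracketed note above.
    # Every number here is an explicit assumption, not an established fact.

    NEURONS = 86e9            # assumed human-brain neuron count (rough textbook figure)
    BIN_SEC = 0.001           # the 1 msec time bin proposed in the note
    AVG_FIRING_HZ = 0.1       # the "0.1 action potentials per second" figure from the text
    MACHINE_FLOPS = 100e12    # the "100 teraflops" supercomputer figure from the text

    # (a) Raw bookkeeping: one bit per neuron per 1 msec bin, whether or not it fires.
    bin_bits_per_sec = NEURONS / BIN_SEC              # ~8.6e13 bits/s

    # (b) Counting only the spikes actually fired at the quoted average rate.
    spikes_per_sec = NEURONS * AVG_FIRING_HZ          # ~8.6e9 spikes/s

    print(f"(a) one bit per 1-ms bin : {bin_bits_per_sec:.1e} bits/s "
          f"(~{bin_bits_per_sec / MACHINE_FLOPS:.2f}x the machine, if 1 bit ~ 1 op)")
    print(f"(b) spikes actually fired: {spikes_per_sec:.1e} spikes/s "
          f"(~{spikes_per_sec / MACHINE_FLOPS:.5f}x the machine, if 1 spike ~ 1 op)")

      Reading (b) is the one closest to the "sub-teraflops" guess above; reading (a) shows how sensitive the comparison is to which conversion one assumes.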
 
What is Consciousness?
       A relatively small fraction of humans appreciate how mysterious the experience of consciousness is.  Most humans do not "get" the conundrum that this "mysterious flame" poses to thoughtful men.  Many intellectuals dismiss the problem out of hand.  You know that your experience of consciousness depends on your being awake (let's skip dreaming for the moment).  This experience is missing if you are knocked out, dead asleep, or just plain dead.  If you know anything about neuroscience or the brain, you presumably believe that consciousness has something to do with ongoing neural activity (no activity, no consciousness).  Pretty banal stuff, but tell me one thing: are my pet zebrafish conscious?
      The first problem is that there is nothing you can do to either prove or disprove your conjecture that my zebrafish are conscious entities.  You can devise many experiments to evaluate their sleep/wake states and their EEG patterns, but none of this tells you anything directly about what it is that they are experiencing.  We are quite certain that their experiences differ from ours, but exactly what fish, rats or monkeys actually experience cannot be determined firsthand.  You cannot provide any compelling real (feasible) or even theoretical experiment that will determine whether or not they are conscious.  So what?
        So the next problem is that we do not need consciousness at any point up the evolutionary ladder!  You may presume that ants (even if they are good learners and have complex social structures) are mindless automatons.  They do not have the luxury of consciousness, nor the freedom to choose which path to follow as they leave their nests to search for food.  [Of course, some of you may believe that they are conscious; no matter.]  The problem is, they are just axons, dendrites, synapses and patterned neural activity-- and so are we: nothing more, nothing less, in a pure biophysical sense.  Thus, you cannot (unless you're a god-fearing man) say anything at all about the point at which conscious beings emerge from automaton ancestors. 
        Well, this is true, metaphysically speaking, but as a practical matter we follow Turing's "polite convention" and presume our fellow human beings to be conscious (some barely so, like W.).  And we might argue that things with sleep/wake cycles (like zebrafish!) qualify for inclusion in the club of "probably sentient beings".  But can a machine be conscious?

THE CHIP REPLACEMENT BABE

         Some have argued by fiat that machines cannot be conscious-- they are calculators, nothing more.  Certainly, one can make a similar argument regarding ants-- simple computational devices, nothing more.  And then, at SOME POINT going up the evolutionary ladder, you have to draw a line between automatons and sentient beings.  Well, this is a bit of a problem (no pun intended).  We are computational networks, and because consciousness is so damned mysterious, we can say almost nothing substantive about the question of "machine consciousness".  One argument that can be made (and has been made) is that if we replaced your neurons one at a time with "functionally perfect" replicas-- machine replicas, without biological tissue-- you would never notice.  Your brain would be completely replaced (one neuron at a time) with equivalent digital devices that had the same inputs, outputs, and adaptive and stochastic capabilities.  This experiment is vastly beyond any present or imagined technology, but in theory it could be done, and you would presumably remain a conscious, sentient being throughout the procedure: you would become a conscious machine.  One might counter this by arguing that at some point in the chip-replacement process your consciousness would just blink out, and you would indeed become a mindless automaton-- but this automaton would be indistinguishable from the conscious you, which is a bit of a conundrum.  Moreover, we know that consciousness, whatever it is, depends acutely upon ongoing electrical patterns and rhythms-- disrupt those and... bye-bye consciousness.  If your electronic brain exhibited exactly those same electrical patterns and rhythms, it too should be conscious.  Also, in specific regard to the chip-replacement "babe": if she were really hot, you might find her desirable, chips or no chips.
 
We venture into this philosophical bog of consciousness for 2 main reasons:
(1) People will likely take machine intelligence much more seriously if they think that machines have our cognitive and conscious experiences and abilities, and
(2) The issue of conscious vs. subconscious information processing seems to figure into considerations of what makes humans so smart (relative to slugs and worms), as discussed in the pages on Consc. vs. Subconsc. Info. Proc and Getting Semantics from Syntax.
      Of course it is possible that simple metazoans like jellyfish have some kind of consciousness (I think not, since our "sleeping spinal cord" is not conscious), and if so then the issue of machine consciousness is perhaps not such a big deal.  But viscerally, it is a big deal, and our brains' operations make things "visceral" because they ARE important-- to us.
 
        Consciousness might be widespread in animals, but be "more apparent" in animals that sleep-- purely because they do sleep.  In this vein, sleep may be an evolutionary adaptation that enhances learning and memory consolidation, i.e., sleep is a vertebrate invention that transiently disables consciousness so that we can carry out various neural housekeeping activities that are essential to higher cortical functions.  An alternative idea is that consciousness "appears" only in the phylogenetic lineages of animals that exhibit true sleep states.  In this vein, the neural processes ongoing during consciousness might deplete our brains in some way that then requires sleep for rejuvenation.
        Both versions fit with the idea that sleep serves the purpose of consolidating new memories. We cannot both be storing new memories and consolidating recent ones: during sleep we disable the "sequential-event memory process"-- that is, we disable consciousness-based memory archiving-- while we carry out tasks that consolidate these more ephemeral archives, perhaps akin to backing up your computer. 
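
      Purely to make the backup analogy explicit (a toy sketch, not a claim about how brains actually implement this), imagine a memory system with two modes: awake, in which new events are appended to an ephemeral buffer, and asleep, in which new archiving is refused while the buffer is consolidated into a long-term store.

    # Toy illustration of the sleep-as-backup analogy above.
    # A metaphor made executable; not a model of real memory consolidation.

    class ToyMemory:
        def __init__(self):
            self.recent = []       # ephemeral "sequential-event" archive (awake only)
            self.long_term = []    # consolidated store
            self.awake = True

        def experience(self, event):
            """Record a new event; only possible while awake."""
            if not self.awake:
                raise RuntimeError("asleep: sequential-event archiving is disabled")
            self.recent.append(event)

        def sleep(self):
            """Disable new archiving and consolidate the ephemeral buffer."""
            self.awake = False
            self.long_term.extend(self.recent)   # the "backup" step
            self.recent.clear()

        def wake(self):
            self.awake = True

    m = ToyMemory()
    m.experience("saw a zebrafish")
    m.experience("read about the hard problem")
    m.sleep()    # consolidation happens here; new experiences would raise an error
    m.wake()
    print(m.long_term)   # ['saw a zebrafish', 'read about the hard problem']

      The point of the toy is only that the two operations are mutually exclusive by construction: while the buffer is being consolidated, nothing new can be archived, which is the trade-off the paragraph above is gesturing at.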
      Missy's brain warp:  Yet a 3rd alternative (idea from Melissa Borla McElligott) is that we are totally conscious while we sleep, but just have zero memory of it (REM aside).  If we are indeed conscious during this process, we have no recollection of being conscious.  While stimulating, this idea is, I suspect, incorrect, given the increasing evidence that cortical information processing is "fragmented" during sleep-- stuff still gets processed locally, but sleep somehow isolates cortical modules from one another (even while increasing synchrony).  This cortical isolation is most intriguing for two reasons:  (1) whatever it is that causes widespread module-disconnection also enables the sleep-supported processes that we need to survive, and (2) given that this disconnection renders us unconscious, it must somehow be disabling some biophysical process that is immediately antecedent to consciousness.  The "neural correlates of consciousness" gang may howl that I cribbed their views and am now advancing a very similar view (introduced to me by an MGH resident, Mashour?), but I see the "neural correlates" as reflecting an "attentional spotlight" that is distinct from the core consciousness-generating mechanism.  Nonetheless, I would agree that there is some profound linkage between this generating mechanism, the cortical neural correlates, and, additionally, details in the operation of the thalamocortical circuitry.  So what does this have to do with AIs and DEs?
 
      Well, for starters, only animals with advanced learning and memory skills get to do this: i.e., animals that are suitably advanced to check out of the net for some shut-eye.  It seems, based on sleep research in zebrafish, that essentially all vertebrates sleep.  My guess is that the octopus should also sleep, given its 300 million neurons (vs. the 150,000 in the larval zebrafish, which sleeps).  But this might not be true if the process of evolving a "memory disabler" and/or consciousness is quite difficult and happened only once in Earth's evolutionary history.  This might all seem remote from DEs, but it figures into DEAC's concerns in 2 ways:  (1) the evolving of consciousness in digital environments might be rare or commonplace, and (2) the relationships between learning, memory and intelligence, and the ultimate capabilities of systems built along these lines, might be amplified by a better understanding of how our conscious and subconscious systems interact and of what exactly it is that consciousness brings to the table: how does it contribute to our being the most dominant and destructive life force on the planet?
 
-- chip replacement Man (or Babe): whether you argue that it is or is not conscious, you will be arguing something profound!

            Reference: Colin McGinn, The Mysterious Flame.
He does a very good job of explaining why prominent neuroscientists have goofed on the hard problem, and why the hard problem is actually a problem (and a hard one at that). But then, at the very end, he quite curiously makes a huge blunder in assuming that only biological (water-based) entities can be conscious. Nonetheless, a great primer on the problem for the neophyte, and a good refresher if you need one.  Has good leads to deeper studies. 
 
side notes:
   1.  Turing Test and McGinn: the Turing Test is neither necessary nor sufficient for consciousness.  While I agree with McGinn, and have always been dubious of the Turing Test, finding it to be seemingly trivial to the point of being silly, I have a lurking suspicion that I am missing something profound here, if for no other reason than that I am quite reluctant to dismiss a major conclusion of Alan Turing -- the man who saved us from Nazi domination, at least until George W. seized power.
  2.  Not sure if we mentioned yet that the vertebrate invention may have been sleep, not consciousness.  We can turn ourselves off, something that invertebrates maybe cannot do.  We are less than the lowest of vertebrates when we shut off our cognitive engine.  Or maybe not: maybe sleeping humans are more cognitively powerful than "waking insects".  We can respond to a variety of stimuli and perhaps perform more sophisticated processing of such information (when asleep) than might an insect in its "regular cognitive state".  Indeed, this fits with the MGH guy's arguments about local cortical sensory processing, which may be fragmented (by the sleep-inducing process), yet far more powerful than an insect's capabilities.  As to why we DO NOTHING when we are sleeping, this might be explained by deliberate inhibition of motor programs during sleep, since sleepwalking and sleep-driving are quite dangerous.  Of course, people do walk in their sleep, and, with the help of Ambien, they can drive in their sleep and do many other things.  This is thus consistent with consciousness being a "vertebrate invention", and sleep also being a "vertebrate invention" that is needed to reset certain resources that are NEEDED for consciousness, e.g., silent synapses or some other sleep-generated neural or cognitive resource. 
       In this vein, we can better understand what consciousness is by better describing what it is that sleep generates that is required for consciousness.  This could be silent synapses or other "stuff" that enables the disjoint cortical modules of the sleeping brain to come together into a cohesive assembly. 

Query: Is consciousness just experience?
     We cannot know directly what other beings experience, simply because it is their experience, not ours.  So consciousness can be defined as "what an animal experiences".  Or "what a machine experiences".  Or "what a rock experiences".  So this is a philosophical dead end, even though the question does seem to hit on something fundamental about consciousness.  Perhaps a "temporal sequence of experiences" would be a better definition?  We can argue that rocks and plants do not have a sense of time (even though plants certainly have temporal behaviors).  When unconscious, we do not have a sense of time.  So we might understand more about consciousness by exploring this sense of time-- and this can be done experimentally, perhaps.
