I have studied the science of consciousness, particularly GWT, and its influence on constructing system 2 AI. The remaining perspective I haven’t examined is AI scientists’ ideas on consciousness. I asked the question: if system 2 AI is successful, could it be used to simulate consciousness? I already have a preliminary answer that it couldn’t, because consciousness is a broader phenomenon that includes, but is not limited to, system 2 reasoning. But I was curious to hear what Yoshua Bengio had to say at ASSC26:
Unexpectedly, this presentation is really about offering one answer to the hard problem of consciousness. It started with Bengio’s usual talk about the gap between GPT-4 and AGI, the dichotomy of system 1 and system 2, and the GWT inspirations that led to GFlowNet. Then he talked about his new paper “Sources of Richness and Ineffability for Phenomenally Conscious States”. The core idea is “contractive dynamics”, which describes the brain state as an enormous vector of neuron activities that moves from one attractor to another over time, or is in transition toward one. These attractors have low energy and are stable. They are mutually exclusive, and the set of attractors is discrete, even though the state space of the brain is continuous.
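The discrete-from-continuous behavior of contractive dynamics can be sketched numerically. This is my own toy illustration, not code from the paper or the talk: the 2-D state, the Gaussian energy wells, and the attractor locations are all assumptions for demonstration, whereas the real proposal concerns an enormous neural state vector.

```python
import numpy as np

# Toy sketch of contractive dynamics (my illustration, not the paper's code):
# the state is a continuous vector, but gradient descent on an energy
# landscape with a few Gaussian wells settles it into one of a small,
# discrete set of stable low-energy attractors.
ATTRACTORS = np.array([[2.0, 0.0], [0.0, 2.0], [-2.0, -2.0]])  # assumed locations

def energy_grad(x):
    # Gradient of E(x) = -sum_a exp(-|x - a|^2): one well per attractor.
    d = x[None, :] - ATTRACTORS
    return (2.0 * d * np.exp(-(d ** 2).sum(axis=1))[:, None]).sum(axis=0)

def settle(x0, steps=500, lr=0.1):
    """Follow -grad E until the continuous state settles into an attractor."""
    x = np.array(x0, dtype=float)
    for _ in range(steps):
        x -= lr * energy_grad(x)
    return x

# Many different continuous starting states collapse onto the same discrete
# attractor: a continuous state space, but a discrete set of outcomes.
print(settle([1.5, 0.3]))   # settles near [2, 0]
print(settle([1.8, -0.4]))  # also settles near [2, 0]
print(settle([0.4, 1.6]))   # settles near [0, 2]
```

The wells are placed far enough apart that each basin has a single stable minimum, which is what makes the settled states mutually exclusive in this sketch.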
I was tempted to draw an analogy between the discrete-continuous duality above and my experience of looking at one thing, say my speaker, from different angles, which is continuous, yet recognizing it as one discrete entity, again my speaker. But I quickly realized this analogy is naive, because the dynamical description of the brain IS a proposal in neuroscience for how the brain functions, and it doesn’t lend itself to explaining a phenomenal observation, even though a similar duality appears in both.
There are two more highlights worth mentioning. First, an attractor can be identified by a composition of symbols, thanks to its high dimensionality; this differs from a word embedding, which assigns one vector to one word. I would argue word embeddings could achieve that too, but perhaps that is a design problem. Second, everyone gets their own set of attractors, so when two people are presented with the same sensory information, their conscious experiences will differ, because their attractors take them to different places.
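The second highlight can be sketched with the same kind of toy dynamics. Everything here is my own hypothetical illustration (the names, the attractor sets, and the stimulus vector are made up): two agents with different attractor sets receive the identical sensory input, yet their dynamics settle into different discrete states.

```python
import numpy as np

# Toy sketch (my illustration, not from the paper): two "people" with
# different personal attractor sets process the same sensory input, but
# each settles into a different stable state.
def settle(x0, attractors, steps=500, lr=0.1):
    """Gradient descent on E(x) = -sum_a exp(-|x - a|^2)."""
    x = np.array(x0, dtype=float)
    for _ in range(steps):
        d = x[None, :] - attractors
        x -= lr * (2.0 * d * np.exp(-(d ** 2).sum(axis=1))[:, None]).sum(axis=0)
    return x

alice_attractors = np.array([[2.0, 0.0], [0.0, 2.0]])   # hypothetical
bob_attractors   = np.array([[1.0, 1.0], [-2.0, 0.0]])  # hypothetical

stimulus = [1.4, 0.3]  # the same sensory input for both
print(settle(stimulus, alice_attractors))  # settles near Alice's [2, 0]
print(settle(stimulus, bob_attractors))    # settles near Bob's [1, 1]
```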
Lastly, there is this bullet point: “This would make subjective experience a side-effect of the thinking machinery.” “This” probably refers to the richness and ineffability of consciousness. My interpretation is that the enormous size of the vector state space leads to the richness of subjective experience, and that richness IS consciousness, or subjective experience. I don’t know if that is what they are trying to say, and if so, whether the theory is correct. I need to read their paper to find out.
Does the huge dynamical system in our brain give rise to consciousness?
Yes, of course: that is the biological basis of consciousness. I need to ask this question more precisely.
Are they attempting to answer what gives rise to consciousness? They certainly answered what the source of the richness of subjective experience is. But is that richness the subjective experience itself?
TO_BE_ANSWERED.