There was an update on the 23rd of December.
Steve is very excited about the progress he's made over the last few months. Although the outward appearance hasn't changed, he has made huge strides with the internal thinking/decision-making code.
At the end of a very, very long explanatory essay, he summarises:
- Anticipation, intention, imagination and attention all work in the same way and their differences are merely a matter of context and use.
- Salience and desirability also work in the same way as each other.
- As do simulating imagined actions and sensory imagery.
- Motor, sensory and sensorimotor maps are all now meaningfully sensorimotor.
- Recognizing and classifying sensory states is now the same process as learning and generating motor patterns, and happens in just the one tissue layer.
- Affect not only tells us how everybody feels about a possible top-down future action, but also how maps feel about the salience of bottom-up sensory events and what intrinsic goals a map might have at any particular moment.
- Yang signals reflect back up as yin sometimes, and yin signals reflect back down as yang sometimes, making the information flow bidirectional, resonant, and semantically coherent.
- Two difficult learning schemes have resolved into a single, multi-purpose, learning-by-observation scheme with a relatively simple way of self-organizing into meaningful maps that's more fluid and less arbitrary than most SOMs.
- Everybody can contribute to generating, weighing up and vetoing decisions and there is no homunculus required in the Cartesian Theater.*
- Sequencing spontaneously produces an offline, mental simulation of the expected future behavior of the world, with the functional aim of establishing the overall cost/benefit of a plan, and if that’s not the biological root from which our own imagination and consciousness grew then I’m a Dutchman.
(*By this he means there is no "Brain CEO" making decisions. Every part of the brain contributes to, and carries out, a plan of action.)
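The "self-organizing into meaningful maps" point above invites comparison with a classic self-organizing map (SOM). The following is a generic toy SOM, not Steve's code, and every name and parameter in it is an illustrative guess: units on a ring each hold a weight vector, and observing an input nudges the best-matching unit and its neighbours toward that input, so nearby units come to represent similar inputs.

```python
import random

def distance(a, b):
    """Squared Euclidean distance between two equal-length vectors."""
    return sum((x - y) ** 2 for x, y in zip(a, b))

def observe(units, inp, rate=0.2, radius=1):
    """One learning-by-observation step: move the best-matching unit
    (and its ring neighbours) a little toward the observed input."""
    best = min(range(len(units)), key=lambda i: distance(units[i], inp))
    for offset in range(-radius, radius + 1):
        i = (best + offset) % len(units)       # ring topology wraps around
        influence = rate / (1 + abs(offset))   # neighbours move less than the winner
        units[i] = [w + influence * (x - w) for w, x in zip(units[i], inp)]
    return best

random.seed(0)
units = [[random.random(), random.random()] for _ in range(8)]
for _ in range(200):
    observe(units, [random.random(), random.random()])
```

Steve's scheme is described as "more fluid and less arbitrary" than this kind of fixed-grid SOM, so treat the sketch only as the baseline he's improving on.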
Essentially, what he's saying here is that the brain is made up of various 'maps' that can learn about sensory inputs, states of mind, memory and all sorts of things. Each map is connected to others with two pathways: yin and yang. Signals get sent up and down these pathways in such a way that the creature can imagine future scenarios, pretend to carry out certain actions and evaluate the expected results even before it carries them out for real. It can also remember things like "When I was over here, eating a strawberry, I got less hungry," so that when it starts to feel hungry again, it'll try to find its way back to that area and look for strawberries to eat.
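The strawberry example can be caricatured in a few lines. This is purely a speculative illustration of the described behaviour, not Steve's implementation, and every name in it is made up: the creature records (place, action, drive, change) episodes, and when a drive rises again it looks up which remembered episode reduced that drive the most.

```python
# Toy episodic memory: each entry is (place, action, drive, change),
# where a negative change means the drive was reduced (a good outcome).
memories = []

def record(place, action, drive, change):
    memories.append((place, action, drive, change))

def best_plan(drive):
    """Return the (place, action) remembered to reduce this drive most."""
    relevant = [m for m in memories if m[2] == drive and m[3] < 0]
    if not relevant:
        return None
    place, action, _, _ = min(relevant, key=lambda m: m[3])
    return (place, action)

record("meadow", "eat strawberry", "hunger", -0.6)
record("cave", "sleep", "tiredness", -0.8)
record("meadow", "eat grass", "hunger", -0.1)

print(best_plan("hunger"))   # -> ('meadow', 'eat strawberry')
```

In the real architecture this lookup would presumably emerge from the maps and pathways themselves rather than from an explicit table, but the cost/benefit logic is the same in spirit.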
And that's all accomplished without pre-determined scripts or animations. Creatures learn to walk, they learn to recognise objects, and they learn to weigh the importance of sensory inputs, ignoring or attending to them as appropriate. All of this is theoretically implemented in his code now, though we haven't yet been given a working demonstration (he promises there will be source code to play with soon!)