Why does AI even need to rework its own code, anyway? Wouldn't it be better if the source code of AI stayed constant, and the only things that changed over time were data files? Most currently working AI-ish thingies (neural networks, reinforcement learning, decision trees/forests, etc.) work that way, and they have some great successes, while as far as I know the "source-code-rewriting" (genetic algorithms, etc.) programs are all extremely bad at doing their job, and there are no signs of progress over there.
An exquisitely crafted piece of 'static code' is subject to the limitations of its programmer and unable to adapt beyond that programmer's presumptions. The programmer may have supplied ample dynamic storage for 'memories', so that historic experience can inform how the static code 'intelligently' deals with future situations, but it cannot go beyond the original vision. If a robot is supposed to know that green triangles are good and red squares are bad, it could be programmed with that knowledge from scratch, or given the ability to learn it from green triangles and red squares delivering a reward or forfeit on approach. Then blue circles are brought into play... Does the program have the ability to associate them with their meaning? In a simple 'program' and circumstances like this, the programmer (though not directly anticipating the colour blue, the circle shape, or whatever meaning might attach to such a conjunction) might possibly have allowed for it, but maybe not if only a binary association was expected. And a more complex scenario (such as would need a proper AI) would require far more advanced planning yet.
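A minimal sketch of what that 'static code with memories' looks like, and where it stalls (Python, with entirely made-up names; the reward-averaging rule is just an assumption for illustration):

```python
# Hypothetical 'static code' learner: the processing code never changes,
# only the association table (its 'memories') does.

KNOWN_COLOURS = {"green", "red"}       # the programmer's original vision
KNOWN_SHAPES  = {"triangle", "square"}

associations = {}  # (colour, shape) -> running reward estimate

def choose_action(colour, shape):
    # Fixed decision rule, written once by the programmer.
    if colour in KNOWN_COLOURS and shape in KNOWN_SHAPES:
        value = associations.get((colour, shape), 0.0)
        return "approach" if value >= 0 else "avoid"
    # A blue circle falls straight through: the static code has no way
    # to represent it, let alone learn what it means.
    return "ignore"

def learn(colour, shape, reward):
    if colour in KNOWN_COLOURS and shape in KNOWN_SHAPES:
        old = associations.get((colour, shape), 0.0)
        associations[(colour, shape)] = old + 0.1 * (reward - old)
    # Anything outside the anticipated vocabulary is silently dropped.
```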
Mutable code (at least a mutable secondary 'scripting' layer of behaviour, sitting above the raw data) could develop and change its own methods for handling associations, beyond the mere 'from volatile memory, by fixed processing code' level.
In reality, the line is blurred, but generally there's some form of on-the-fly 'eval' of altered code. I'd still class neural networks as 'mutable code', unless every link between every connectable state were laid out in advance and then kept or abandoned according to the 'data' of which links are active (and how strongly). That approach is slow, inefficient and costly to implement, compared with changing the code of each node towards a new response that better matches what should be learnt.
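For contrast with the genetic approaches below, this is roughly what 'changing each node towards a new response' looks like in conventional neural-network terms: a standard delta-rule weight update (whether those weights count as 'code' or 'data' is exactly the blurriness in question):

```python
# One node nudging its response towards what should be learnt.

def node_output(weights, inputs):
    return sum(w * x for w, x in zip(weights, inputs))

def learn_step(weights, inputs, target, rate=0.1):
    # Move the node's response a little towards the target.
    error = target - node_output(weights, inputs)
    return [w + rate * error * x for w, x in zip(weights, inputs)]
```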
This is not necessarily 'genetic algorithms'. That involves starting with one or more seed algorithms (crafted by the designer, or just randomly compiled), creating competition between multiple possible variations (making random changes to existing version(s) to provide a large enough cohort, as necessary), then testing the performance of each against a metric of 'suitability' (towards either the end-goal or a suitably chosen intermediary), rejecting the poorest and perhaps also promoting the best according to their relative success. Then go back to the stage of more mutations for more competition. The 'design ethos' of each algorithm is as open as the language and architecture allow, free from the biases of a programmer at the mutative-code level (see one intriguing experiment, which also hits that blurriness of code vs data).
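A minimal sketch of that seed/mutate/test/reject loop, in Python (the 'algorithm' being evolved here is just a bit-string and the suitability metric a stand-in, purely to keep the example short):

```python
import random

TARGET = [1] * 20                     # stand-in goal: an all-ones bit-string
POP_SIZE, MUTATION_RATE = 30, 0.05

def random_genome():
    return [random.randint(0, 1) for _ in TARGET]

def fitness(genome):
    # The 'suitability' metric, scored against the (stand-in) end goal.
    return sum(g == t for g, t in zip(genome, TARGET))

def mutate(genome):
    # Random changes to an existing version.
    return [1 - g if random.random() < MUTATION_RATE else g for g in genome]

population = [random_genome() for _ in range(POP_SIZE)]   # seed cohort
for generation in range(200):
    # Test each against the metric; reject the poorest, keep the best half...
    population.sort(key=fitness, reverse=True)
    survivors = population[:POP_SIZE // 2]
    # ...then go back to the mutation stage to refill the cohort.
    population = survivors + [mutate(random.choice(survivors))
                              for _ in range(POP_SIZE - len(survivors))]
    if fitness(population[0]) == len(TARGET):
        break

print("best genome after", generation + 1, "generations:", population[0])
```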
And genetic algorithms don't need to be sexually recombinative (as the FPGA version was) but can be a fully asexually reproducing and mutating tree-of-life. (It's easier to do, but less rapid at discovering wondrous new 'solutions' in the search-space of possibilities. A bit like sexual/asexual reproduction in biology, when measured by generations.)
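For comparison, the recombinative version only needs a crossover step grafted onto the sketch above (single-point splicing is just one assumed scheme among many):

```python
import random

def crossover(parent_a, parent_b):
    # Single-point splice of two parent genomes.
    cut = random.randint(1, len(parent_a) - 1)
    return parent_a[:cut] + parent_b[cut:]

# In the refill step of the earlier sketch, splice two survivors before
# mutating, rather than copying just one:
#   child = mutate(crossover(random.choice(survivors), random.choice(survivors)))
```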
An intriguing mix between AI and genetic algorithms is for the AI to run its own internal 'ecosystem' of miniature genetic algorithms, all fed by the same inputs and with all the outputs polled together. The AI responds according to the consensus (effectively random at first) and then assesses (or is told) whether that was a correct response. It keeps (or promotes) all those that chose correctly or didn't choose incorrectly, and bins (and/or demotes, perhaps removing only after a threshold number/proportion of failures) the others. A neutral output might be a possibility, although failing ever to be correct should count as heavily as actively being wrong. 'Culled' algorithms are replaced (there's a choice between mutating the failure to see if it improves, generating a randomised replacement, making a slightly changed copy of a successful one, or recombining/splicing components of two or more successful ones; each approach has its own effect on the development) and more experiences happen with the new complement of code-blocks.
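A toy sketch of that internal ecosystem, assuming (purely for brevity) that each miniature 'algorithm' is just a random stimulus-to-action lookup table and that the environment tells us the correct action after each response:

```python
import random

COLOURS = ["green", "red", "blue"]
SHAPES  = ["triangle", "square", "circle"]
ACTIONS = ["approach", "avoid", "ignore"]
POP_SIZE, FAIL_LIMIT = 50, 3

def random_unit():
    # One miniature 'algorithm': a random stimulus->action table plus a
    # tally of its failures.
    table = {(c, s): random.choice(ACTIONS) for c in COLOURS for s in SHAPES}
    return {"table": table, "fails": 0}

population = [random_unit() for _ in range(POP_SIZE)]

def respond(colour, shape):
    # Poll every unit and act on the consensus (effectively random at first).
    votes = [u["table"][(colour, shape)] for u in population]
    return max(set(votes), key=votes.count)

def feedback(colour, shape, correct_action):
    global population
    # Never being correct counts against a unit just as being wrong does.
    for u in population:
        if u["table"][(colour, shape)] != correct_action:
            u["fails"] += 1
    # Cull units past the failure threshold; here each replacement is a
    # mutated copy of a survivor (randomised or spliced replacements are
    # the other options mentioned above).
    survivors = [u for u in population if u["fails"] < FAIL_LIMIT]
    while len(survivors) < POP_SIZE:
        parent = random.choice(survivors) if survivors else random_unit()
        child = {"table": dict(parent["table"]), "fails": 0}
        key = (random.choice(COLOURS), random.choice(SHAPES))
        child["table"][key] = random.choice(ACTIONS)   # point mutation
        survivors.append(child)
    population = survivors
```

Training is then just a stream of respond()/feedback() pairs: reward green triangles, punish red squares, and leave blue circles neutral until the day they matter.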
Such a system controlling a buggy-chassis with a camera or other vision system might well develop subunits of 'thought' that give the system an 'urge' to travel toward green triangles and retreat/veer from red squares (by whatever reward/punishment scenario we devise), with a subset of working units that respond to shapes and/or colours and poll towards a consensus of action. Blue-detectors and circle-detectors might spontaneously arise (as might blue-circle detectors!) and be neither favoured nor suppressed... And then blue circles appear! And the algorithms that (correctly) poll towards flashing the buggy's lights or whistling Dixie through its speaker or whatever it is... they become part of the 'brain'.
And if green squares are now good to approach and red triangles are bad, the behaviour modifies itself: the (previously correct) square/triangle-detecting algorithms get rejected, mutated versions with reversed opinions come to the fore to support their green=good/red=bad brethren, and the relevant colour+shape combi-detectors get an overhaul through failure and reimagining.
Not only that, but you could have switched the green-triangle/red-square meanings to their exact opposites (or, always a good experiment with a 'learning' robot, reversed the motor connections/directions), and after taking bad hits to its 'ego' for faithfully following the now-'wrong' actions, it is forced to relearn its behaviour towards the new norm. Pretty much as both animal and human psychology experiments see in their respective subjects when they reward them. (Or seem to reward... see B.F. Skinner's 'superstitious' pigeons, or that "lucky shirt" you like to wear to particularly important sports events.)
Sorry, I seem to have drifted somewhat. Mainly because there's not merely one approach to AI (even 'weak' AI), and genetic algorithms (or similar 'mutative' experiments) can play a part, the whole part, or no part at all in AI.