I think that the AI being unable to adapt "beyond the presumptions of the programmer" is a good thing. It makes fixing bugs in the AI so much easier, not to mention it reduces the chances of bad, unexpected things happening.
And you no longer have a significantly useful AI; you merely have a tuppenny-ha'penny ElizaBot that has no intelligence and can only deal with input anticipated by the designer. (My own first Eliza program was typed into a BBC Micro in the early '80s... Knowing how it all works takes most of the magic out of it. Tell me more about your mother.)
And no, neural networks are very much not "mutable code". There's a very simple algorithm at their core that's literally just repeated multiplication and summation of a certain set of input numbers, with weights given by data, followed by an update of those weights based on the outputs. There could be some kind of algorithm at the tail that processes said outputs into actions, but that algorithm doesn't change either.
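For what it's worth, that fixed core really is tiny. A minimal sketch in Python (my own illustrative naming, a single node with a sigmoid squash and a perceptron-style weight nudge, gradient term omitted for brevity):

```python
import math

def forward(inputs, weights, bias):
    # The unchanging bit: multiply each input by its weight, sum, squash.
    total = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-total))   # sigmoid activation

def update(inputs, weights, bias, target, rate=0.1):
    # The 'subsequent update of these weights': nudge the numbers (the data)
    # towards whatever makes the output closer to the target.
    # The code above never changes; only the weights do.
    output = forward(inputs, weights, bias)
    error = target - output
    new_weights = [w + rate * error * x for w, x in zip(weights, inputs)]
    return new_weights, bias + rate * error
```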
You're approaching NNs from a different angle to me, it seems. To me, a NN node is defined as a logic gate (of a non-boolean nature, usually) programmed to monotonically convert various linked input values/potentials into a single value/potential sent on to zero or more other nodes. This configuration can be defined as data, but then so would the string "if (pop(A) AND pop(B) AND NOT(pop(C))) then push(D,TRUE)", even though it is obviously (pseudo)code when push comes to shove (or pop). If the container-code that oversees the NN (in hardware or simulation/emulation) is modifying the behaviour of how the (real/virtual) decisions are made (not just which values are decided, or whether the operands are immediate/direct/indirect/doubly-indirect/etc 'data', but the mix and relationships of the operators themselves), based upon some judgement of how well the tested input is converted to a desired output, then it starts to look like it deserves the affectation of 'self-modifying code' to me.
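To make that code-or-data blurriness concrete, a toy sketch (entirely my own naming, nobody's real system): the boolean gate above and a NN node both stored as plain data, executed by dumb interpreter functions that never change. Which half you call 'code' depends on where you stand, and once a training loop starts rewriting the node's weights according to how well inputs map to desired outputs, 'self-modifying' doesn't seem an unreasonable label for the combination.

```python
# Two 'configurations', both plain data:
boolean_gate = {"and": ["A", "B"], "not": ["C"], "then": "D"}                  # the rule above
neuron_node  = {"weights": {"A": 0.7, "B": 0.9, "C": -1.2}, "threshold": 0.5}

def run_boolean(gate, signals):
    # Interpret the rule: push D only if every ANDed input is set and no NOTed input is.
    if all(signals[s] for s in gate["and"]) and not any(signals[s] for s in gate["not"]):
        return {gate["then"]: True}
    return {}

def run_neuron(node, signals):
    # Interpret the node: weighted sum of inputs against a threshold.
    total = sum(w * signals[s] for s, w in node["weights"].items())
    return {"D": total > node["threshold"]}

signals = {"A": True, "B": True, "C": False}
print(run_boolean(boolean_gate, signals))   # {'D': True}
print(run_neuron(neuron_node, signals))     # {'D': True}  (0.7 + 0.9 = 1.6 > 0.5)
```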
Or, as a corollary, the Dwarf Fortress executable is obviously merely Data, as was its source 'code', as was the compiler 'executable' or its source. As is command.com/whatever. As is any prior bootloader. As is the entire contents of the BIOS/UEFI. As are the microcode 'instructions' governing the operation of the chip. The line can have a fuzzy and arbitrary location, depending on the phase of the moon and (possibly) the context of the examination.
AFAIK all genetic algorithm stuff hasn't gone beyond the labs.
Not sure if you mean CS labs, but have a look at something like
this..?
And, while we're at it, human intelligence doesn't seem to work the way genetic algorithms do, which further raises the question of how applicable this "genetic" stuff is to something it wasn't actually designed for, and which has, in fact, shown some extremely poor performance in nature compared to neural networks:
You (or someone else) originally equated genetic algorithms to AI. Like NAND gates, it might be a useful massed-component of a full AI system, but
a genetic algorithm wouldn't really be expected to be the
entire AI.
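For the record, a genetic algorithm itself is nothing exotic: a population of candidate solutions, a fitness score, and the better candidates mutated (and usually crossed over) into the next generation. A toy sketch (my own illustrative Python, mutation only, no crossover for brevity) of a GA as a mere component, here tuning the weights of the one-node 'network' from above to learn OR, rather than pretending to be an entire AI:

```python
import random

CASES = [([0, 0], 0), ([0, 1], 1), ([1, 0], 1), ([1, 1], 1)]   # target behaviour: OR

def output(weights, inputs):
    # Same fixed multiply-and-sum core as before, with a hard threshold.
    return 1 if sum(w * x for w, x in zip(weights, inputs)) > 0.5 else 0

def fitness(weights):
    # Higher is better: how many cases this candidate gets right.
    return sum(output(weights, inp) == target for inp, target in CASES)

def mutate(weights):
    return [w + random.gauss(0, 0.3) for w in weights]

population = [[random.uniform(-1, 1), random.uniform(-1, 1)] for _ in range(20)]
for _ in range(50):
    population.sort(key=fitness, reverse=True)
    parents = population[:5]                                                      # keep the best few...
    population = parents + [mutate(random.choice(parents)) for _ in range(15)]    # ...breed the rest from them

best = max(population, key=fitness)
print(best, fitness(best), "/ 4")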
Genetic evolution took billions of years to evolve humans; humans' neural networks took only 10,000 years after learning agriculture to conquer Earth.
Anthropocentric, much? Neural networks exist in leeches, and even lowlier creatures. It's not even that we alone have developed intelligence, because octopuses, meerkats, crows, cuttlefish and many other creatures (including our close ape cousins, of course) have intelligence of various kinds (social, tool-using, communicative...), and for some of these it may merely be the lack of suitable physiology (hands for tool-use, a larynx for communication, etc), evolving in parallel with the mind, that stops them developing their extelligence to the human level.
And our neural network (going by brain mass) was developing long before agriculture, during the particular subset of those billions of years of (unguided!) evolution that it took Homo to reach the Sapiens (and Neanderthalensis) stage. It's not even as if Google's NNs, etc., are anything like a mammalian brain in structure or paradigm, even 'virtually'.