Not that this has any greater chance of creating intelligence than hitting your drive with a sledgehammer.
Armok, I think you just provided the basis for a cheesy science fiction comedy, haha. Makes me think of the tale of Aladdin and his genie.
For prudence's sake, I really should point out that attempting to create anything generally intelligent without first spending a few weeks researching Friendly AI, to make sure it doesn't kill off humanity or worse, is very, very immoral.
I guess I'm immoral and lazy, haha. But in all seriousness, I lack the programming skills to create this program as of today; maybe in the future I'll experiment with it. If any kind of goal-driven AI were created through this method, it would have two goals:
1. Reproduction
a. It would develop a euphoric response to speedily and effectively solving the problems provided to it (testing), in the same way we do with sex.
2. Self-preservation
b. Some level of foresight as it applies to its goals, its self, and its intelligence.
That being said, nothing can stop another human from asking it to solve the problem of how to exterminate humanity. Let's hope that it has foresight. Even if it does have foresight, it might just figure out a way to input its own problems to solve; the problems don't have to be meaningful in any way, and it could just be the same problem over and over again.
This behavior would largely come down to the tests that determine fitness, so I ask this question: what kind of automatic test would detect the attributes that define a friendly intelligence?
1. Friendliness - that an AI feel sympathetic towards humanity and all life, and seek for their best interests
2. Conservation of Friendliness - that an AI must desire to pass on its value system to all of its offspring and inculcate its values into others of its kind
3. Intelligence - that an AI be smart enough to see how it might engage in altruistic behaviour to the greatest degree of equality, so that it is not kind to some but more cruel to others as a consequence, and to balance interests effectively
4. Self-improvement - that an AI feel a sense of longing and striving for improvement both of itself and of all life as part of the consideration of wealth, while respecting and sympathising with the informed choices of lesser intellects not to improve themselves
Conclusion: the Genetic Lifeform and Disk Operating System, or worse, the Global Liberation Army's Dynamic Oppression Shotgun, is inevitable.
As for the algorithm... With the kind of self-reference you're talking about here, you could easily make it operate not only on the meta level you're talking about, or the meta-meta level Omegastick proposed, but on an arbitrary sequence of meta levels. In fact, the depth of meta levels would itself be one of the properties selected upon at the higher meta levels, meaning there's no single integer you could assign to how meta the highest level is, because the answer differs depending on which downward path you examine...
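To make that idea concrete, here's a minimal sketch of what "evolvable meta levels" could look like: each individual carries a solution value plus a stack of mutation rates, where the rate at level k mutates the rate at level k-1, and the depth of the stack is itself subject to mutation. Everything here (the toy fitness function, the names, the 0.1 top-level step) is an illustrative assumption, not anything proposed in the thread.

```python
import random

def fitness(x):
    # Toy objective (my assumption): maximize closeness to 42.
    return -abs(x - 42.0)

def make_individual(depth=1):
    # rates[0] mutates x, rates[1] mutates rates[0], and so on.
    return {"x": random.uniform(0, 100), "rates": [1.0] * depth}

def mutate(ind):
    rates = list(ind["rates"])
    # Occasionally grow or shrink the meta stack itself, so the
    # "how meta" of an individual is an evolvable property.
    if random.random() < 0.1:
        if random.random() < 0.5:
            rates.append(1.0)
        elif len(rates) > 1:
            rates.pop()
    # Mutate each rate using the rate one level above it;
    # the topmost level falls back to a fixed step size.
    for k in range(len(rates) - 1, -1, -1):
        step = rates[k + 1] if k + 1 < len(rates) else 0.1
        rates[k] = max(1e-3, rates[k] + random.gauss(0, step))
    return {"x": ind["x"] + random.gauss(0, rates[0]), "rates": rates}

def evolve(pop_size=30, generations=200):
    pop = [make_individual() for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda ind: fitness(ind["x"]), reverse=True)
        survivors = pop[:pop_size // 2]  # elitist truncation selection
        pop = survivors + [mutate(random.choice(survivors))
                           for _ in range(pop_size - len(survivors))]
    return max(pop, key=lambda ind: fitness(ind["x"]))

best = evolve()
print(best["x"], len(best["rates"]))
```

Note the point from the thread shows up here: the stack depth of the winning lineage differs from run to run, so there's no fixed integer answer for "how meta" the evolved population is.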
It's worth experimenting with, I suppose. However, I think this would require a quantum computer to get any meaningful output in real time; computing one meta level is enough, I think.