Well, a core principle behind the idea of seeding across many universes was to spread our eggs among so many baskets it'd become statistically unlikely for all of them to fail, so switching the basic premise up a little might help.
One (kinda extreme) step further: I thought that once we have a few good colonies going in different universes, we could send one more autonomous colony (or just a few, but not a lot, carrying people in one of the three ways) to a habitable universe somewhere, then cut off all communications with it, erase the colony's existence from our archives, and relabel that universe as a null one.
Basically, it ensures that even if we ever encounter an entity which can instantly control our minds and which then starts using our own tech and EUEs to spread to other dimensions and infect every human colony in existence, there would still be a pocket of humanity out there that stays safe (dunno how we'd deal with the fact that some humans would know about the plan, though something like a drug that temporarily prevents new memories from forming might at least ensure said invading entity couldn't figure out which of the many null universes is the one hiding another human colony).
All this does sound good. A minor addition would be to use a different variant of the EUE--IIRC, the Doc said that the EUE could be altered to access a different set of universes. If we had some scientists make variant EUEs, sent the super-secret colonies to universes only accessible by those variants, and then killed the scientists and destroyed the variant EUEs, it would be essentially impossible to ever deliberately hunt down those other human colonies, no matter how determined and powerful an intelligence was.
Even better, if we programmed those colonies to replicate the trick, we'd essentially be forkbombing the multiverse with the human virus. Eventually, humanity would be almost ubiquitous throughout all possible multiverses. We'd make humans the only constant in a multiverse of infinities.
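Just to put some toy numbers on how fast that snowballs, here's a quick sketch (pure illustration; the function and the seed counts are made up by me, not anything the Doc actually specced):

```python
# Toy model of the "forkbomb" seeding scheme: each hidden colony eventually
# builds its own variant EUE, seeds a couple of new hidden colonies in
# universes only that variant can reach, then destroys the variant.
def hidden_colonies(generations: int, seeds_per_colony: int = 2) -> int:
    """Count total hidden colonies after some number of seeding generations."""
    total = 1      # the first super-secret colony we send out
    frontier = 1   # colonies founded in the latest generation
    for _ in range(generations):
        frontier *= seeds_per_colony  # every frontier colony repeats the trick
        total += frontier
    return total

# Even with very conservative made-up numbers, growth is exponential:
# 10 generations at 2 seeds each already gives 2047 untraceable colonies.
print(hidden_colonies(10))  # 2047
```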
...Anyway, unless you have something more to add, *syv stamp of support*
I'm not even sure why we're doing this as a Tinker thread; this is mostly the same as our discussions of old on various technical topics, except I get a nice little stamp whenever we reach a conclusion.
Well, you started posting these questions in Tinker before I took over, and I just haven't said anything. Discussing this in Heph OOC instead wouldn't be a bad idea, and could potentially speed up my replies to this thread. Also, it would open up the discussion to more people than just us two, which is a good thing.
The overarching goal of the AI is (as stated before) to try to ensure humanity survives as a species. There'd be a bunch of side-requirements, such as 'survival in a way that ensures humans themselves are still independent agents capable of acting and exhibiting a wide range of activities', yadayada and so forth, but that's the idea we start from.
We can *probably* just use whatever Steve defines as human, so my opinion is probably irrelevant, but I could see this resulting in some very nonhuman humans. For instance, do limbs really need bones if we can make tentacles with greater strength and flexibility? Is skin important if we can biomod humans to have hexsand surfaces which allow them to eat light and energy, despite making them look like holes in reality? Must humans have organic brains, if we can make streamlined computer simulations of humans which run at much higher speed with near-perfect simulations of emotions and such? If those humans edit themselves so that they don't experience emotions such as anger or sadness, instead remaining constantly euphoric, should the AI consider them humans? If it does, and if it values happiness, could it decide that it should replace all humans with computer simulations that are happy?
Alternatively, if you're very strict and precise about the definition of a human, does the AI start to enforce genetic purity, or enslave mutant children who are just *slightly* outside the definition of a human, despite not appearing too different to our own eyes?
Personally, I doubt any of these will ever be problems, but I'm trying to list everything I could possibly see going wrong.
To achieve that, it builds colonies that can accommodate human life as we know it, constructing infrastructure to provide the needed resources (food, air, etc.) as well as the habitation units themselves. These units should allow humans to express their various natural behaviors and fill their natural needs, such as getting physical exercise, having leisure opportunities, and having sleeping units adapted to whatever social structure the current colonists use, but the AI should also try to make it so they all have a job and can contribute to their society in a productive way (which is a good source of self-worth, if nothing else). All of this is protected from the elements and other harmful circumstances/influences, with safety being a primary concern, but weighted so that some risk is allowed (to prevent them just getting hooked to VR machines for eternity when the risk is minimal) if it leads to better quality of life and freedom of movement.
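If it helps, the risk weighting I mean is basically this kind of rule (just a sketch; all the weights, thresholds, and names are placeholders I made up, not actual spec):

```python
# Rough sketch of the safety-vs-quality-of-life trade-off: an activity is
# allowed when its benefit to quality of life and freedom of movement
# outweighs its (heavily weighted) risk. All numbers are invented.
RISK_WEIGHT = 3.0     # safety is a primary concern, so risk counts extra
QOL_WEIGHT = 1.0
FREEDOM_WEIGHT = 1.0

def allow_activity(risk: float, qol_gain: float, freedom_gain: float) -> bool:
    """Allow some risk if it buys enough quality of life / freedom."""
    benefit = QOL_WEIGHT * qol_gain + FREEDOM_WEIGHT * freedom_gain
    return benefit > RISK_WEIGHT * risk

# Hiking outside the habitat dome: small risk, decent payoff -> allowed.
print(allow_activity(risk=0.1, qol_gain=0.5, freedom_gain=0.4))  # True
# Something genuinely dangerous with little payoff -> not allowed.
print(allow_activity(risk=0.8, qol_gain=0.3, freedom_gain=0.1))  # False
```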
Okay. All this seems logical and good.
I'm unsure if it's what PW was asking for--I think he wanted more aesthetic descriptions? I remember him asking whether you wanted windows everywhere or something.
One possible issue is the definition of "natural". One could argue that it is very natural for humans to kill things--would the AI create animals for them to kill? Or let them kill each other, because it's "natural"? Another question would be about the "natural" behaviors of deviant individuals: if a sociopath naturally wanted to eat the flesh of his fellow man, would the AI allow it? If not, what of an artist who had an unusual method of self-expression--could they be defined as having an unnatural behavior, and be prevented from expressing themself in some way not approved by the AI?
Then we've got the current crazies' definition of natural: is it natural to enjoy video games and movies?
What of genemodded humans? Can the AI genemod humans to naturally want to be unconditionally kind to others, never harming another human's enjoyment of life? How about making them naturally want to sleep twenty hours a day and repetitively play Jenga for the remaining four, all the while being blissfully happy? You said some risk would be allowed, but what if the AI genemodded the humans to feel no desire for risk, or to have the desires but have them be met by the aforementioned Jenga (THEY MIGHT COLLAPSE OH MY GOD!)?
How about the risk? Does it have to be included, or is it only an "if the humans ask for X, allow them to do so if it carries low risk" rule? If it's a guarantee, can the AI cheat by providing fake risk, or can it provide risk by mixing a small amount of toxic chemicals into the air supply every now and then? If it's not a guarantee, is it okay if the AI just locks everyone in VR anyway, because it's made VR as good as real life?
Once the initial colony is set up and is self-sufficient, with enough reserves and backups to deal with crises, it starts to expand to new sites in the same universe to set up secondary colonies or to find interesting resources to exploit.
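Spelled out as a rule, the expansion trigger is just a gate on self-sufficiency plus reserves--something like this sketch (the threshold values are placeholders for illustration, not real spec numbers):

```python
# Sketch of the expansion rule: no secondary colonies until the primary one
# is self-sufficient AND holds enough reserves/backups to ride out a crisis.
RESERVE_MONTHS_REQUIRED = 24   # placeholder threshold
BACKUP_SITES_REQUIRED = 2      # placeholder threshold

def ready_to_expand(self_sufficient: bool, reserve_months: float,
                    backup_sites: int) -> bool:
    return (self_sufficient
            and reserve_months >= RESERVE_MONTHS_REQUIRED
            and backup_sites >= BACKUP_SITES_REQUIRED)

# Only once this returns True does the AI start scouting new sites in the
# same universe for secondary colonies or resource exploitation.
print(ready_to_expand(True, 30, 3))   # True  -> start expanding
print(ready_to_expand(True, 12, 3))   # False -> keep stockpiling
```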
Okay. Try as I might, I can't come up with a way to make "stockpile resources, then set up secondary colonies once you've got enough" logically result in "MURDER EVERYONE".