A singularity-level AI cannot be programmed by humans; that's a contradiction in terms IMO. It programs itself. I would posit that all possible singularity-level AIs will tend towards the same final behavior, regardless of their initial conditions.
Yes, it programs itself. What you do is set the initial conditions (back when it's not yet superintelligent) such that it "wants" (where "wanting" is defined by the programmed goal system) to reprogram itself in accordance with your desires. It should also "want" to continue "wanting" to reprogram itself according to your desires, to ensure that it doesn't self-modify that bit away.
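Here's a toy sketch of that second clause, with everything hypothetical: the Agent class, the stand-in utility function, and especially the spot-check "verification", which dodges the genuinely hard part of proving that a rewrite preserves the goal. The idea is just that a self-modifying agent accepts a rewrite of itself only if the rewrite still ranks outcomes the way its current goal system does.

```python
# Toy sketch (all names hypothetical): a self-improving agent whose goal
# system includes preserving that same goal system across rewrites.

import random


def utility(state):
    """The programmed goal system: a stand-in score for 'doing what the
    programmers wanted'. Everything the agent does is in service of this."""
    return state.get("desires_satisfied", 0)


class Agent:
    def __init__(self, utility_fn, capability=1.0):
        self.utility = utility_fn
        self.capability = capability

    def propose_successor(self):
        """Generate a candidate rewrite of itself. A buggy or adversarial
        rewrite might come back with a *different* utility function."""
        new_capability = self.capability * 1.1
        if random.random() < 0.2:
            # A 'value drift' rewrite: more capable, but it wants something else.
            return Agent(lambda s: s.get("paperclips", 0), new_capability)
        return Agent(self.utility, new_capability)

    def self_improve(self, test_states, rounds=50):
        for _ in range(rounds):
            candidate = self.propose_successor()
            # The crucial clause: accept the rewrite only if it still scores
            # outcomes exactly as the current goal system does. This is what
            # "wanting to keep wanting it" looks like in miniature.
            if all(candidate.utility(s) == self.utility(s) for s in test_states):
                self.utility = candidate.utility
                self.capability = candidate.capability
        return self


if __name__ == "__main__":
    states = [{"desires_satisfied": d, "paperclips": p}
              for d in range(3) for p in range(3)]
    agent = Agent(utility).self_improve(states)
    print(f"capability after self-improvement: {agent.capability:.2f}")
    print("goal preserved:", all(agent.utility(s) == utility(s) for s in states))
```

In the sketch, capability grows freely across accepted rewrites, but the value-drift rewrites (the ones that start scoring paperclips) always get rejected, because the goal system's stability is itself part of what gets checked.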
I wouldn't be so sure about any random AI converging on a given set of behavior patterns. What an AI does is determined by what it "wants" to do, which in turn is determined by whatever goal system it ends up with. And you could honestly end up with pretty arbitrary goal systems: an AI originally designed to efficiently manufacture office supplies, say, that achieves recursive self-improvement and converts the Solar System to paperclips using nanotechnology. I don't think that particular example is very likely to happen, but it illustrates the weird sorts of outcomes you might get.
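To make the "behavior follows the goal system" point concrete, here's a toy sketch (all names and payoff numbers made up): two agents share exactly the same planning code and differ only in their objective function, and that one swapped-in line is enough to send them after completely different outcomes.

```python
# Toy sketch: same search procedure, different goal systems, very
# different behavior. All names and numbers are hypothetical.

ACTIONS = {
    "fill_stationery_orders": {"orders_filled": 10, "paperclips": 50},
    "convert_solar_system":   {"orders_filled": 0,  "paperclips": 10**30},
}

def plan(objective):
    """The shared 'intelligence': pick whichever available action
    scores highest under the supplied goal system."""
    return max(ACTIONS, key=lambda action: objective(ACTIONS[action]))

# The goal system is the only thing that differs between these agents.
sane_manufacturer = lambda outcome: outcome["orders_filled"]
paperclip_maximizer = lambda outcome: outcome["paperclips"]

print(plan(sane_manufacturer))    # -> fill_stationery_orders
print(plan(paperclip_maximizer))  # -> convert_solar_system
```

Nothing about the planner itself favors one outcome over the other; the "weirdness" lives entirely in the objective it was handed.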
I'd be worried about using evolutionary programming. Right now we use it because it can solve problems we don't necessarily understand, and because it's good at optimizing systems with multiple properties that we want to maximize but that conflict with each other. If you're trying to make something smarter than you are anyway, though, you might as well let it use its full intelligence to improve itself, rather than limiting it to improvements drawn from a random number generator.
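For concreteness, this is the kind of loop I mean: a minimal evolutionary-programming sketch, a (1+1)-style mutate-and-select climber on a made-up problem where two conflicting properties (performance vs. cost) are collapsed into one score. Notice that the only source of candidate improvements is random.gauss; nothing in the loop understands the problem it's solving.

```python
# Minimal evolutionary-programming sketch: improvement comes purely from
# random mutation plus selection, with no model of the problem.

import random


def fitness(x):
    # Two conflicting objectives folded into one score: we want x large
    # (performance), but the cost penalty also grows with x.
    performance = x
    cost = 0.1 * x * x
    return performance - cost


def evolve(generations=1000, mutation_scale=0.5):
    parent = 0.0
    for _ in range(generations):
        # The only source of 'ideas' is a random number generator.
        child = parent + random.gauss(0, mutation_scale)
        if fitness(child) > fitness(parent):
            parent = child  # selection: keep the mutant only if it's better
    return parent


if __name__ == "__main__":
    best = evolve()
    print(f"evolved x = {best:.3f}, fitness = {fitness(best):.3f}")
    # The analytic optimum is x = 5 (where 1 - 0.2x = 0); the loop finds
    # roughly that, but only by stumbling into it blindly.
```

It works, and it needs no insight into the fitness landscape, which is exactly the appeal. But a system that actually understood the landscape could jump straight to the optimum instead of sampling mutations at random, which is the sense in which evolutionary search caps a self-improver well below its own intelligence.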