Well they could take that AI mentioned in another thread that designs other AIs according to criteria. Basically random AIs would be made for each town, then bred together. Estimates would be taken for things like death rate, crime, road fatalities and so on, and rogue AIs would be culled and replaced with new mutants. e.g. if your local AI decided that in your house, specifically, the temperature should be set to 80 kelvin via pumped liquid nitrogen at 3am, we'd lower that AI's fitness score, and it wouldn't be replicated unless it also scored relatively highly in other areas.
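To make the breed-and-cull idea concrete, here's a minimal sketch of that loop. Everything here is made up for illustration: each "AI" is just a parameter vector, and the fitness function fakes the outcome estimates (in reality you'd plug in measured death rates, crime, road fatalities, etc.).

```python
import random

def fitness(ai):
    # Stand-in for real outcome estimates (death rate, crime, ...):
    # here, the closer each parameter is to 0.5, the better.
    return -sum(abs(p - 0.5) for p in ai)

def mutate(ai, rate=0.1):
    # Small random tweaks -- the "new mutants".
    return [p + random.gauss(0, rate) for p in ai]

def breed(a, b):
    # Each parameter inherited from one parent at random.
    return [random.choice(pair) for pair in zip(a, b)]

def evolve(pop_size=20, params=5, generations=50):
    # Start with random AIs, one imagined per town.
    population = [[random.random() for _ in range(params)]
                  for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=fitness, reverse=True)
        survivors = population[:pop_size // 2]   # cull the rogues
        children = [mutate(breed(random.choice(survivors),
                                 random.choice(survivors)))
                    for _ in range(pop_size - len(survivors))]
        population = survivors + children        # replace with mutants
    return max(population, key=fitness)

best = evolve()
```

The liquid-nitrogen AI would score terribly on `fitness` and simply never make it into `survivors`.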
But it's important not to completely cull the psycho or aberrant AIs, if ultimate wellbeing is your goal, because you can't know whether the current "best we've got" is the true peak or merely a local maximum in the landscape of possible choices vs desired outcomes. If we limit the AI so that it can only make "safe" choices that people are happy with, that ultimately dooms the project: no real changes will ever be made.
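One cheap way to avoid culling everything weird, sketched below with made-up names and proportions: reserve a couple of survivor slots for randomly chosen low scorers ("wildcards") instead of pure elitism, so the gene pool keeps some aberrant material around for climbing out of local maxima.

```python
import random

def select(population, fitness, keep=10, wildcard=2):
    # Rank best-first, keep the elites...
    ranked = sorted(population, key=fitness, reverse=True)
    elites = ranked[:keep - wildcard]
    # ...but spare a few low scorers at random, rather than
    # culling every aberrant candidate.
    wildcards = random.sample(ranked[keep - wildcard:], wildcard)
    return elites + wildcards
```

The wildcards will usually lose again next generation, but occasionally one of them is sitting on the slope of a better peak.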
One important thing to guard against in the long term is premature convergence. Basically that's when the system is geared to make small changes that each make life a bit better, but it ends up putting evolution in a cul-de-sac. An AI that optimizes a community is in real danger of getting trapped like that: e.g. you end up in a shitty Stepford Wives type world where any incremental change from that point makes things measurably worse. Sometimes you have to plough through worse conditions because you know, or hope, there's a better reality on the other side. Typical "optimize this" type AIs tend to get stuck in ruts, because change hurts people, and if the AI is designed to always min/max happiness and sadness, it's going to settle into some rut or other.
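The "plough through worse conditions" idea is basically simulated annealing: sometimes accept a move that makes things worse, with a probability that shrinks over time. A toy sketch (the landscape and parameters are invented): the search starts on a mediocre peak, and only escapes to the better one because it's allowed to take temporary hits.

```python
import math
import random

def landscape(x):
    # Invented wellbeing landscape: a mediocre peak near x=2
    # (the Stepford Wives rut) and a better one near x=8.
    return math.exp(-(x - 2) ** 2) + 2 * math.exp(-(x - 8) ** 2)

def anneal(start=2.0, steps=5000, temp=1.0, cooling=0.999):
    x = start
    for _ in range(steps):
        candidate = x + random.gauss(0, 0.5)
        delta = landscape(candidate) - landscape(x)
        # Always accept improvements; accept regressions sometimes,
        # less and less often as the temperature cools.
        if delta > 0 or random.random() < math.exp(delta / temp):
            x = candidate
        temp *= cooling
    return x
```

A strictly greedy optimizer started at x=2 never leaves that first peak, because every single step towards x=8 initially makes the score worse.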