I think it's less the "why not" aspects which interest me, and more the ramifications of such being possible. It could shatter and reform entire ideologies. It would make us gods - creators of sentience almost ex nihilo.
If such a thing were possible, and we could essentially create our own brand of "life," it would remove a lot of the supposed need for god/s.
Of course, those are just the philosophical advantages. The practical applications could be wide-ranging as well - a sentient robot is much more flexible than one bound to rigid structures and rules, and it could act on its own creativity. Advanced AI with creativity could produce new blueprints and revolutionise industry, architecture, science, and art.
It could also glitch out and do something unintended, which, depending on what the AI is used for, could have huge ramifications. And there's the matter of creating a general intelligence comparable to human intelligence and having it coexist alongside human intelligence. Even with just one kind of general intelligence (humans), there has been a constant history of conflict among the individuals and groups that possess it. It seems unrealistic to me not to expect conflict to arise between humans and some variant of a general AI.
It also seems that with every major technological development, human society becomes extremely dependent on that technology. General AI would probably be a development on that scale, so it's likely that society would become dependent on it too.
But unlike previous technologies, which are inanimate tools, a general AI would be able to "think" like a human, take rational courses of action the way most humans do, and, assuming it has the full range of cognitive abilities of a human mind, make its own decisions and set its own priorities.
So humans become dependent on a completely separate intelligent entity that can do everything a person can, as well as or better than a person can. Unless people build in the restraints needed to ensure this entity is willing to cooperate with the totally dependent humans, there's nothing stopping it from simply not doing the things people expect of it and rely on it for.
But then how do we ensure that the AI doesn't one day decide to stop working with people? The only foolproof method I can see against a general intelligence as capable as a human is to never let it reach that level of intelligence in the first place. Don't let it be general. Restrict it to what it needs to know to work well.