Probably? Unless we create an AI that doesn't want to be turned off or directly modified or whatever, I don't see any moral issue with doing so, no matter how smart it is.
Even if the AI is as smart as a human (or likely far, far smarter), unless we make it human-like, there's no reason afaik (I don't actually know anything about AI, so maybe I'm wrong here) that it would actually be human-like. Sure, if we make a full virtual person with all the various bits of desire and such that make up a human, then there's going to be a moral issue with killing them off, but there doesn't seem to be a reason to do that.
It doesn't have to be human-like to be sentient (well, except of course it would be, since humans would make it), and it's hard to actually measure when sentience starts. How do you objectively check whether something is sentient? After all, aren't living organisms just very advanced biological computers? If it isn't okay to kill animals for the purpose of testing, is it okay to wipe out AIs? And what about the moral issue of specifically restricting an AI's ability to feel certain emotions? Would that basically be making a perfect slave? Do androids dream of electric sheep?
I think you missed my point a bit: just because something is sentient doesn't necessarily make it not okay to kill it. Furthermore, I do believe it's okay in some cases to kill animals for testing purposes.
The slave thing is a pretty legit question. But... we're already going to be making these to be slaves. Assuming you're okay with making AI at all (which is a question I'm not necessarily going to answer, in this post at least), then not only do I think it's morally okay to make them more fitted for slavery, but in fact I think it's a bit of a morally superior option. I mean... what, would you rather make a slave that's not okay with being a slave?
My main reasoning is this: morality cannot be hard-coded. Morality cannot exist without a foundation in emotion, because the concept of morality is, strictly speaking, an emotional one. You cannot create a superintelligent AI with a hard-coded morality and have any expectation that it will conform to the rules you've set. Intentionally misinterpreting rules to bend them to your will is so easy for even a generally intelligent person that a superintelligent AI would have no difficulty ignoring whatever list of "don't do's" you try to forcibly instill.
As for "turning off emotions", it doesn't work that way. Emotions aren't defined in concrete terms; they're an association of sensory information with an experience and a response. There's no inherent "sadness" or "happiness"; those are just names for something our brains constructed. The only emotions in the human brain that have any actual inherent nature are fear and anger, which are biological constructs of our evolution. In simpler terms, our emotions are the very literal "data mining" that our brains do on a daily basis.
:/ Yeah. But why would an AI want to "bend its rules" if it didn't have emotions? Honestly, this whole conversation sounds pretty sci-fi to me, so it's hard for me to make definite statements, but it sounds like emotions, or rather desires, that are unrelated to what you want the AI to do are far more likely to bring about unintended consequences.
And yeah, I see what you mean about emotions not being... er, actual things, just labels for us doing what we want or whatever. That doesn't actually change anything I've said. Sure, it'll make the AI "happy" to do what it's programmed to do. Just don't program it to want to live.
Edit: Although to make it clear here: if you're right (and I doubt either of us, or possibly anyone at all at this point, is actually qualified enough to make that call) and AIs saddled with a bunch of desires and thoughts that have nothing to do with their purpose turn out to be more efficient and loyal than ones that don't have such processes (which sounds silly to me, but, see point one, that doesn't mean it's untrue), AND we decide it's worth the moral issues involved to make them that way for the extra efficiency they give us, then yes, those AI would be a moral issue to shut down.