It only becomes person-like (and especially only becomes like a *normal* person) if the creators are foolish enough to design it that way (which admittedly seems likely, even though it would only cause problems, both practical and ethical). It needn't even have the capacity for strong or reactive emotions.
The best AIs we can design right now work by simulating an organic brain, and they use the "trained as a child" method.
Think of it like this: you're a futuristic AI engineer. Your boss comes in and says "hey man, we need to know this thing doesn't give bad responses." Your boss's standard for a bad answer? Anything creepy, unethical, or espousing harmful behavior. Basically, anything that would make customers regret their purchase. So, going by the Bohandas method, you start to make a list of all the things that fit into that category. How long do you think that list is going to be? Ridiculously long, is the answer.

And the broader your rules are, the more acceptable answers they'll prune out. For example, "no violent responses" has to be narrowed down with "against living things." Something like "no racist answers" would involve a huge number of sub-rules, because what exactly that entails would be very difficult to explain to a being that isn't human. What if you make a list of racial slurs, but a new one gets invented after the AI is created, and it hears it and starts repeating it? And then you still have the problem of "my rules tell the AI what it CAN'T do, so what does it do if the answer it decides on is prohibited?"
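To make that brittleness concrete, here's a minimal Python sketch of the hard-rule approach. Everything in it is a hypothetical stand-in (`PROHIBITED_TERMS`, `generate_response`); it's just meant to show both failure modes at once:

```python
# A minimal sketch of the hard-rule approach. PROHIBITED_TERMS and
# generate_response() are hypothetical stand-ins, not a real API.
PROHIBITED_TERMS = {"slur_a", "slur_b"}  # the "ridiculously long" list

def generate_response(prompt: str) -> str:
    # Stand-in for whatever answer the AI actually decides on.
    return "some candidate answer"

def answer(prompt: str) -> str:
    candidate = generate_response(prompt)
    if any(term in candidate.lower() for term in PROHIBITED_TERMS):
        # The rules only say what the AI CAN'T do. If its chosen
        # answer is prohibited, what now? Refuse? Regenerate? Loop?
        return "[response blocked]"
    return candidate

# A slur coined after deployment isn't in PROHIBITED_TERMS, so it
# sails straight through: the list can't generalize to new inputs.
```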
Or you could use a punishment/reward system with guided learning, telling it whenever it gives a bad answer, then watch as it forms its own connections about what proper behavior is.
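As a rough illustration of that alternative, here's a toy reward-based learner in Python. It's a deliberately simplified sketch (the behavior strings, the exploration rate, and the scoring scheme are all invented for the example), not any particular production technique:

```python
import random
from collections import defaultdict

# Toy reward-based trainer: behaviors start neutral, and feedback
# (+1 for a good answer, -1 for a bad one) nudges their scores.
scores = defaultdict(float)

def choose(behaviors):
    # Mostly exploit what has scored well so far; sometimes explore.
    if random.random() < 0.1:
        return random.choice(behaviors)
    return max(behaviors, key=lambda b: scores[b])

def give_feedback(behavior, reward, lr=0.1):
    # No rule list anywhere: the score IS the learned notion of
    # "proper" behavior, and it carries over to future choices.
    scores[behavior] += lr * (reward - scores[behavior])

behaviors = ["polite answer", "creepy answer", "helpful answer"]
for _ in range(1000):
    b = choose(behaviors)
    give_feedback(b, +1.0 if b != "creepy answer" else -1.0)
```

After enough rounds, the creepy behavior scores low and simply stops getting picked; nobody ever had to enumerate what "creepy" means.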
This may seem separate from the ethical concerns others are raising, but it's not, exactly. If you teach the AI, performing its "function" and acting "ethically" are one and the same; it's a matter of "proper" versus "improper" behavior. Hard rules don't do that; instead you get an AI that might decide something is the best course of action and then be told it isn't allowed to do it. It's the difference between telling a child they shouldn't steal and telling them they'll go to jail if they steal, except more coercive than that.
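To put that contrast in code terms (a deliberately toy sketch; the function names and the utility/value inputs are hypothetical):

```python
def pick_with_hard_rules(options, utility, prohibited):
    # External prohibition: the agent computes its best option, then a
    # rule strikes it out after the fact. It never learns WHY, and
    # we're left deciding what it should do instead (hence None).
    best = max(options, key=utility)
    return None if best in prohibited else best

def pick_when_trained(options, learned_value):
    # Internalized norms: "improper" options simply score low, so the
    # agent's function and its ethics are the same ranking.
    return max(options, key=learned_value)
```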