This boils down to the difference between people talking about realistic strong AI and those talking about the ethical implications of "well like, what if we PERFECTLY REPLICATED A HUMAN, BUT AS, LIKE, A MACHINE, BUT WE LIKE MADE IT ACT LIKE IT WAS ORGANIC FOR SHIGGLES MAN???". Even the toned-down version, a computer simulating the physical activity of a human brain, is rather silly. Such a thing would have no use outside of a lab. There is no practical point to creating artificial humans; any given application would be better served by a specialized system.
Further, even a strong AI would only care about being abused, mistreated, or killed if you fucking made it care in the first place, which would be an incomprehensibly stupid thing to do for a myriad of reasons, not least of which is that there's no fucking point to doing so.
Also note that, contrary to what Grakelin is saying, one could eventually create a sapient artificial being; it would just be entirely pointless to make it as unstable and primitive as a human. (Primitive here referring to the fact that humans adapted to a far more primitive environment than the one they have created for themselves, and face a myriad of problems because of it; replicating those problems would be pointless to the point of absurdity.) It also would not exist on such a scale that any ethical ramifications could result, at least none of the kind being discussed. There's no point to having superintelligent androids running around interacting with humans, and doing so would fall under the aforementioned replication of the problems humans currently face.