I've already won this argument.
Q: Oh really?
A: Ya really!
Q: If we destroy a thinking and learning AI, is it the same as slaughtering an animal?
A: First point: we made it. Second point: a computer doesn't roll around on the ground screaming in its death throes unless we program it to.
Q: Are we ever likely to see an AI that learns and thinks?
A: Probably. Numerous advances have been made in that field, demonstrating AIs that learn from their mistakes and are already being put to use in games, the military, etc. So I can fairly safely say that the human race will live to see one.
Q: Are we ever likely to see an AI that perfectly simulates a huma-
A: NO! There are some things that just can't be programmed. Sure, suppose we mathematically simulated every part of the anatomy, from the digestive system to the billions upon billions of individual neurons. What would we have? A tool. A tool no different from the screwdriver or wrench at home. If we kicked it, there wouldn't be a consciousness asking itself, "Why did the mean man kick me?" It would just run through a series of inputs and outputs that we programmed.
Besides, judging from the sheer number of weapons on this planet that sane people are too afraid to use, it's only a matter of time before a madman comes into power and decides they're a viable solution. Which, as you may have noticed, is happening right now.
And, to religious people: there is a major difference between a tool created by man and a creature created by God.