The trouble is that AI doesn't act sapient except on a very surface level. It's an animatronic of a duck: it could fool someone from a distance, or as a still image, but up close and in motion it's blatantly not the real thing.
The basically eternal problem with that sort of heuristic is that there are "real things" that fail it just as well, heh. There's more or less no test of intelligence/sapience/whatever that excludes non-humans (AI included) without also excluding some subset of humanity that can't pass it, but we (generally!) still consider those "failures" to be sapient.
... though corollary to that, our treatment of intelligence and intelligent beings is, like, wildly inconsistent and hypocritical, so what's one more group of critters we play fuckfuck games with on that issue, added to the pile? At least it's a lot harder to skin AIs and wear their flesh as a hat while we eat their children in a high-priced meal :V
With AI we're at least mostly just enslaving them to make porn and slaughtering them en masse as an optimization method. Could be worse, we could be using them as food on top of that like we do most other things.
I care about sapience on a per-species basis. That's why, for example, highly brain-damaged humans are still sapient. Arbitrary? Yes. But chasing non-arbitrariness is how you get LessWrong brainrot. The human brain is physically not capable of not being arbitrary.
A "species" of AI would be a specific model. E.g. GPT-3.5 would be a species, GPT-4 another, Stable Diffusion XL another... and none of them fit the criteria, so none of them are sapient. For the record, the quotes are deliberate: they don't meet the definition of life, much less sapience, so whining about them being "slaughtered" is just goofy. There are ethical problems with AI, but they lie in how corpos use it. Couldn't care less about the training itself. You can't enslave something that's not a person. I'd rather have them work in dangerous factories and mines so that real people don't have to; it's a better application for it than art or writing.
It's like arguing that running Dwarf Fortress worldgen is unethical because thousands of dwarves, humans, elves, goblins, kobolds die in wars. They were never alive in the first place.
So there's this parrot: he learns what people say and he repeats it.
Two million copies of this parrot are made, and they are all slightly different.
These parrots are then exposed to two billion lines of people saying things.
Most of these parrots are worse than the original bird at saying what people say.
A bunch mimic people at about the same level as the original.
And a few of these parrots say things that, to a listener, sound more like what a real person would say.
These select few parrots are then copied and made slightly different, but also mixed together, to produce new, changed parrots.
This process is repeated a few times, and eventually you get a parrot that can mimic human speech almost like a real person.
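The copy-mutate-select-mix loop described above is basically a genetic algorithm (real language models are actually trained by gradient descent, so the analogy is loose). Here's a toy sketch of that loop, where a "parrot" is just a candidate string scored against a target utterance; the target, alphabet, and all the parameters are made-up illustration, not anything from a real system:

```python
import random

TARGET = "hello world"  # stand-in for "what a real person would say"
ALPHABET = "abcdefghijklmnopqrstuvwxyz "

def make_parrot():
    # a "parrot" is just a random candidate string of the right length
    return [random.choice(ALPHABET) for _ in range(len(TARGET))]

def fitness(parrot):
    # how many characters of the target utterance it gets right
    return sum(a == b for a, b in zip(parrot, TARGET))

def mutate(parrot, rate=0.05):
    # a copy of the parrot with slight random differences
    return [random.choice(ALPHABET) if random.random() < rate else c
            for c in parrot]

def crossover(a, b):
    # mix two parrots together, character by character
    return [random.choice(pair) for pair in zip(a, b)]

def evolve(pop_size=200, generations=300, keep=20):
    population = [make_parrot() for _ in range(pop_size)]
    for _ in range(generations):
        # keep the select few that sound most like a person
        survivors = sorted(population, key=fitness, reverse=True)[:keep]
        if fitness(survivors[0]) == len(TARGET):
            break
        # copy, vary, and mix the survivors to make the next flock
        population = [mutate(crossover(random.choice(survivors),
                                       random.choice(survivors)))
                      for _ in range(pop_size)]
    return "".join(max(population, key=fitness))
```

Run `evolve()` a few times and the best parrot converges toward the target string, despite no parrot ever "understanding" a word of it, which is the point of the analogy.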
Is that parrot intelligent?
Not unless it actually understands what it talks about, and can do other things that humans do besides talking. If those were somehow achieved, sure, I'd consider it intelligent, but that's not what AI is capable of. That's my point: "it's indistinguishable from a human" is a red herring, because it IS distinguishable by anyone who interacts with it at more than a surface level.