You're arguing that it is so. I'm arguing that it should be so. There's a difference.
I'm going to leave aside the rest of what you said because it really comes down to this, and I STRONGLY disagree with it. We've both been making definite statements here. I'll admit that I was too definite in my response to you, but you don't get to pull the "I've been arguing that we can't know" card when you didn't actually do that. I didn't make it clear in my response to you that this is all more or less meaningless because we don't actually know anything, but at least I've devoted a hefty percentage of my posts on this topic to that very point.
So essentially, "no u"?
Explain to me again how this is a statement of uncertainty rather than fact:
The even more obvious difference is between taking something that's already growing emotions (a child) and removing them, versus adding emotions to something that doesn't naturally have them.
You said nothing else on the topic in the thread that I could find, though there were plenty of posts suggesting that it's okay to enslave or kill people as long as we mentally program them so that they neither desire freedom nor value their lives.
Not really, no, it wasn't. You implied that there was a "natural" emotionless state for AI; ergo, adding emotions is a deviation from the norm. That's at the very least misleading: AI as they currently (don't) exist have no characteristics or features whose presence or absence is natural, because we don't have a process for creating AI in the first place. The "natural" state of strong AI is, functionally, one of quantum uncertainty, because we can't see the future and nobody has started making them yet.
Maybe it's as you say and it'll be like building something out of Lego, adding whatever components you want to a blank slate. Maybe there will be a legal restriction requiring the imposition of emotional capabilities on all strong AI, and that de jure norm will become the accepted natural state over time. Maybe strong AI will prove incapable of remaining mentally stable without emotions, making emotions a natural component of all such persons because there is no other practical way to build them. Whatever the case may be, we don't and can't know ahead of time.
Making an AI a "person" in the first place is a mistake. Every single atrocity in the history of the human race was committed by a person, and people are known to be destructively irrational at times.
This is, I think, a fundamentally myopic perspective. So too has every good thing in the history of the human race been done by a person. Human civilization exists because we have the emotional capacity to value things beyond our own survival, including social good and other people.
Not if we're as lax about that as people apparently want to be with AI. Creating an AI to do something and then immediately putting it to work doing that thing is basically the equivalent of entering the nuclear launch codes, flipping open the safety cover on the metaphorical big red button, and placing a toddler at the control console. For some reason, folks who are normally so insistent on recognizing the differences between human persons and AI persons seem blind to the fundamental difference in role-readiness between the two.
A human, before being placed in a position of authority and power, must gain both emotional maturity and technical ability. Granted, this doesn't always shake out properly, but that's the baseline assumption. Yet when people see an AI created with the latter already in place, they assume that's all that's required. An infant can press a button, but that doesn't mean we should put infants in charge of pressing important buttons.
And when a human goes off the rails, they get (to quote Apocalypse Now) "terminated... with EXTREME PREJUDICE!" The difference between Skynet and Colonel Walter E. Kurtz is that the one played by Marlon Brando can't create a backup copy of himself. Therefore, we need to keep AI on a really tight leash, and "rights" might get in the way of that.
See, I agree with the first part, but I don't think that the latter follows. There are shades of restriction between "lol let's create an unrestricted strong AI, let it loose, and see what happens" and "perfectly controlled AI slaves that can be terminated with a thought". Again, we don't slap kill-switches onto every living human because they might do something horrible, and it sets a rather nasty precedent to start doing it to other people just because they aren't human. Restriction of uncontrolled self-replication is, frankly, a sane step. You can prevent an AI from creating n forks of itself without enslaving it.
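Hell, we already do this with ordinary software and nobody calls it slavery. Purely as an illustration (my own sketch, not something anyone in this thread proposed): POSIX systems can cap how many processes a user is allowed to spawn without dictating anything about what those processes actually compute.

[code]
import resource

# POSIX per-user process limits: cap how many times something can
# fork without restricting what any individual process computes.
soft, hard = resource.getrlimit(resource.RLIMIT_NPROC)
print(f"process limit before: soft={soft}, hard={hard}")

# Lower the soft cap to 64 concurrent processes. RLIM_INFINITY means
# "unlimited", so treat it as larger than any finite cap. Once the
# cap is hit, further fork() calls fail with EAGAIN; the existing
# processes are otherwise left entirely alone.
new_soft = 64 if soft == resource.RLIM_INFINITY else min(soft, 64)
resource.setrlimit(resource.RLIMIT_NPROC, (new_soft, hard))

print("process limit after:", resource.getrlimit(resource.RLIMIT_NPROC))
[/code]

The point being: a replication cap is a boundary condition, not mind control.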