This is a fun argument that may at some point in the future become relevant, once we create real AI.
It only has to look like real AI; it does not actually have to be such to be a problem.
That being said, let me pose the following suggestion: The quality of being an entity that experiences existence (for the sake of brevity, this concept will be referred to as "conscious") cannot be tied to any particular degree of complexity. Otherwise, any particular point of complexity at which you choose to "draw the line" will be completely arbitrary. Is an ape conscious? A human infant? A dog? A lizard? A plant? A bacterium? An atom? All are entities that respond to their environment in some sense; the only difference is the complexity with which they do so.
Relative complexity is actually not relevant. Responding to your environment is also not sufficient proof of consciousness; some reactions of conscious beings are reflexive in nature.
No entity in the material world has consciousness; consciousness is an immaterial thing that bonds itself to material things insofar as they fit the requirements for such bonding, possibly in the process becoming *a* consciousness rather than consciousness in general. To that effect, what probably matters is not how complex the thing is but the precise details of how it is organized. We know that humans are conscious; we do not know that apes, dogs, lizards, plants, bacteria or atoms are conscious. Nor can we know in any definitive way; the problem is that 'could' does not imply 'is'.
It is possible for a lizard to be conscious in the same fashion we are, because the structure of a lizard is roughly equivalent to that of a human. A plant, however, lacks a structural organization similar to ours, which means we have no reason to think there is such a thing as a plant consciousness.
The key structural issue here is the relationship between objects and their parts, the house and the brick. If there is a means to centralize information so that the creature can act *as a whole*, rather than simply all the parts acting separately and adding up to a whole in the final result, then we have a basis for consciousness (of the sort we have).
Therefore, I suggest the following: Everything is conscious. Consciousness is a fundamental property of reality itself; the degree of an entity's experiential consciousness depends on how much information it is capable of storing. An electron "stores" only a few bits of data - its own energy state - and its responses to input are extremely simple - it can absorb or emit a photon. A human being is considerably more complex. But there is no qualitative difference between them.
This suggestion will be rejected, since it flies in the face of certain things we take for granted - for example, that the killing of conscious entities is wrong. But the entire idea of right and wrong is non-physical in nature. These are human concepts.
No, consciousness *is* the unreal and the imaginary; it is quite anathema to reality. It is certainly not the basis of reality; that would amount to declaring reality unreal. Same with right and wrong: those concern not what is but what should be, which is again by definition anathema to reality. There is no way to draw an ethical conclusion simply from a material fact; the fact is what the fact is.
The reason why we consider some things to be right and others to be wrong is because these beliefs work. Societies that consider wanton murder of other humans unacceptable outlive those that do not, and so the taboo against murder is nearly universal. It is risky to uproot traditional morality for the same reason it is risky to perform invasive surgery on someone - these systems evolved over many generations of trial and error as we as a species worked out which beliefs work and which ones don't. Sometimes the reasons are obvious; other times, less so. Sometimes a better system may exist, and so societies evolve and refine their views on morality; other times a society may think it is advancing when it is in fact a non-viable mutant, and history weeds these out as they come. It is impossible to be certain until after the fact.
We determine what counts as 'working' according to our ethics.
If things are as they 'should be', then it worked; if they are not how they 'should be', then it didn't work, did it?
Why do most societies consider the murder of a human wrong, while killing animals is typically looked down on less? It isn't because of any intrinsic quality that makes it "wrong" to kill a human; it is because a human can be reasoned with. If we both agree not to kill each other, we can work together and build a society instead of fighting. Therefore societies where people agree not to kill each other are more successful than those where they do not. For the same reason, it has often been considered acceptable to put to death people who refuse to follow this "agreement".
Killing animals is looked down upon less because society is based upon killing animals. Society considers the supreme ethic to be its own survival, not the survival of individual human beings, let alone animals.
Of course, humans being creatures of pattern-making and metaphor, it is only logical that we should draw analogies between members of our own society who follow our laws and foreigners, criminals, or even species that in some way resemble us. Exactly where we draw the line is, again, arbitrary; it is a quirk of human thought, or perhaps motivated by other, more complex systems - killing criminals, foreigners, or animals can train a person to be less empathetic, which can be detrimental to a society, so perhaps certain societies have "learned" that it is better not to kill.
It is not arbitrary. Either you are a conscious being or you are not; there are no levels of consciousness, since having any conscious experience at all means you are a consciousness. We infer from the similarity of others to ourselves that they too are conscious; the alternative is to rely on mechanical models that explain away their behavior.
Back to the ethics of DF and AI in general:
Whether it is wrong to kill a vaguely simulated dwarf, or a complex "real" AI, or to hit backspace and delete a letter in a post, has nothing to do with whether the destroyed entity is "conscious". What matters is the ramifications of doing so for the society that considers it ethical or unethical.
Does playing a realistic FPS, or fighting game, or slowly mutilating an elf in Adventure Mode make a person less empathetic? Will this lack of empathy cause detrimental effects on society? Or does it serve as catharsis and make people less likely to go out and perform such actions in reality? I would argue it does both, but at any rate the effects on society seem to be pretty negligible, so for now our society seems to go with [KILL_VIRTUAL:ACCEPTABLE][TORTURE_VIRTUAL:MISGUIDED]
And what about when we make real, practical AI that is on par with us intellectually and (most importantly) doesn't want to be killed? (This is an important qualifier; I do not believe a desire to live is intrinsic to life or even intelligence; we simply evolved that way because it allowed our ancestors to survive.) Well, in that case, a society that decides abusing robots is OK is probably less likely to survive than one that grants them equal rights. So we will probably decide that destroying such an AI is wrong.
But we aren't there yet, and it certainly doesn't matter for DF, so by all means, kill all the virtual dwarves you like.
I am glad you understand what I was saying earlier.
Catharsis is not a concept that has any real credibility left.