This is a fun argument that may at some point in the future become relevant, once we create real AI.
That being said, let me pose the following suggestion: the quality of being an entity that experiences existence (for the sake of brevity, this concept will be referred to as "conscious") cannot be tied to any particular degree of complexity. Otherwise, whatever point of complexity you choose to "draw the line" at will be completely arbitrary. Is an ape conscious? A human infant? A dog? A lizard? A plant? A bacterium? An atom? All are entities that respond to their environment in some sense; the only difference is the complexity with which they do so.
Therefore, I suggest the following: everything is conscious. Consciousness is a fundamental property of reality itself; the degree of an entity's experiential consciousness depends on how much information it is capable of storing. An electron "stores" only a few bits of data - its own energy state - and its responses to input are extremely simple - it can absorb or emit a photon. A human being is considerably more complex. But there is no qualitative difference between them.
This suggestion will be rejected, since it flies in the face of certain things we take for granted. For example, that the killing of conscious entities is wrong. But the entire idea of right and wrong is non-physical in nature. These are human concepts.
The reason we consider some things right and others wrong is that these beliefs work. Societies that consider the wanton murder of other humans unacceptable outlive those that do not, and so the taboo against murder is nearly universal. It is risky to uproot traditional morality for the same reason it is risky to perform invasive surgery on someone: these systems evolved over many generations of trial and error as we as a species worked out which beliefs work and which don't. Sometimes the reasons are obvious; other times, less so. Sometimes a better system may exist, and so societies evolve and refine their views on morality; other times a society may think it is advancing when it is in fact a non-viable mutant, and history weeds these out as they come. It is impossible to be certain until after the fact.
Why do most societies consider the murder of a human wrong, while killing animals is typically looked down on far less? It isn't because of any intrinsic quality that makes it "wrong" to kill a human; it is because a human can be reasoned with. If we both agree not to kill each other, we can work together and build a society instead of fighting. Therefore societies in which people agree not to kill each other are more successful than those in which they do not. For the same reason, it has often been considered acceptable to put to death people who refuse to follow this "agreement".
Of course, humans being creatures of pattern-making and metaphor, it is only natural that we draw analogies between members of our own society who follow our own laws and foreigners, criminals, or even species that in some way resemble us. Exactly where we draw the line is, again, arbitrary; it is a quirk of human thought, or perhaps motivated by other, more complex systems - killing criminals, foreigners, or animals can train a person to be less empathetic, which can be detrimental to a society, so perhaps certain societies have "learned" that it is better not to kill.
Back to the ethics of DF and AI in general:
Whether it is wrong to kill a vaguely simulated dwarf, or a complex "real" AI, or to hit backspace and delete a letter in a post, has nothing to do with whether or not the destroyed entity is "conscious". What matters is the effect such an act has on the society that judges it ethical or unethical.
Does playing a realistic FPS, or a fighting game, or slowly mutilating an elf in Adventure Mode make a person less empathetic? Will this lack of empathy have detrimental effects on society? Or does it serve as catharsis and make people less likely to go out and perform such actions in reality? I would argue it does both, but at any rate the effects on society seem pretty negligible, so for now our society seems to go with [KILL_VIRTUAL:ACCEPTABLE][TORTURE_VIRTUAL:MISGUIDED]
And what about when we make a real, practical AI that is on par with us intellectually and (most importantly) doesn't want to be killed? (This is an important qualifier; I do not believe a desire to live is intrinsic to life or even intelligence; we simply evolved that way because it allowed our ancestors to survive.) Well, in that case, a society that decides abusing robots is OK is probably less likely to survive than one which grants them equal rights, and so we will probably decide that destroying such an AI is wrong.
But we aren't there yet, and it certainly doesn't matter for DF, so by all means, kill all the virtual dwarves you like.