Again with the value judgement based on an unspecified set of parameters that you're assuming are universal! Game Theory is hardly evil; it's a system by which you can concretely compare apples to oranges by converting things to a universal measurement.
For example, take the Star Trek movie where Spock argues that the good of the many outweighs the good of the one. Kirk's counterargument is that since the good of strangers has less weight to him, (Good * Many) isn't always equal to or greater than (Good * Me).
I understand that someone in this thread is concerned that they might be doing harm to a potentially sentient creature, but economic modelling measures the issue fairly concretely. Is the potential harm you're doing to the potentially sentient creatures greater than the amount of harm you're doing to yourself by worrying about it?
(% chance you're doing harm) * (amount of harm you're doing) * (% chance that the subject can sense your actions)
vs
(time you spend worrying about this) * (value of what you could be doing instead).
How is that evil?
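To make that comparison concrete, here is a minimal sketch of the expected-value calculation above in Python. Every function name and number is a made-up placeholder purely for illustration, not a claim about what the actual values are.

```python
# Minimal sketch of the expected-value comparison above.
# All numbers are made-up placeholders, purely for illustration.

def expected_harm(p_harm, harm_amount, p_can_sense):
    """Expected harm done to the (potentially sentient) creature."""
    return p_harm * harm_amount * p_can_sense

def cost_of_worrying(hours_worrying, value_per_hour):
    """Opportunity cost of the time spent worrying about it."""
    return hours_worrying * value_per_hour

# Hypothetical figures: 1% chance you're doing harm, harm worth 10 units,
# 5% chance the subject can sense it, vs. 2 hours of worry at 5 units/hour.
harm_side = expected_harm(0.01, 10, 0.05)   # 0.005 units
worry_side = cost_of_worrying(2, 5)         # 10 units

if harm_side > worry_side:
    print("The expected harm outweighs the cost of worrying.")
else:
    print("The worrying costs more than the expected harm it prevents.")
```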
The issue here is that what is valued happens to be ethically significant. There is a difference between making calculations based on values that two entities share and asking whether those values are ethical to start with. Torturing people is not a valid value ethically, however much torturers may value it.
While it's possible that AIs could become sentient one day, DF entities are not.
And in fact, we're anthropomorphizing them: there are much more complex simulations whose sentience we never wonder about, e.g. complex weather simulations. But when you make some ultra-simplistic model called a "person", people immediately wonder whether it's sentient. DF creatures are just letters on a screen that we've given the semantic label of representing people. They're no more sentient than a cardboard cut-out is.
e.g "the sims" are just a paper-thin facade of skin and a bunch of prewritten animations. There's literally nothing going on "inside their head" because there's literally nothing inside their head. Meanwhile, Google Deep Dreams is a very complex neural network. It's actually more believable that there's a spark of "self-awareness" inside something like Google Deep Dreams than inside a Sims character or DF dwarf.
The problem is that they are an appearance/representation of humanity. It has nothing to do with what they objectively *are*.
Experience is not fundamental. Anything that I can determine about myself through introspection, I could theoretically determine about somebody else by looking at their brain. If there exists a non-physical soul, it does not seem to have any effects on the world. This lack of effects extends to talking about souls, and for that matter thinking about souls.
You modelled the world without taking the mind into account, so of *course* it does not appear to have any effects on the world; that is because you made up a whole load of mechanics to substitute for the mind. You can make up as many mechanics as you like to explain away anything you like, after all. You can always invent redundant mechanics to explain away all conscious decision making; since you are prejudiced against what you scornfully call a 'non-physical soul' to begin with, the redundancy is not apparent.
You can make up as many mechanics as you like to explain anything you like; that does not mean they exist or are not redundant.
Or alternatively, we can say that something is a mind if it appears to have goals and makes decisions, and is sufficiently complex to be able to communicate with us in some way. Not that this is the True Definition of mind - no such thing exists! And there might be a better definition. My point is that you don't have to define mind-ness by similarity to the definer.
Something does not have goals or make decisions unless it is genuinely conscious. What you are in effect saying is that it is observed to behave in a way that, if *I* did it, would imply conscious decision making. The point is invalid: you are still defining consciousness against yourself, only with flawed assumptions that fail to take into account that two completely different things may still bring about the same effect.
How do you know that you are conscious?
Because I *am* consciousness. You can disregard the fact of your own consciousness in favour of what you think you know about the unknowable external world all you wish, but that is a stupid thing to do, so *I* will not be joining you.
Ah, you mean philosophical zombies! Right? And you're saying that other people could be controlled by a Zombie Master. Is that correct?
It could be correct, but that is not exactly relevant. The zombie masters are then conscious beings and the main thrust (my being eternally alone) no longer applies.
But... what do you mean by something being "fake consciousness"? That's like something being "fake red", which acts just like red in all ways but is somehow Not Actually Red.
You might be able to imagine something that doesn't seem conscious enough, like a chatbot, but the reason that we call it Not Conscious is that it fails to meet certain observable criteria.
What I mean is something that exhibits the external behaviour of a conscious being perfectly yet does so by means that are completely different to how a conscious being does it.
It is nice and mechanical: different mechanics, but the same outcome. A cleverbot is a fake consciousness because its programmers made no attempt to replicate an actual conscious being, merely its externally observable behaviour. It does not become any less fake simply because it becomes good enough to replicate that behaviour perfectly rather than imperfectly.
I do not think I could do most of the things I do without having self-reflectivity, etc.
If you do the same thing a lot consciously, you tend to end up doing it reflexively without being aware of it, I find. But that is just me; perhaps this is not so for you. It is one more reason to conclude you to be a philosophical zombie, I guess, since the more differences there are between you and me, the lower the probability of your also being a conscious being.
What do you mean, "nowhere for the minds to go"? Minds are abstractions, not physical objects. It is not like the brain contains a Mind Lobe, which is incapable of being placed inside a processor. If a computer replicates the function of a brain, the mind has been transferred. The mind is software.
So wrong. Not only are minds objects, material or otherwise, but they are the only objects whose existence is certain. If a computer replicates the function of a brain, it is nothing but a computer that replicates the function of a brain. The cleverness is yours, not its.
Being a thing does not imply having complete knowledge of the thing. Does a bridge know civil engineering?
A bridge is not conscious, and neither are brains for that matter. If consciousness had a physical form, then the being would necessarily know the complete details of its own physical makeup, because everything about its physical makeup *is* made of consciousness.
It's a subtle and not entirely important difference. The mind is currently only found within the brain, and has never been separated. Because of this, we treat the mind and the brain as the same thing quite often.
The mind has never been found *anywhere*. The brain is at best the projecting machine that produces the mind; the mind itself, however, is not *in* the brain, because if it were, we would have an intuitive understanding of neuroscience, which we lack. That we need to learn neuroscience in the first place implies that our brain is part of the 'external reality' and not the mind.
The results of one's actions are fundamentally uncertain, and yet all consequentialist ethical systems depend upon the results of actions. "What should I do?" is dependent on the results of doing A, and B, and so on - even though there is an uncertainty in those terms. You still have to choose whichever consequence you think is best.
That is a problem with consequentialist ethical systems.