Bay 12 Games Forum

Author Topic: Is playing dwarf fortress ethical?  (Read 52334 times)

Egan_BW

  • Bay Watcher
  • Strong enough to crush.
    • View Profile
Re: Is playing dwarf fortress ethical?
« Reply #165 on: January 22, 2018, 12:03:11 am »

The hell do you care. You're not reading them. And you've repeatedly stated that you don't care about the topic at all.

That is ad hominem.
Ad hominem is when you discount somebody else's arguments by alluding to their character. It doesn't apply here, both because I wasn't alluding to KittyTac's character, and because they weren't making an argument in the first place, just appearing to complain about the very fact that debate is happening at all, in a thread they could have easily ignored.
Logged

KittyTac

  • Bay Watcher
  • Impending Catsplosion. [PREFSTRING:aloofness]
    • View Profile
Re: Is playing dwarf fortress ethical?
« Reply #166 on: January 22, 2018, 12:46:33 am »

I can build a merperson breeding camp for fun, actually.
Logged
Don't trust this toaster that much, it could be a villain in disguise.
Mostly phone-posting, sorry for any typos or autocorrect hijinks.

Egan_BW

  • Bay Watcher
  • Strong enough to crush.
    • View Profile
Re: Is playing dwarf fortress ethical?
« Reply #167 on: January 22, 2018, 12:52:43 am »

Yes, you can.
Logged

Rolan7

  • Bay Watcher
  • [GUE'VESA][BONECARN]
    • View Profile
Re: Is playing dwarf fortress ethical?
« Reply #168 on: January 22, 2018, 12:58:48 am »

"Can I do X" is a form of ethics.
It's somehow even baser than what most types of animals follow: a pattern of thought that literally only sees the present, not even the next second.

But playing DF would definitely be ethical!
Logged
She/they
No justice: no peace.
Quote from: Fallen London, one Unthinkable Hope
This one didn't want to be who they was. On the Surface – it was a dull, unconsidered sadness. But everything changed. Which implied everything could change.

Romeofalling

  • Bay Watcher
    • View Profile
    • The Art of Amul
Re: Is playing dwarf fortress ethical?
« Reply #169 on: January 22, 2018, 07:07:50 am »

....  What is the benefit of being concerned with quantum states of existence that are, by definition, inaccessible? ..... What are the possible outcomes? What is the opportunity cost of a bad choice?
... the reasonable common denominator of modern societies.  Mostly basic things like "murder is bad".

.... the safe premise that it's unethical to create conscious entities just to harm them.  So the arguments are whether we're actually creating conscious entities or not....

Right, and I'm throwing down economic game theory as a challenge to this assumption. I say that another valid ethics is maximizing gain, with each individual responsible for defining "value," and maximizing their personal gain.

If the dorfs are sentient, then they are responsible for maximizing the value they get out of whatever life presents them. Each player is likewise responsible for maximizing the value they get out of their lives. If torturing dorfs provides value to their lives, then it is only as ethical to do it as it is profitable.
Logged

KittyTac

  • Bay Watcher
  • Impending Catsplosion. [PREFSTRING:aloofness]
    • View Profile
Re: Is playing dwarf fortress ethical?
« Reply #170 on: January 22, 2018, 07:23:44 am »

....  What is the benefit of being concerned with quantum states of existence that are, by definition, inaccessible? ..... What are the possible outcomes? What is the opportunity cost of a bad choice?
... the reasonable common denominator of modern societies.  Mostly basic things like "murder is bad".

.... the safe premise that it's unethical to create conscious entities just to harm them.  So the arguments are whether we're actually creating conscious entities or not....

Right, and I'm throwing down economic game theory as a challenge to this assumption. I say that another valid ethics is maximizing gain, with each individual responsible for defining "value," and maximizing their personal gain.

If the dorfs are sentient, then they are responsible for maximizing the value they get out of whatever life presents them. Each player is likewise responsible for maximizing the value they get out of their lives. If torturing dorfs provides value to their lives, then it is only as ethical to do it as it is profitable.

With the caveat that they aren't sentient.
Logged
Don't trust this toaster that much, it could be a villain in disguise.
Mostly phone-posting, sorry for any typos or autocorrect hijinks.

GoblinCookie

  • Bay Watcher
    • View Profile
Re: Is playing dwarf fortress ethical?
« Reply #171 on: January 22, 2018, 01:40:49 pm »

I am a very materialistic person and I know the mind is contained in the brain. We don't have the technology to easily read it yet, though.

If that were so, how come we need all that technology in the first place? Since we *are* brains in this silly model you are using, why can we not understand the workings *of* the brain just by, well, introspection?

Wait, did you not say contained *in* the brain, not the brain itself?

With the caveat that they aren't sentient.

I would argue that since the sentience of all other beings is inherently uncertain, you cannot build an ethical system that depends upon sentience. 

Right, and I'm throwing down economic game theory as a challenge to this assumption. I say that another valid ethics is maximizing gain, with each individual responsible for defining "value," and maximizing their personal gain.

If the dorfs are sentient, then they are responsible for maximizing the value they get out of whatever life presents them. Each player is likewise responsible for maximizing the value they get out of their lives. If torturing dorfs provides value to their lives, then it is only as ethical to do it as it is profitable.

That is not really a theory, more of a how-to guide on being evil. ;) 8)
Logged

Romeofalling

  • Bay Watcher
    • View Profile
    • The Art of Amul
Re: Is playing dwarf fortress ethical?
« Reply #172 on: January 22, 2018, 03:07:07 pm »

That is not really a theory, more of a how-to guide on being evil. ;) 8)

Again with the value judgement based on an unspecified set of parameters that you're assuming are universal! Game Theory is hardly evil; it's a system by which you can concretely compare apples to oranges by converting things to a universal measurement.

For example, in the Star Trek movie where Spock argues that the good of the many outweighs the good of the one, Kirk's counterargument is that, since the good of strangers has less weight to him, (Good * Many) isn't always equal to or greater than (Good * Me).
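(A toy Python sketch of that counterargument; every number and weight below is invented, purely to show how the comparison can flip once each party assigns their own weights:)

Code: [Select]
# Toy sketch of Kirk's weighted comparison; all numbers are invented.
good_per_person = 1.0
many = 430                 # hypothetical number of strangers
weight_strangers = 0.001   # Kirk's discount on the good of strangers
weight_self = 1.0          # full weight on his own good

good_of_many = good_per_person * many * weight_strangers   # 0.43
good_of_me = good_per_person * weight_self                 # 1.0

# With a steep enough personal discount, (Good * Many) < (Good * Me).
print(good_of_many >= good_of_me)   # False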

I understand that someone in this thread is concerned that they might be doing harm to a potentially sentient creature, but economic modelling measures the issue fairly concretely. Is the potential harm you're doing to the potentially sentient creatures greater than the amount of harm you're doing to yourself by worrying about it?

(% chance you're doing harm) * (amount of harm you're doing) * (% chance that the subject can sense your actions)

vs

(time you spend worrying about this) * (value of what you could be doing instead).

How is that evil?
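Here's that comparison as a minimal Python sketch. Every input below is a made-up placeholder; the structure of the trade-off is the point, not the numbers:

Code: [Select]
# Expected-harm vs. cost-of-worry, per the formula above.
# All inputs are hypothetical placeholders.
p_harm = 0.01            # % chance you're doing harm
harm_amount = 5.0        # amount of harm, in arbitrary value units
p_subject_senses = 0.10  # % chance the subject can sense your actions

hours_worrying = 2.0     # time you spend worrying about this
value_per_hour = 3.0     # value of what you could be doing instead

expected_harm = p_harm * harm_amount * p_subject_senses   # 0.005
cost_of_worry = hours_worrying * value_per_hour           # 6.0

# On this model, worry "pays" only if the harm it could avert
# exceeds the value forgone by worrying.
print(expected_harm > cost_of_worry)   # False for these placeholders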
« Last Edit: January 22, 2018, 04:37:26 pm by Romeofalling »
Logged

Reelya

  • Bay Watcher
    • View Profile
Re: Is playing dwarf fortress ethical?
« Reply #173 on: January 22, 2018, 03:27:14 pm »

While it's possible that AIs could become sentient one day, DF entities are not.

And in fact, we're anthropomorphizing them: there are much more complex simulations, e.g. weather models, that we never wonder about being sentient. But when you make some ultra-simplistic model called a "person", people immediately wonder whether it's sentient. DF creatures are just letters on a screen that we've assigned a semantic label of representing people. They're no more sentient than a cardboard cut-out is.

e.g "the sims" are just a paper-thin facade of skin and a bunch of prewritten animations. There's literally nothing going on "inside their head" because there's literally nothing inside their head. Meanwhile, Google Deep Dreams is a very complex neural network. It's actually more believable that there's a spark of "self-awareness" inside something like Google Deep Dreams than inside a Sims character or DF dwarf.
« Last Edit: January 23, 2018, 12:51:02 am by Reelya »
Logged

KittyTac

  • Bay Watcher
  • Impending Catsplosion. [PREFSTRING:aloofness]
    • View Profile
Re: Is playing dwarf fortress ethical?
« Reply #174 on: January 22, 2018, 09:12:19 pm »

Yeah, I'd not harm a benevolent true AI. Because harming a true, sentient being has the same repercussions as harming a human.
Logged
Don't trust this toaster that much, it could be a villain in disguise.
Mostly phone-posting, sorry for any typos or autocorrect hijinks.

Dozebôm Lolumzalìs

  • Bay Watcher
  • what even is truth
    • View Profile
    • test
Re: Is playing dwarf fortress ethical?
« Reply #175 on: January 23, 2018, 01:01:51 am »

If you can't tell whether anything is sentient or not, what even is sentience? Imagine that Omega* came down and told you that a certain thing was sentient; if this would not change your expectations about that thing, not even a little, then the concept is useless. Otherwise, we can tell whether things are sentient, but perhaps not with absolute certainty. (Principle: make your beliefs pay rent in anticipated experience.)

*Omega is a rhetorical/explanatory/conceptual tool that helps construct a thought experiment where the muddy vagueness of the world can be cleared aside to see how our thoughts work when unobscured by uncertainty. For the thought experiment, you trust that what Omega says is definitely true. This is like "pinning down" part of a model to better understand how it all functions. It's also sort of like controls in a scientific experiment.
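(A minimal Python sketch of the "pay rent" test, with invented probabilities: if your predictions are the same whether or not the thing is sentient, then Omega's announcement changes nothing, and the label is doing no predictive work.)

Code: [Select]
# A belief "pays rent" iff conditioning on it changes some prediction.
# All probabilities here are invented for illustration.
p_sentient = 0.5              # prior that the thing is sentient
p_behave_if_sentient = 0.9    # P(observable behavior | sentient)
p_behave_if_not = 0.9         # identical => the label predicts nothing

# Prediction before Omega speaks (law of total probability):
p_before = (p_sentient * p_behave_if_sentient
            + (1 - p_sentient) * p_behave_if_not)

# Prediction after Omega declares "it is sentient":
p_after = p_behave_if_sentient

print(p_before, p_after)   # equal: learning "sentient" shifted nothing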

Sentience is something experienced only by the being that *is* sentient; only the behavior of that creature is ever experienced by other entities, whether sentient or not. Sentience is an explanation for the fact that there is an observer that can experience anything at all; that is how it is *useful*.
Experience is not fundamental. Anything that I can determine about myself through introspection, I could theoretically determine about somebody else by looking at their brain. If there exists a non-physical soul, it does not seem to have any effects on the world. This lack of effects extends to talking about souls, and for that matter thinking about souls.

The sentience of other beings is inferred from their similarity to the observer: the observer knows that they are human and that other beings are human, hence they infer that when other human beings act similarly to them, they do so because they are themselves sentient. This is because the only alternative is that the observer is alone in the universe and the other beings were made to perfectly duplicate the behavior that the observer carries out consciously.
Or alternatively, we can say that something is a mind if it appears to have goals and makes decisions, and is sufficiently complex to be able to communicate with us in some way. Not that this is the True Definition of mind - no such thing exists! And there might be a better definition. My point is that you don't have to define mind-ness by similarity to the definer.

The Omega has in effect already established the certainty of only one thing in real life: the fact that you exist and are conscious.
How do you know that you are conscious?

In regard to the other beings, the options are either that they are real consciousnesses or fake simulated ones. If you succeed in creating a program that simulates the external behavior of conscious beings, then you have succeeded in creating one of those two things, but the problem is that you do not know which of the two you have created. Remember also that other people are quite possibly fake consciousnesses already.
Ah, you mean philosophical zombies! Right? And you're saying that other people could be controlled by a Zombie Master. Is that correct?

Quote
The problem is you have access only to the external behavior of the thing. The fake consciousness is a system that produces the external behaviors of a conscious being without having any 'internal' behaviors that, in me (the certainly conscious being), correspond to my actually being conscious. The problem in making a new type of apparently conscious thing is that, because it is *new*, you cannot determine whether it has the internal mechanics that allow it to produce the behavior you associate with being conscious, even if you accept that other human beings are conscious. It is necessary in effect to isolate the 'mechanic itself', which cannot be done, because even if you could see everything that it is possible to see, there is still the possibility of other things that you cannot see. Other people's consciousness is inferred based upon the assumption that there is no essential mechanical difference between *I* and *you*, and there is no reason to invent some unseen mechanical difference.
But... what do you mean by something being "fake consciousness"? That's like something being "fake red", which acts just like red in all ways but is somehow Not Actually Red.

You might be able to imagine something that doesn't seem conscious enough, like a chatbot, but the reason that we call it Not Conscious is that it fails to meet certain observable criteria.

But we know full well that not everything that we consciously do requires that we be conscious, don't we?
I do not think I could do most of the things I do without having self-reflectivity, etc.

Should we therefore aspire to upload our minds into computers, which can be more easily manipulated by quantum effects, thus gaining "free will"? :P
Or maybe we should make all our decisions based on cosmic noise, thus gaining a pretty good simulation of "free will".

We cannot upload our minds into computers because that is impossible. In the computers there is nowhere for the minds to go; plus, we have no idea where to find minds in order to actually transport them.
What do you mean, "nowhere for the minds to go"? Minds are abstractions, not physical objects. It is not like the brain contains a Mind Lobe, which is incapable of being placed inside a processor. If a computer replicates the function of a brain, the mind has been transferred. The mind is software.
Logged
Quote from: King James Programming
...Simplification leaves us with the black extra-cosmic gulfs it throws open before our frenzied eyes...
Quote from: Salvané Descocrates
The only difference between me and a fool is that I know that I know only that I think, therefore I am.
Sigtext!

Dozebôm Lolumzalìs

  • Bay Watcher
  • what even is truth
    • View Profile
    • test
Re: Is playing dwarf fortress ethical?
« Reply #176 on: January 23, 2018, 01:04:42 am »

I am a very materialistic person and I know the mind is contained in the brain. We don't have the technology to easily read it yet, though.

If that were so, how come we need all that technology in the first place? Since we *are* brains in this silly model you are using, why can we not understand the workings *of* the brain just by, well, introspection?
Being a thing does not imply having complete knowledge of the thing. Does a bridge know civil engineering?

Wait, did you not say contained *in* the brain, not the brain itself?
It's a subtle and not entirely important difference. The mind is currently only found within the brain, and has never been separated. Because of this, we treat the mind and the brain as the same thing quite often.

With the caveat that they aren't sentient.

I would argue that since the sentience of all other beings is inherently uncertain, you cannot build an ethical system that depends upon sentience.

The results of one's actions are fundamentally uncertain, and yet all consequentialist ethical systems depend upon the results of actions. "What should I do?" is dependent on the results of doing A, and B, and so on - even though there is an uncertainty in those terms. You still have to choose whichever consequence you think is best.
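(To make that concrete, a minimal Python sketch with invented outcome distributions: the consequentialist move is to rank actions by expected value, even though each individual outcome is uncertain.)

Code: [Select]
# Choosing under uncertainty: rank actions by expected value.
# The actions and their (probability, value) outcomes are invented.
actions = {
    "A": [(0.8, 10.0), (0.2, -50.0)],   # risky: EV = 8.0 - 10.0 = -2.0
    "B": [(1.0, 1.0)],                  # safe:  EV = 1.0
}

def expected_value(outcomes):
    return sum(p * v for p, v in outcomes)

best = max(actions, key=lambda a: expected_value(actions[a]))
print(best)   # "B": the certain small good beats the risky gamble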

Right, and I'm throwing down economic game theory as a challenge to this assumption. I say that another valid ethics is maximizing gain, with each individual responsible for defining "value," and maximizing their personal gain.

If the dorfs are sentient, then they are responsible for maximizing the value they get out of whatever life presents them. Each player is likewise responsible for maximizing the value they get out of their lives. If torturing dorfs provides value to their lives, then it is only as ethical to do it as it is profitable.

That is not really a theory, more of a how-to guide on being evil. ;) 8)
"Evil" is a concept within ethical theories, and being evil does not make something not-a-theory.
« Last Edit: January 23, 2018, 01:06:55 am by Dozebôm Lolumzalìs »
Logged
Quote from: King James Programming
...Simplification leaves us with the black extra-cosmic gulfs it throws open before our frenzied eyes...
Quote from: Salvané Descocrates
The only difference between me and a fool is that I know that I know only that I think, therefore I am.
Sigtext!

Dozebôm Lolumzalìs

  • Bay Watcher
  • what even is truth
    • View Profile
    • test
Re: Is playing dwarf fortress ethical?
« Reply #177 on: January 23, 2018, 01:08:25 am »

Yeah, I'd not harm a benevolent true AI. Because harming a true, sentient being has the same repercussions as harming a human.
What repercussions do you mean? Is this "if I hurt it, it might hurt me," a sort of Rawlsian veil, or "it causes pain, which is bad"?

(To clarify, the Rawlsian veil is a second type of repercussion, not a description of the first. It is similar, but it is more "if I didn't know whether I was KittyTac or the AI, I would not want KittyTac to hurt the AI.")
« Last Edit: January 23, 2018, 01:09:57 am by Dozebôm Lolumzalìs »
Logged
Quote from: King James Programming
...Simplification leaves us with the black extra-cosmic gulfs it throws open before our frenzied eyes...
Quote from: Salvané Descocrates
The only difference between me and a fool is that I know that I know only that I think, therefore I am.
Sigtext!

KittyTac

  • Bay Watcher
  • Impending Catsplosion. [PREFSTRING:aloofness]
    • View Profile
Re: Is playing dwarf fortress ethical?
« Reply #178 on: January 23, 2018, 01:18:02 am »

I meant "harming sentient things is baaaaaaaaaaaaad."
Logged
Don't trust this toaster that much, it could be a villain in disguise.
Mostly phone-posting, sorry for any typos or autocorrect hijinks.

GoblinCookie

  • Bay Watcher
    • View Profile
Re: Is playing dwarf fortress ethical?
« Reply #179 on: January 23, 2018, 07:46:10 am »

Again with the value judgement based on an unspecified set of parameters that you're assuming are universal! Game Theory is hardly evil; it's a system by which you can concretely compare apples to oranges by converting things to a universal measurement.

For example, in the Star Trek movie where Spock argues that the good of the many outweighs the good of the one, Kirk's counterargument is that, since the good of strangers has less weight to him, (Good * Many) isn't always equal to or greater than (Good * Me).

I understand that someone in this thread is concerned that they might be doing harm to a potentially sentient creature, but economic modelling measures the issue fairly concretely. Is the potential harm you're doing to the potentially sentient creatures greater than the amount of harm you're doing to yourself by worrying about it?

(% chance you're doing harm) * (amount of harm you're doing) * (% chance that the subject can sense your actions)

vs

(time you spend worrying about this) * (value of what you could be doing instead).

How is that evil?

The issue here is that what is valued happens to be ethically significant. There is a difference between making calculations based on values that two entities share and asking whether those values are ethical to start with. Torturing people is not a valid value ethically, however much torturers may value it.

While it's possible that AIs could become sentient one day, DF entities are not.

And in fact, we're anthropomorphizing them: there are much more complex simulations, e.g. weather models, that we never wonder about being sentient. But when you make some ultra-simplistic model called a "person", people immediately wonder whether it's sentient. DF creatures are just letters on a screen that we've assigned a semantic label of representing people. They're no more sentient than a cardboard cut-out is.

e.g "the sims" are just a paper-thin facade of skin and a bunch of prewritten animations. There's literally nothing going on "inside their head" because there's literally nothing inside their head. Meanwhile, Google Deep Dreams is a very complex neural network. It's actually more believable that there's a spark of "self-awareness" inside something like Google Deep Dreams than inside a Sims character or DF dwarf.

The problem is that they are an appearance/representation of humanity.  It has nothing to do with what they objectively *are*.

Experience is not fundamental. Anything that I can determine about myself through introspection, I could theoretically determine about somebody else by looking at their brain. If there exists a non-physical soul, it does not seem to have any effects on the world. This lack of effects extends to talking about souls, and for that matter thinking about souls.

You modelled the world without taking the mind into account; of *course* it does not appear to have any effects on the world. That is because you made up a whole load of mechanics to substitute for the mind. You can make up as many mechanics as you like to explain away anything you like, after all. You can always make up redundant mechanics to explain away all conscious decision making; since you are prejudiced against what you scornfully call a 'non-physical soul' to begin with, the redundancy is not apparent.

You can make up as many mechanics as you like to explain anything you like; it does not mean that they exist or are not redundant.

Or alternatively, we can say that something is a mind if it appears to have goals and makes decisions, and is sufficiently complex to be able to communicate with us in some way. Not that this is the True Definition of mind - no such thing exists! And there might be a better definition. My point is that you don't have to define mind-ness by similarity to the definer.

Something does not have goals or make decisions unless it is genuinely conscious. What you are in effect saying is that it is observed to behave in a way that, if *I* did it, would imply conscious decision making. The point is invalid: you are still defining consciousness against yourself, and the assumptions are flawed in that they fail to take into account that two completely different things may still bring about the same effect.

How do you know that you are conscious?

Because I *am* consciousness. You can disregard the fact of your own consciousness in favour of what you think you know about the unknowable external world all you wish, but that is a stupid thing to do, so *I* will not be joining you.

Ah, you mean philosophical zombies! Right? And you're saying that other people could be controlled by a Zombie Master. Is that correct?

It could be correct, but that is not exactly relevant. The zombie masters are then conscious beings, and the main thrust (my being eternally alone) no longer applies.

But... what do you mean by something being "fake consciousness"? That's like something being "fake red", which acts just like red in all ways but is somehow Not Actually Red.

You might be able to imagine something that doesn't seem conscious enough, like a chatbot, but the reason that we call it Not Conscious is that it fails to meet certain observable criteria.

What I mean is something that exhibits the external behaviour of a conscious being perfectly yet does so by means that are completely different to how a conscious being does it. 

It is nice and mechanical: different mechanics, but the same outcome. A cleverbot is a fake consciousness because its programmers made no attempt to replicate an actual conscious being, merely its externally observable behaviour. It does not become any less fake simply because it becomes good enough to replicate the behaviour perfectly rather than imperfectly.

I do not think I could do most of the things I do without having self-reflectivity, etc.

If you do the same thing a lot consciously, you tend to end up doing it reflexively without being aware of it, I find. But that is just me; perhaps this is not so for you. That is one more reason to conclude you to be a philosophical zombie, I guess, since the more differences there are between you and me, the lower the probability of your also being a conscious being.

What do you mean, "nowhere for the minds to go"? Minds are abstractions, not physical objects. It is not like the brain contains a Mind Lobe, which is incapable of being placed inside a processor. If a computer replicates the function of a brain, the mind has been transferred. The mind is software.

So wrong. Minds are not only objects, material or otherwise; they are the only actual objects whose existence is certain. If a computer replicates the function of a brain, it is nothing but a computer that replicates the function of a brain. The cleverness is yours, not its.

Being a thing does not imply having complete knowledge of the thing. Does a bridge know civil engineering?

A bridge is not conscious, and neither are brains, for that matter. If consciousness had a physical form, then the being would necessarily know the complete details of its own physical makeup, because everything about its physical makeup *is* made of consciousness.

It's a subtle and not entirely important difference. The mind is currently only found within the brain, and has never been separated. Because of this, we treat the mind and the brain as the same thing quite often.

The mind has never been found *anywhere*. The brain is at best the projecting machine that produces the mind; the mind itself, however, is not *in* the brain, because if it were, we would have an intuitive understanding of neuroscience, which we lack. That we need to learn neuroscience in the first place implies that our brain is part of the 'external reality' and not the mind.

The results of one's actions are fundamentally uncertain, and yet all consequentialist ethical systems depend upon the results of actions. "What should I do?" is dependent on the results of doing A, and B, and so on - even though there is an uncertainty in those terms. You still have to choose whichever consequence you think is best.

That is a problem with consequentialist ethical systems.
Logged