Bay 12 Games Forum

Pages: 1 ... 9 10 [11] 12 13 ... 25

Author Topic: Is playing dwarf fortress ethical?  (Read 52333 times)

GoblinCookie

  • Bay Watcher
    • View Profile
Re: Is playing dwarf fortress ethical?
« Reply #150 on: January 19, 2018, 03:02:23 pm »

A rather concerning thought experiment. Suppose that sentience does have a mathematical model, i.e. can be programmed. Suppose, then, that the steps required to express sentience are expressed onto a piece of paper, and an individual proceeds to follow the instructions of each step. Fundamentally, this is no different than a computer program performing a series of steps in place of this individual. If we agree that the computer following these steps results in sentience, does the piece of paper, when coupled with someone willing to write out each step, produce sentience of its own? If not, what is the difference between an individual performing each step on a piece of paper and a computer processing each step on transistors and memory storage devices?

The problem with this is that we are making a mathematical model not of the sentience itself but of the behavior we expect a sentient being to exhibit; there is no reason to think there are not multiple ways to arrive at that behavior, only one of which actually involves the existence of a sentience.  We then have no way of knowing whether the means we are employing toward our 'ends' is the right means, because we are reverse-engineering the procedure, as it were.  The problem with true AI is, as ever, that it is essentially impossible to tell whether you have actually succeeded in creating it.
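The quoted thought experiment leans on the fact that "running a program" is nothing more than mechanically following a rule table, something a person with paper and pencil can do step by step. A minimal sketch in Python (a toy Turing-machine stepper, purely illustrative; the rule table is an invented example):

```python
def run_turing_machine(rules, tape, state="start", pos=0, max_steps=10_000):
    """Follow the rule table until the 'halt' state is reached.
    rules maps (state, symbol) -> (symbol_to_write, move, next_state)."""
    cells = dict(enumerate(tape))          # sparse tape; '_' is blank
    for _ in range(max_steps):
        if state == "halt":
            break
        symbol = cells.get(pos, "_")
        write, move, state = rules[(state, symbol)]   # one table lookup...
        cells[pos] = write                            # ...one write...
        pos += 1 if move == "R" else -1               # ...one head move
    return "".join(cells[i] for i in sorted(cells)).strip("_")

# Example rule table: unary increment (append one '1' to a block of '1's).
rules = {
    ("start", "1"): ("1", "R", "start"),  # scan right over the 1s
    ("start", "_"): ("1", "R", "halt"),   # write a final 1, then halt
}
```

Each loop iteration is a single lookup-write-move that a human could perform by hand; the disagreement in the thread is over whether performing those steps, on any medium, ever amounts to sentience.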
Logged

Dozebôm Lolumzalìs

  • Bay Watcher
  • what even is truth
    • View Profile
    • test
Re: Is playing dwarf fortress ethical?
« Reply #151 on: January 19, 2018, 06:18:50 pm »

A rather concerning thought experiment. Suppose that sentience does have a mathematical model, i.e. can be programmed. Suppose, then, that the steps required to express sentience are expressed onto a piece of paper, and an individual proceeds to follow the instructions of each step. Fundamentally, this is no different than a computer program performing a series of steps in place of this individual. If we agree that the computer following these steps results in sentience, does the piece of paper, when coupled with someone willing to write out each step, produce sentience of its own? If not, what is the difference between an individual performing each step on a piece of paper and a computer processing each step on transistors and memory storage devices?

The problem with this is that we are making a mathematical model not of the sentience itself but of the behavior we expect a sentient being to exhibit; there is no reason to think there are not multiple ways to arrive at that behavior, only one of which actually involves the existence of a sentience.  We then have no way of knowing whether the means we are employing toward our 'ends' is the right means, because we are reverse-engineering the procedure, as it were.  The problem with true AI is, as ever, that it is essentially impossible to tell whether you have actually succeeded in creating it.
If you can't tell whether anything is sentient or not, what even is sentience? Imagine that Omega* came down and told you that a certain thing was sentient; if this would not change your expectations about that thing, not even a little, then the concept is useless. Otherwise, we can tell whether things are sentient, but perhaps not with absolute certainty. (Principle: make your beliefs pay rent in anticipated experience.)

*Omega is a rhetorical/explanatory/conceptual tool that helps construct a thought experiment where the muddy vagueness of the world can be cleared aside to see how our thoughts work when unobscured by uncertainty. For the thought experiment, you trust that what Omega says is definitely true. This is like "pinning down" part of a model to better understand how it all functions. It's also sort of like controls in a scientific experiment.

A rather concerning thought experiment. Suppose that sentience does have a mathematical model, i.e. can be programmed. Suppose, then, that the steps required to express sentience are expressed onto a piece of paper, and an individual proceeds to follow the instructions of each step. Fundamentally, this is no different than a computer program performing a series of steps in place of this individual. If we agree that the computer following these steps results in sentience, does the piece of paper, when coupled with someone willing to write out each step, produce sentience of its own? If not, what is the difference between an individual performing each step on a piece of paper and a computer processing each step on transistors and memory storage devices?
Ah, that's a good way of putting it. A more abstract and vague thought experiment along these lines was what pushed me toward omnirealism - either all computable minds are real, or no minds are real, or [some weird thing that says you realize a mind by writing down symbols but not by thinking about the model] (but the model is only present in some interconnected neurons; paper is part of my extended brain, so this possibility is invalid), or [some weird thing that says you realize a mind when you understand how it works], or [some weird thing that says you realize a mind not by understanding it, but by predicting how it works]. I prefer the first, because I don't see an important difference between the mathematical structure being known and the structure being run. (There are ways to get the output without directly running things. If I use abstractions to determine what a model-mind does, rather than going variable-by-variable, I don't think the mind-ness has disappeared. And if you can make a mind real just by knowing the mathematical model that describes how it works... then we have to define "knowledge," because otherwise I could just make a massive random file and say "statistically, at least one portion of this file would produce a mind if run with one of the nearly infinitely many possible interpretation systems." Or if I make it even larger, the same can be said for any given language. Heck, a rock has information. Maybe the rock's atoms, when analyzed and put into an interpretation system, make a mind. That's just ridiculous. We've effectively said that all minds are real anyway, but in a weird and roundabout way.)

(This assumes that the substrate is not inherently important to the mind - I am run on a lump of sparky flesh, you are run on a lump of sparky silicon, but that doesn't make one of us necessarily not a person. This seems obvious to me, but is probably a controversial statement.)
Well, fundamentally the substrate doesn't really matter - that's the Church-Turing thesis, after all. If it works on a computer, it works on pencil and paper. In that regard, if we consider that sentience is Turing-computable, then it must be true that sentience can exist on any medium. So long as there is something that retains information, and something to act upon it by explicit instruction, there can be sentience.

There is a catch, though. Unbounded nondeterminism - the notion of a nondeterministic process whose time of termination is unbounded - can arise in concurrent systems. Under clever interpretations, unbounded nondeterminism can be considered hypercomputational: any actor in such a system has an unbounded runtime and a nondeterministic outcome, so the end result of such a system remains unknowable. If sentience requires such unbounded nondeterminism, then such a system would no longer be bound by the Church-Turing thesis, and need not be replicable on pen and paper. We already know that the human brain is highly concurrent, so it's plausible that sentience requires this unbounded nondeterminism arising through concurrency in order to exist. It wouldn't mean that we cannot produce sentience on a computer - we've already produced systems with unbounded nondeterminism - but its existence on a computer would not necessarily imply that it can exist on simpler media. All without violating any existing proofs.

So it is plausible that sentience can exist in a form that can run on a computer or in a brain, but not with pen and paper. It would simply require a great deal of concurrency.
I don't understand how a (non-quantum?) computer could do anything that I can't do on paper and pencil, given arbitrarily but finitely more time and resources.

Also, I don't see how unbounded nondeterminism applies to human beings. Unless quantum uncertainty plays an important role in human cognition, we're probably just deterministic (albeit very chaotic), right? And what does the time of termination even mean for a human mind?
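For what it's worth, the "unbounded nondeterminism" being debated has a classic toy illustration from Hewitt's actor-model work: a counter that a pending stop message eventually halts, yet with no finite bound on its result across all schedules. A hedged sketch in Python, with the scheduler's choices stubbed out by a seeded random source (an assumption of this sketch, not part of the original formalism):

```python
import random

def unbounded_counter(rng):
    """At each step the 'scheduler' either finally delivers the pending
    stop message or lets the counter tick once more.  Every run halts
    (with probability 1), yet no single bound covers every schedule."""
    n = 0
    while True:
        if rng.random() < 0.1:   # stop message delivered at last
            return n
        n += 1                   # stop still pending; keep counting
```

Note that each individual run is still simulable step by step on paper; claims of genuine hypercomputation rest on stronger fairness assumptions over infinitely many schedules, which is exactly where the disagreement above lies.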

But why would it be bad to simulate humans in such a situation, yet fine to script a story where that happens?
The main question is whether hypotheticals are morally real, then. And keep in mind that (as far as I know) we can never rule out that we are living in a simulation ourselves.
We're certainly living in a hypothetical universe that is being simulated by an infinite number of hypothetical computers.  But ours is special, as I'll demonstrate.

I'm now imagining a universe like XKCD's man-with-rocks, except the person is a woman.  Both these universes are now simulating our universe.  There are infinite permutations available, all simulating our universe.

In fact there are universes simulating every universe, including permutations of our universe.  Like in the comic, the man misplaces a rock - permutations like that, including the moon disappearing or the strong nuclear force ceasing.

If our universe is merely one of these infinite simulations, then the odds of physics continuing to work are statistically near zero.
If all conceivable, hypothetical universes had consciousness like you or I, then statistically speaking we should be experiencing total chaos.  But we aren't.
Therefore, it's morally safe to imagine hypothetical universes, since the beings within are astronomically unlikely to have consciousness. 

Even if they are otherwise copies, or near-copies, of us.  Even if they react as we would, and it's natural to feel empathy for them.

We could definitely be brains in jars, but I reject the idea that simulation can create consciousness.
(This "proof" from my butt sounds familiar, I'm probably remembering something I read...  Probably from some sci-fi growing up.  I'd like to know what it's called, if anyone recognizes it.  I really should study actual philosophy more.)
Or, alternatively, there could be somebody simulating universes who doesn't misplace bits often?

Or apply the anthropic principle. Nobody ever experiences ceasing-to-exist.
« Last Edit: January 19, 2018, 06:33:33 pm by Dozebôm Lolumzalìs »
Logged
Quote from: King James Programming
...Simplification leaves us with the black extra-cosmic gulfs it throws open before our frenzied eyes...
Quote from: Salvané Descocrates
The only difference between me and a fool is that I know that I know only that I think, therefore I am.
Sigtext!

KittyTac

  • Bay Watcher
  • Impending Catsplosion. [PREFSTRING:aloofness]
    • View Profile
Re: Is playing dwarf fortress ethical?
« Reply #152 on: January 19, 2018, 08:32:51 pm »

This is only slightly related to DF. I recommend moving your walls of text to a philosophy thread in General Discussion.
Logged
Don't trust this toaster that much, it could be a villain in disguise.
Mostly phone-posting, sorry for any typos or autocorrect hijinks.

Eschar

  • Bay Watcher
  • hello
    • View Profile
Re: Is playing dwarf fortress ethical?
« Reply #153 on: January 20, 2018, 12:44:31 pm »

Dwarves in DF are not remotely close to even being 1% sentient. You might have overestimated exactly how deep DF goes. Dwarves' "thoughts" are nothing but simple triggers that raise or lower a single number variable depending on specific stimuli. If a dwarf's relative happens to die, the dwarf's stress goes up by a set value written in the code. Dwarves do not have complex emotional responses; they only sometimes "tantrum" or go "insane", both of which are exactly as mathematical and procedural as their thoughts. They cannot think deeply, either. A military dwarf, if they "see" a hostile creature, will immediately run it down with mechanical precision, whether or not it would be better to wait for their fellow militia.

It is completely ethical to play Dwarf Fortress, because the creatures with which we interact are not in fact creatures, but simply data being manipulated by deterministic processes. Killing somebody in this game (or in any game) does nothing but change a few bytes in your computer's memory. Not even close to killing somebody in real life.
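The "simple triggers" model described in the quote can be sketched in a few lines. This is a hypothetical illustration only, not actual Dwarf Fortress code; the trigger names and numbers are invented:

```python
# Hypothetical stress-trigger table; names and values are made up
# for illustration, not taken from the DF raws or source.
STRESS_TRIGGERS = {
    "relative_died": +25,
    "saw_masterwork": -5,
    "slept_on_floor": +2,
}

class Dwarf:
    def __init__(self):
        self.stress = 0

    def feel(self, stimulus):
        # A "thought" is a table lookup that nudges one number.
        self.stress += STRESS_TRIGGERS.get(stimulus, 0)
        # A "tantrum" is a threshold check, not an emotion.
        return "tantrum" if self.stress > 100 else "fine"
```

The parody reply makes its point by observing that a sufficiently uncharitable description makes human neurochemistry sound like the same kind of table.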



Humans on Earth are not remotely close to even being 1% sentient. You might have overestimated exactly how complex sentience is. Human "thoughts" are nothing but simple triggers that raise or lower a chemical depending on specific stimuli. If a human's relative happens to die, the human's stress goes up by a set value written in the DNA. Humans do not have complex emotional responses; they only sometimes "tantrum" or go "insane", both of which are exactly as mathematical and procedural as their thoughts. They cannot think deeply, either. A policing human, if he "feels" threatened, will immediately shoot it down with mechanical precision, whether or not it would be better to wait for backup.

It is completely ethical to play Thermonuclear War or Murdering Hobo, because the creatures with which we interact are not in fact creatures, but simply data in disposable biodegradable shells being manipulated by a deterministic process. Killing someone in these games (or any games) does nothing but change a few bits of matter in the universe. Not even close to killing a dwarf in Dwarf Fortress.
I think you have a very misinformed notion of how the human brain works.

Given an atheistic or materialist position, jecowa's conclusion is completely logical. (I am a theist, but as far as I can immediately tell I'm the only one in this discussion, so I'm going to keep it to myself.)

I do have a few technicalities to point out though; feel perfectly free to skip over them.
Spoiler: Technicalities

Have any of you read Gödel, Escher, Bach: An Eternal Golden Braid by Douglas Hofstadter, by the way? It is a masterpiece about this very topic (minus the DF references).

I don't know why people find merperson breeding camps even slightly horrifying. Guess that sentence sums up my standing on DF ethics.

Amen. I'm still annoyed at Toady for removing the mer-bone quality bonus.
« Last Edit: January 20, 2018, 12:48:33 pm by Eschar »
Logged

Dozebôm Lolumzalìs

  • Bay Watcher
  • what even is truth
    • View Profile
    • test
Re: Is playing dwarf fortress ethical?
« Reply #154 on: January 20, 2018, 03:53:55 pm »

I do have a few technicalities to point out though; feel perfectly free to skip over them.
1. The universe as we know it is not deterministic: the quantum processes at the subatomic level involve true randomness.
This is negligible on the level of human cognition, as far as I know. Quantum effects are easily overwhelmed by thermal noise in most situations. If a human brain is unstable enough that quantum effects can push it from one decision to another, it's unstable enough that thermal noise will do the same. To the extent that people have reliable and consistent personality traits etc., we are not quantum minds. (This is not a statement that we can never be quantum minds, but it will take laboratory-grade controlled conditions, I believe, not wet, squishy, warm brains.)
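A back-of-envelope check of the thermal-noise point, for concreteness. The decoherence figures are the commonly cited Tegmark-style estimates, used here as assumptions rather than settled fact:

```python
K_B = 1.380649e-23   # Boltzmann constant, J/K
T_BODY = 310.0       # human body temperature, K

# Thermal energy scale in a warm brain: roughly 4.3e-21 J, i.e. ~27 meV.
thermal_j = K_B * T_BODY
thermal_mev = thermal_j / 1.602176634e-19 * 1e3

# Tegmark-style comparison: neural decoherence (~1e-13 s at most)
# versus neural dynamics (~1e-3 s per spike).  Superpositions decohere
# about ten orders of magnitude faster than neurons compute.
decoherence_s = 1e-13
dynamics_s = 1e-3
ratio = dynamics_s / decoherence_s
```

On these rough numbers, any quantum superposition in the brain is washed out long before it could influence a single firing decision, which is the sense in which thermal noise "easily overwhelms" quantum effects.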
Have any of you read Gödel, Escher, Bach: An Eternal Golden Braid by Douglas Hofstadter, by the way? It is a masterpiece about this very topic (minus the DF references).
I've read the first third... I should get around to the rest.
« Last Edit: January 20, 2018, 03:57:15 pm by Dozebôm Lolumzalìs »
Logged
Quote from: King James Programming
...Simplification leaves us with the black extra-cosmic gulfs it throws open before our frenzied eyes...
Quote from: Salvané Descocrates
The only difference between me and a fool is that I know that I know only that I think, therefore I am.
Sigtext!

Egan_BW

  • Bay Watcher
  • Strong enough to crush.
    • View Profile
Re: Is playing dwarf fortress ethical?
« Reply #155 on: January 20, 2018, 07:00:31 pm »

Should we therefore aspire to upload our minds into computers which can be more easily manipulated by quantum effects, thus gaining "free will"? :P
Or maybe we should make all our decisions based on cosmic noise, thus gaining a pretty good simulation of "free will".
Logged

Eschar

  • Bay Watcher
  • hello
    • View Profile
Re: Is playing dwarf fortress ethical?
« Reply #156 on: January 20, 2018, 07:17:08 pm »

I do have a few technicalities to point out though; feel perfectly free to skip over them.
1. The universe as we know it is not deterministic: the quantum processes at the subatomic level involve true randomness.
This is negligible on the level of human cognition, as far as I know.

Indeed, hence it can be considered a technicality.
Logged

KittyTac

  • Bay Watcher
  • Impending Catsplosion. [PREFSTRING:aloofness]
    • View Profile
Re: Is playing dwarf fortress ethical?
« Reply #157 on: January 20, 2018, 09:19:59 pm »

Okay, fewer walls of text. Good.
Logged
Don't trust this toaster that much, it could be a villain in disguise.
Mostly phone-posting, sorry for any typos or autocorrect hijinks.

Egan_BW

  • Bay Watcher
  • Strong enough to crush.
    • View Profile
Re: Is playing dwarf fortress ethical?
« Reply #158 on: January 20, 2018, 11:00:54 pm »

The hell do you care. You're not reading them. And you've repeatedly stated that you don't care about the topic at all.
Logged

GoblinCookie

  • Bay Watcher
    • View Profile
Re: Is playing dwarf fortress ethical?
« Reply #159 on: January 21, 2018, 08:18:35 am »

If you can't tell whether anything is sentient or not, what even is sentience? Imagine that Omega* came down and told you that a certain thing was sentient; if this would not change your expectations about that thing, not even a little, then the concept is useless. Otherwise, we can tell whether things are sentient, but perhaps not with absolute certainty. (Principle: make your beliefs pay rent in anticipated experience.)

*Omega is a rhetorical/explanatory/conceptual tool that helps construct a thought experiment where the muddy vagueness of the world can be cleared aside to see how our thoughts work when unobscured by uncertainty. For the thought experiment, you trust that what Omega says is definitely true. This is like "pinning down" part of a model to better understand how it all functions. It's also sort of like controls in a scientific experiment.

Sentience is something experienced by the being that *is* sentient; only the behavior of that creature is ever experienced by other entities, whether sentient or not.  Sentience is an explanation for the fact that there is an observer that can experience anything at all - that is how it is *useful*.  The sentience of other beings is inferred from their similarity to the observer: the observer knows that they are human and that other beings are human, hence they infer that when other human beings act similarly to them, they do so because they are themselves sentient.  The only alternative is that the observer is alone in the universe, and the other beings were made to perfectly duplicate the behavior that the observer carries out consciously.

Omega has in effect already established the certainty of only one thing in real life: the fact that you exist and are conscious.  In regard to other beings, the options are either that they are real consciousnesses or fake, simulated ones.  If you succeed in creating a program that simulates the external behavior of conscious beings, then you have succeeded in creating one of those two things, but the problem is that you do not know which of the two you have created.  Remember also that other people are quite possibly fake consciousnesses already.

The problem is that you have access only to the external behavior of the thing.  A fake consciousness is a system that produces the external behaviors of a conscious being without having any of the 'internal' behaviors that, in me (the certainly conscious being), correspond to my actually being conscious.  The problem in making a new type of apparently conscious thing is that, because it is *new*, you cannot determine whether it has the internal mechanics that produce the behavior you associate with being conscious, even if you accept that other human beings are conscious.  It is necessary in effect to isolate the 'mechanic itself', which cannot be done, because even if you could see everything that it is possible to see, there is still the possibility of other things that you cannot see.  Other people's consciousness is inferred on the assumption that there is no essential mechanical difference between *I* and *you*, and there is no reason to invent some unseen mechanical difference.

But we know full well that not everything we consciously do requires that we be conscious, don't we?

Should we therefore aspire to upload our minds into computers which can be more easily manipulated by quantum effects, thus gaining "free will"? :P
Or maybe we should make all our decisions based on cosmic noise, thus gaining a pretty good simulation of "free will".

We cannot upload our minds into computers because that is impossible.  In a computer there is nowhere for the mind to go; plus, we have no idea where to find minds in order to actually transport them.
Logged

KittyTac

  • Bay Watcher
  • Impending Catsplosion. [PREFSTRING:aloofness]
    • View Profile
Re: Is playing dwarf fortress ethical?
« Reply #160 on: January 21, 2018, 09:48:54 am »

I am a staunch materialist, and I know the mind is contained in the brain. We don't have the technology to easily read it yet, though.
Logged
Don't trust this toaster that much, it could be a villain in disguise.
Mostly phone-posting, sorry for any typos or autocorrect hijinks.

Romeofalling

  • Bay Watcher
    • View Profile
    • The Art of Amul
Re: Is playing dwarf fortress ethical?
« Reply #161 on: January 21, 2018, 06:11:30 pm »

I'm coming to this conversation late and skimming, but I can't seem to find any point in the conversation where we define the value of "ethical behavior." What is the benefit of being concerned with quantum states of existence that are, by definition, inaccessible? There seems to be an implicit assumption here that our choices will be judged by some third party by a rubric not presented to us. What are the possible outcomes? What is the opportunity cost of a bad choice?
Logged

Eschar

  • Bay Watcher
  • hello
    • View Profile
Re: Is playing dwarf fortress ethical?
« Reply #162 on: January 21, 2018, 10:47:14 pm »

The hell do you care. You're not reading them. And you've repeatedly stated that you don't care about the topic at all.

That is an ad hominem.
Logged

Rolan7

  • Bay Watcher
  • [GUE'VESA][BONECARN]
    • View Profile
Re: Is playing dwarf fortress ethical?
« Reply #163 on: January 21, 2018, 10:56:42 pm »

I'm coming to this conversation late and skimming, but I can't seem to find any point in the conversation where we define the value of "ethical behavior." What is the benefit of being concerned with quantum states of existence that are, by definition, inaccessible? There seems to be an implicit assumption here that our choices will be judged by some third party by a rubric not presented to us. What are the possible outcomes? What is the opportunity cost of a bad choice?
My understanding (and I skimmed some as well) is that people mostly assumed "ethical behavior" as the reasonable common denominator of modern societies.  Mostly basic things like "murder is bad".

The more metaphysical arguments are based on the safe premise that it's unethical to create conscious entities just to harm them.  So the arguments are whether we're actually creating conscious entities or not.  (I think the consensus is well past DF at this point, and into whether it's even possible to do such a thing in simulations.)
Logged
She/they
No justice: no peace.
Quote from: Fallen London, one Unthinkable Hope
This one didn't want to be who they was. On the Surface – it was a dull, unconsidered sadness. But everything changed. Which implied everything could change.

KittyTac

  • Bay Watcher
  • Impending Catsplosion. [PREFSTRING:aloofness]
    • View Profile
Re: Is playing dwarf fortress ethical?
« Reply #164 on: January 21, 2018, 11:25:19 pm »

DF characters aren't sentient. That's it. The walls of text aren't really related to DF anymore. Just create another thread for them.
Logged
Don't trust this toaster that much, it could be a villain in disguise.
Mostly phone-posting, sorry for any typos or autocorrect hijinks.