Bay 12 Games Forum


Author Topic: Things that made you absolutely terrified today  (Read 1977698 times)

Reelya

  • Bay Watcher
    • View Profile
Re: Things that made you absolutely terrified today
« Reply #18795 on: May 26, 2018, 09:29:23 am »

Being able to simulate consciousness, however, would completely overturn what we think we know about concepts such as "self" and "identity".

For example: if you're running a simulation on Machine#1, which is conscious, and stop the simulation at a point "X", then copy the data to Machine#2, then continue running it, we'd say the consciousness "carried over" from the old hardware to the new hardware, right? We basically froze the state, transferred it to a new machine and kept running it, so the conscious entity should seamlessly carry over.
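To make that concrete, here's a toy sketch (purely illustrative; the update rule and state layout are just made up for the example): a deterministic "mind" is nothing but a state plus an update rule, so snapshotting the state and resuming it on different hardware produces exactly the same sequence of states.

Code: [Select]
# Toy model: a deterministic "mind" is just (state, update rule).
# Snapshot on machine #1, resume on machine #2, and the sequence
# of states is identical -- which is the whole intuition behind
# saying the consciousness "carries over".
import copy

def step(state, sensory_input):
    # Hypothetical update rule: any pure function of (state, input) works.
    return {"tick": state["tick"] + 1,
            "memory": state["memory"] + [sensory_input]}

inputs = ["warm", "bright", "loud", "cold"]

# Run on "Machine #1" up to point X (after two inputs), then freeze.
state_m1 = {"tick": 0, "memory": []}
for s in inputs[:2]:
    state_m1 = step(state_m1, s)

snapshot = copy.deepcopy(state_m1)   # copy the data to Machine #2

# Continue on "Machine #2" from the snapshot.
state_m2 = snapshot
for s in inputs[2:]:
    state_m2 = step(state_m2, s)

# For comparison: one uninterrupted run on Machine #1.
uninterrupted = {"tick": 0, "memory": []}
for s in inputs:
    uninterrupted = step(uninterrupted, s)

assert state_m2 == uninterrupted     # bit-for-bit the same history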

However, what if you ran both machines with identical inputs from the start, up to the same point "X", then stopped Machine#1? From Machine#1's perspective, that's no different to the first scenario: Machine#1 was conscious, then it stopped, yet Machine#2 contains the state required for the consciousness to keep running, so Machine#1's consciousness should just transfer over to the second machine, right? Even though there's already a consciousness "in" Machine#2, i.e. we have a merging of identical consciousnesses.

Normally, we'd think that turning the hardware of one machine off would "wink out" the consciousness, but if there were another machine to "carry on the state", then we're positing that the consciousness from the first machine flips over to the second machine (i.e. we're saying transference is possible). But what if that second machine was already "full" with an existing consciousness, but happened to have the identical state at the right time, exactly as if the data had been copied over? In this example, we have two machines running two separate yet identical consciousnesses, and then we turn one off. Did we just murder one, or did it transfer/merge with the other?

Then we can ask what happens if we run the two machines out of sync. If we run Machine#1 one second ahead of identical-inputs Machine#2, then turn Machine#1 off, Machine#2's state will catch up with Machine#1's in one second, allowing Machine#1's consciousness to "continue" in Machine#2. Then we can ask: what if we stop the other machine instead, i.e. the one that's one second behind? Then the state also carries on on the other machine, but one second in the past instead of one second in the future. We can then ask whether consciousness is time-invariant, i.e. whether the sequence of states actually has to be chronological at all, as long as the pattern exists, no matter how far it is spread out in time or what order the states appear in, according to "our" time.

But then, what happens when you turn Machine#1 back on? We've already said Machine#1's consciousness "transferred over" to the new hardware, so is that now the same consciousness or not? The ability to put a consciousness into hardware and copy it over to new machines, splitting and merging identical consciousnesses at will, and having them do that in time-invariant fashion, is going to be a serious challenge not just to ethics, but to our entire concept of what "self" even means. We might not be worrying about AI human rights, but dealing with true existential terror if we could actually do this, and it's going to be much worse than worrying about how much life as a brain-jar would suck.

Some people ask why high-tech space societies don't visit us. Perhaps soon after our current level of development, we realize that the "self" doesn't even exist and is an illusion, and we all go insane.

Imagine a situation in which you have multiple conscious AIs, which are deterministic, and you rig their inputs so that they gradually approach identical states. When you do so, you can merely turn all the AIs off, copy the (single) state to a new machine, and logically that one is the continuation of all the consciousnesses, disproving that individual consciousnesses actually exist. If you want efficient storage of all uploaded humans, this is the way to do it: compact them down to one personhood.
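As a toy illustration of that compaction idea (every name and number here is invented for the example): feed several deterministic agents an input stream that pulls their internal states together, and once the states are identical there's only one state left to archive.

Code: [Select]
# Toy "compaction": several deterministic agents are nudged toward the
# same internal state; once the states are identical, one stored copy
# is a valid successor-state for all of them.
def update(state, signal):
    # Hypothetical contracting update: repeated identical signals pull
    # every agent's state toward the same value.
    return round(0.5 * state + 0.5 * signal, 6)

agents = [10.0, -3.0, 42.0]          # three different starting "minds"
rigged_input = 7.0                   # identical input stream for all

for _ in range(60):                  # run them long enough to converge
    agents = [update(a, rigged_input) for a in agents]

unique_states = set(agents)
print(agents)                        # [7.0, 7.0, 7.0]
print(len(unique_states))            # 1 -- one state to archive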
« Last Edit: May 26, 2018, 09:50:52 am by Reelya »
Logged

KittyTac

  • Bay Watcher
  • Impending Catsplosion. [PREFSTRING:aloofness]
    • View Profile
Re: Things that made you absolutely terrified today
« Reply #18796 on: May 26, 2018, 10:22:12 am »

Sentience is fuzzy, actually, because free will is not actually a thing, just probabilistic thoughts.
If so, then it's quantifiable. Pick an arbitrary amount of probabilistic thought to define sentience/sapience for your simulation and you're set.
What amount of probabilistic thought? Thoughts are nothing special, just natural processes. What if in your simulation, a species evolves that does not use anything recognizable as a brain, but is what we would call sentient?
Logged
Don't trust this toaster that much, it could be a villain in disguise.
Mostly phone-posting, sorry for any typos or autocorrect hijinks.

smjjames

  • Bay Watcher
    • View Profile
Re: Things that made you absolutely terrified today
« Reply #18797 on: May 26, 2018, 10:28:52 am »

Sentience is fuzzy, actually, because free will is not actually a thing, just probabilistic thoughts.
If so, then it's quantifiable. Pick an arbitrary amount of probabilistic thought to define sentience/sapience for your simulation and you're set.
What amount of probabilistic thought? Thoughts are nothing special, just natural processes. What if in your simulation, a species evolves that does not use anything recognizable as a brain, but is what we would call sentient?

I think you meant sapient, not sentient.
Logged

KittyTac

  • Bay Watcher
  • Impending Catsplosion. [PREFSTRING:aloofness]
    • View Profile
Re: Things that made you absolutely terrified today
« Reply #18798 on: May 26, 2018, 11:03:11 am »

Sapient means wise. Sentient means... uh... sophisticated thoughts, I guess?
Logged
Don't trust this toaster that much, it could be a villain in disguise.
Mostly phone-posting, sorry for any typos or autocorrect hijinks.

Dunamisdeos

  • Bay Watcher
  • Duggin was the hero we needed.
    • View Profile
Re: Things that made you absolutely terrified today
« Reply #18799 on: May 26, 2018, 11:18:40 am »

Sapient is used in this context to refer to a being that is like unto humans, generally in an abstract sense such as capacity for thought. An AI that is self-aware could be called Sapient, if its thought processes could be compared to a human's.

Sentient simply means the ability to feel or perceive things. I think, therefore I am. A chimpanzee or dog could be considered Sentient, though this does not imply a specific level of intelligence.

Human-level intelligence, or something like it, is probably the most accurate way to describe a being that is of comparable mental capacity to ourselves. (But of course everyone knows exactly what Data means when he says he detects sentient life forms on the planet below, Captain. It's a technical distinction.)

« Last Edit: May 26, 2018, 11:20:13 am by Dunamisdeos »
Logged
FACT I: Post note art is best art.
FACT II: Dunamisdeos is a forum-certified wordsmith.
FACT III: "All life begins with Post-it notes and ends with Post-it notes. This is the truth! This is my belief!...At least for now."
FACT IV: SPEECHO THE TRUSTWORM IS YOUR FRIEND or BEHOLD: THE FRUIT ENGINE 3.0

Loud Whispers

  • Bay Watcher
  • They said we have to aim higher, so we dug deeper.
    • View Profile
    • I APPLAUD YOU SIRRAH
Re: Things that made you absolutely terrified today
« Reply #18800 on: May 26, 2018, 12:43:19 pm »

Thought I was dealing with aphids. Very satisfied with my efforts to eliminate them. I recently discovered they were root aphids, and 200 of them were emerging from the soil to the stems. Boiling water, boiling water everywhere to cleanse the corrupted Earth

IT IS THE ONLY WAY (until I find out where to buy neem oil)

Reelya

  • Bay Watcher
    • View Profile
Re: Things that made you absolutely terrified today
« Reply #18801 on: May 26, 2018, 01:27:53 pm »

Sentient simply means the ability to feel or perceive things.

Yep, the root of Sentient is the word "sense", or sensation. It denotes creatures capable of having sensory experience. For example, an AI "automaton" could be smart/Sapient, but not Sentient, since it has no inner consciousness.

My view is that sapience doesn't require sentience, but sentience evolved first because it's just a more efficient way to organize a living creature. It's impossible to hard-code rules for every eventuality that might come along, but you can make a simple sentience instead, then wire it up to signals such as hunger/pain/hot/cold along with some actuators, and the sentience is motivated to learn how the actions it can take help to control the sensory information coming in. Simple sentience plus some carrot & stick input wiring is thus a very efficient basic model for getting an organism to move around and do the stuff it needs to do: seek good feeling, avoid bad feeling. It's a nice, simple, standard program that can be applied to virtually endless inputs and outputs.
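Here's roughly what I mean by the carrot & stick wiring, as a toy sketch (the signal names, rewards and learning rate are all made up): the agent has no hard-coded rules about the world, it just repeats actions that were followed by good feelings and avoids ones that were followed by bad feelings.

Code: [Select]
# Toy "carrot & stick" agent: no hard-coded rules about the world,
# just "repeat what felt good, avoid what felt bad".
import random
random.seed(1)

ACTIONS = ["move_toward_warmth", "move_toward_cold", "stay_still"]

def feeling(action):
    # Hypothetical environment: warmth feels good, cold hurts.
    return {"move_toward_warmth": +1.0,
            "move_toward_cold": -1.0,
            "stay_still": -0.1}[action] + random.gauss(0, 0.2)

value = {a: 0.0 for a in ACTIONS}     # learned "how did this feel?"
for t in range(500):
    if random.random() < 0.1:         # occasionally try something new
        action = random.choice(ACTIONS)
    else:                             # otherwise seek the good feeling
        action = max(value, key=value.get)
    reward = feeling(action)
    value[action] += 0.1 * (reward - value[action])

print(max(value, key=value.get))      # almost certainly "move_toward_warmth"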
« Last Edit: May 26, 2018, 01:40:58 pm by Reelya »
Logged

IcyTea31

  • Bay Watcher
  • Studying functions and fiction
    • View Profile
Re: Things that made you absolutely terrified today
« Reply #18802 on: May 26, 2018, 02:11:11 pm »

What amount of probabilistic thought? Thoughts are nothing special, just natural processes. What if in your simulation, a species evolves that does not use anything recognizable as a brain, but is what we would call sentient?
That's like saying heat is nothing special, just a natural process that sometimes lights what we would recognize as a fire. What is sapience, if not greater than or equal to the sum of many thoughts? If there were such a brainless species, we'd need to change our definitions, as sentience/sapience as we know it requires either some sort of central information processing unit, whether it be a brain, a computer or a cell nucleus; or a distributed intelligence such as that of an ant colony.
Logged
There is a world yet only seen by physicists and magicians.

Kagus

  • Bay Watcher
  • Olive oil. Don't you?
    • View Profile
Re: Things that made you absolutely terrified today
« Reply #18803 on: May 26, 2018, 02:45:19 pm »

Kagus, I didn't know you were me. If we were the same person all along, you should have told me.
Shit, sorry man. I thought we knew.

KittyTac

  • Bay Watcher
  • Impending Catsplosion. [PREFSTRING:aloofness]
    • View Profile
Re: Things that made you absolutely terrified today
« Reply #18804 on: May 26, 2018, 10:20:23 pm »

What amount of probabilistic thought? Thoughts are nothing special, just natural processes. What if in your simulation, a species evolves that does not use anything recognizable as a brain, but is what we would call sentient?
That's like saying heat is nothing special, just a natural process that sometimes lights what we would recognize as a fire. What is sapience, if not greater than or equal to the sum of many thoughts? If there were such a brainless species, we'd need to change our definitions, as sentience/sapience as we know it requires either some sort of central information processing unit, whether it be a brain, a computer or a cell nucleus; or a distributed intelligence such as that of an ant colony.
Such a species is bound to arise in a simulation of a universe. :D
Logged
Don't trust this toaster that much, it could be a villain in disguise.
Mostly phone-posting, sorry for any typos or autocorrect hijinks.

smjjames

  • Bay Watcher
    • View Profile
Re: Things that made you absolutely terrified today
« Reply #18805 on: May 26, 2018, 10:42:23 pm »

What amount of probabilistic thought? Thoughts are nothing special, just natural processes. What if in your simulation, a species evolves that does not use anything recognizable as a brain, but is what we would call sentient?
That's like saying heat is nothing special, just a natural process that sometimes lights what we would recognize as a fire. What is sapience, if not greater than or equal to the sum of many thoughts? If there were such a brainless species, we'd need to change our definitions, as sentience/sapience as we know it requires either some sort of central information processing unit, whether it be a brain, a computer or a cell nucleus; or a distributed intelligence such as that of an ant colony.
Such a species is bound to arise in a simulation of a universe. :D

Actually, a hive mind in the truest sense (eusocial insects, common analogues for hive minds in sci-fi, aren’t actually hive minds as they’re still made of independently acting individuals) could fit that, or an organism with a distributed neural network.

The internet, if it ever achieves self-awareness, would also count, since you can't kill the entirety of it by killing a central node, because there isn't one. Though getting into planet-scale intelligences is going to be an entirely different thing.

Or maybe something like Sgt Schlock of  Schlock Mercenary, a carbosilicate amorph whose race evolved from data storage devices which escaped after their creator civilization collapsed.
Logged

Egan_BW

  • Bay Watcher
  • Normalcy is constructed, not absolute.
    • View Profile
Re: Things that made you absolutely terrified today
« Reply #18806 on: May 26, 2018, 11:05:23 pm »

If I ever became immortal, I'd want the end state of my brain to be something like a carbosilicate amorph. I'd be able to expand my brain just by eating things, survive large parts of it dying, and couldn't be reduced to a helpless brain-in-a-jar, because said brain can break out of that jar.
...Having a good innate understanding of my own brain, and said brain being able to fight things, is the best way I can think of to avoid being human-repositoried.
Logged

Bumber

  • Bay Watcher
  • REMOVE KOBOLD
    • View Profile
Re: Things that made you absolutely terrified today
« Reply #18807 on: May 27, 2018, 12:45:29 am »

[...] if you're running a simulation on Machine#1, which is conscious, and stop the simulation at a point "X", then copy the data to Machine#2, then continue running it, we'd say the consciousness "carried over" from the old hardware to the new hardware, right? [...]
Who's we? :P

This is basically the teletransportation paradox.
Logged
Reading his name would trigger it. Thinking of him would trigger it. No other circumstances would trigger it- it was strictly related to the concept of Bill Clinton entering the conscious mind.

THE xTROLL FUR SOCKx RUSE WAS A........... DISTACTION        the carp HAVE the wagon

A wizard has turned you into a wagon. This was inevitable (Y/y)?

KittyTac

  • Bay Watcher
  • Impending Catsplosion. [PREFSTRING:aloofness]
    • View Profile
Re: Things that made you absolutely terrified today
« Reply #18808 on: May 27, 2018, 02:42:02 am »

[...] if you're running a simulation on Machine#1, which is conscious, and stop the simulation at a point "X", then copy the data to Machine#2, then continue running it, we'd say the consciousness "carried over" from the old hardware to the new hardware, right? [...]
Who's we? :P

This is basically the teletransportation paradox.
Yup, you get two clones.
Logged
Don't trust this toaster that much, it could be a villain in disguise.
Mostly phone-posting, sorry for any typos or autocorrect hijinks.

Reelya

  • Bay Watcher
    • View Profile
Re: Things that made you absolutely terrified today
« Reply #18809 on: May 27, 2018, 03:21:00 am »

I think it's a little deeper than that. If you put new RAM chips into your brain-CPU, would that still be "you" when the data was reloaded into the RAM? If so, is it still "you" when the same exact RAM chips are put into a new machine? If "you" were in the RAM, and that RAM's extracted and put into a new PC, then that should still be you. If copying that RAM to new RAM makes it "not you", then we have a fundamental problem, because the way you'd compute the next time-slice in the first place is by taking the current state, which is in one part of RAM, computing the next time-slice, storing it in a different part of RAM, erasing the old RAM, and repeating. Since you're basically being constantly copied, updated and having the old state erased, that's effectively no different to the teleportation paradox, except it's happening constantly.
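A toy sketch of that constant copying (details invented for illustration): the "current you" lives in one buffer, the next time-slice gets computed into the other buffer, then the old one is wiped and the roles swap.

Code: [Select]
# Toy double-buffered "mind": the current time-slice lives in one buffer,
# the next time-slice is computed into the other buffer, then the old one
# is erased and the roles swap.  The "same" consciousness is already being
# copied between memory regions on every single tick.
def next_slice(state, tick):
    # Hypothetical update rule for the example.
    return state + [f"thought@{tick}"]

buffers = [["born"], None]            # [current, scratch]
current = 0
for tick in range(3):
    nxt = 1 - current
    buffers[nxt] = next_slice(buffers[current], tick)  # copy-and-update
    buffers[current] = None                            # erase the old state
    current = nxt

print(buffers[current])  # ['born', 'thought@0', 'thought@1', 'thought@2']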

"clones" implies that the two machines both "carry" a consciousness that's separate to the other machine, but you can easily set up paradoxes where it's not clear whether that's a valid interpretation. e.g. if your consciousness is entirely embodied in the "current state" of the machine, and the machine's operation is defined as creating successive time-slices that are separate, it shouldn't matter which actual CPU the next time-slice is processed on.

Also, if you turn off the PC for one second, then turn it back on, there's a discontinuity, but it's hard to see how that would be a different type of discontinuity from turning the PC off, copying the state to a different PC, running for one second, then copying it back to the original PC to continue. Would there be a "gap" in the actual consciousness of the first machine, while the second machine was conscious for one second and then "died"? There would certainly be no perceptual difference from the point of view of the sentient AI: that AI would recall being fully conscious in Machine#1, then being in Machine#2, then being in Machine#1 again. Turning the machine on and off would be similar: we can't prove it's the "same" consciousness before and after. But I guess we can't prove we don't "die" every time we go to sleep either. Maybe that's what will drive us insane: realizing "we" only exist for a fraction of a second, and then there's a new "me" to carry on the chain of memories.

As for merging, consider this: you have two AIs running on one machine, each of which writes its next state into some buffer memory, then frees up its old memory and swaps the buffers. We should see these as two unique consciousnesses. However, you set it up so that the states converge (using known neural network tricks). Then, at the last step, you have each AI write its now-identical next state into the same buffer location, so that one chunk of RAM is the successor state to both AIs. Since that one memory buffer is the successor to both AIs, you only need to run one copy of it on the CPU, and it's a valid successor to the consciousnesses of both original AIs.
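Same toy model, but with two AIs this time (everything here is invented for illustration): once their computed next states are bit-identical, a single shared buffer is a valid successor to both, and only one copy needs to keep running.

Code: [Select]
# Toy merge: two double-buffered AIs are driven toward identical states;
# when their computed next states match, both "write" into one shared
# buffer, and from then on a single running copy succeeds both of them.
def next_state(state, signal):
    # Hypothetical contracting update so the two states converge.
    return round(0.9 * signal + 0.1 * state, 6)

ai_a, ai_b = 5.0, -11.0               # two distinct consciousnesses
shared_input = 3.0                    # rigged identical input stream

merged = None
for tick in range(200):
    nxt_a = next_state(ai_a, shared_input)
    nxt_b = next_state(ai_b, shared_input)
    if nxt_a == nxt_b:                # next states have converged exactly
        merged = nxt_a                # one buffer is the successor of both
        break
    ai_a, ai_b = nxt_a, nxt_b         # otherwise keep two separate buffers

print(tick, merged)                   # only one state left to keep running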
« Last Edit: May 27, 2018, 03:49:24 am by Reelya »
Logged