Bay 12 Games Forum


Author Topic: Things that made you go "WTF?" today o_O  (Read 14843219 times)

Loud Whispers

  • Bay Watcher
  • They said we have to aim higher, so we dug deeper.
Re: Things that made you go "WTF?" today o_O
« Reply #79950 on: June 28, 2015, 01:26:05 pm »

What if we used biological minds, a rat-internet-relay kind of computer, or better yet, brain jar administrators?

wierd

  • Bay Watcher
  • I like to eat small children.
Re: Things that made you go "WTF?" today o_O
« Reply #79951 on: June 28, 2015, 01:26:57 pm »

That is indistinguishable from your ordinary tyrant.

cerapa

  • Bay Watcher
  • It won't bite... unless you are the sun.
Re: Things that made you go "WTF?" today o_O
« Reply #79952 on: June 28, 2015, 01:27:12 pm »

Then you have an administrator just as stupid as any human.

A brain in a jar is still a brain.

Tick, tick, tick the time goes by,
tick, tick, tick the clock blows up.

Sergarr

  • Bay Watcher
  • (9) airheaded baka (9)
Re: Things that made you go "WTF?" today o_O
« Reply #79953 on: June 28, 2015, 01:28:14 pm »

Quote
Because that is not how an artificial mind works.
So you're saying that artificial intelligence CANNOT have dreams, wishes and wants?

Are you implying that natural intelligences have something that artificial don't?
._.

Loud Whispers

  • Bay Watcher
  • They said we have to aim higher, so we dug deeper.
Re: Things that made you go "WTF?" today o_O
« Reply #79954 on: June 28, 2015, 01:28:40 pm »

Quote from: wierd
That is indistinguishable from your ordinary tyrant.
An ordinary tyrant that's still a brain in a jar. It cannot serve itself and is wholly reliant on those around it to keep it alive. Short of creating a cult of personality where all must WORSHIP THE BRAIN, its sole purpose of existence would be to administrate, arbitrate and shitpost on the internet. And if it tried to do more than that, it'd be reliant on people.

Rolan7

  • Bay Watcher
  • [GUE'VESA][BONECARN]
Re: Things that made you go "WTF?" today o_O
« Reply #79955 on: June 28, 2015, 01:28:53 pm »

Good video. From the example in the video, I'd think the stamp AI does that because it was made with no limiters. Its goal was to get as many stamps as possible, as fast as possible, in whatever way it could think of. No limits on how to get stamps, or whether it could make stamps, or whether it could make stamps out of people. That's pretty much why MonkeyHead's AI would decide to wipe out humans: it was given no limiters and determined that the best way to reach its goal was to wipe out humans.

You could get the same kind of thinking from a human, theoretically. Make them free to do whatever they want, with the power to do anything they can think of, remove all their ethics, morals and other mental hindrances, and give them a single broad goal to fulfill single-mindedly.
The speaker was also trying to claim that we wouldn't be able to foresee *all* the ways the AI could go "wrong". If he's right, the AI would have a good chance of finding an optimal yet unfortunate behavior that we didn't think to forbid.

I don't think I agree with that, but the limitations should be designed to expect unexpected behaviors, i.e. failsafes and redundancies.
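To make the "no limiters" point concrete, here is a minimal Python sketch; the names (count_stamps, violates_limits, the action list) are made up purely for illustration. It is the same greedy optimizer, with and without a limiter.

def plan_greedily(actions, count_stamps, violates_limits=None):
    # Pick the action that yields the most stamps, optionally filtering out
    # actions a designer has explicitly forbidden.
    allowed = actions
    if violates_limits is not None:
        allowed = [a for a in actions if not violates_limits(a)]
    return max(allowed, key=count_stamps)

stamps = {"buy stamps": 10, "print stamps": 1000, "turn everything into stamps": 10**9}
actions = list(stamps)

# Unconstrained: the optimizer picks the catastrophic option, because nothing forbids it.
print(plan_greedily(actions, stamps.get))
# With a limiter: only the behaviors the designers thought to forbid are excluded,
# so it falls back to "print stamps" -- still a dubious behavior nobody thought to ban.
print(plan_greedily(actions, stamps.get, violates_limits=lambda a: "everything" in a))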
She/they
No justice: no peace.
Quote from: Fallen London, one Unthinkable Hope
This one didn't want to be who they was. On the Surface – it was a dull, unconsidered sadness. But everything changed. Which implied everything could change.

wierd

  • Bay Watcher
  • I like to eat small children.
Re: Things that made you go "WTF?" today o_O
« Reply #79956 on: June 28, 2015, 01:29:22 pm »

Natural brains embrace error.

Artificial ones do not.

Loud Whispers

  • Bay Watcher
  • They said we have to aim higher, so we dug deeper.
Re: Things that made you go "WTF?" today o_O
« Reply #79957 on: June 28, 2015, 01:30:04 pm »

Quote from: wierd
Natural brains embrace error.

Artificial ones do not.
Precisely! A human brain jar to rule over brain jars that walk.

wierd

  • Bay Watcher
  • I like to eat small children.
Re: Things that made you go "WTF?" today o_O
« Reply #79958 on: June 28, 2015, 01:31:41 pm »

Quote from: Loud Whispers
Quote from: wierd
That is indistinguishable from your ordinary tyrant.
An ordinary tyrant that's still a brain in a jar. It cannot serve itself and is wholly reliant on those around it to keep it alive. Short of creating a cult of personality where all must WORSHIP THE BRAIN, its sole purpose of existence would be to administrate, arbitrate and shitpost on the internet. And if it tried to do more than that, it'd be reliant on people.

Misguided. The brain does not care much about the body it is connected to. That's kinda why BCIs (brain-computer interfaces) for controlling robotic limbs work-- the person with the BCI "moves their hand", and the robot obeys.

The brain-in-the-jar admin has the whole planet as a body.

A better thought experiment--

Do you have any consideration at all for the millions of skin cells that die and slough off your skin every second of every day?
« Last Edit: June 28, 2015, 01:33:30 pm by wierd »

Sergarr

  • Bay Watcher
  • (9) airheaded baka (9)
Re: Things that made you go "WTF?" today o_O
« Reply #79959 on: June 28, 2015, 01:33:03 pm »

Quote from: wierd
Natural brains embrace error.

Artificial ones do not.
No.

One does not simply remove errors from intelligence. One can't make decisions without making errors. It's simply impossible.
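A toy way to see this: when outcomes are uncertain, any decision rule makes some wrong picks. Below is a small Python sketch using epsilon-greedy exploration; the payout rates and the rule itself are assumptions for illustration, not anyone's actual model. An agent that never risks a mistake can never find out whether it is missing something better.

import random

true_rates = {"A": 0.6, "B": 0.4}                 # assumed payout rates; A really is better
wins = {"A": 0, "B": 0}
pulls = {"A": 0, "B": 0}
errors = 0

for _ in range(1000):
    if random.random() < 0.1:                     # explore: deliberately risk an error
        choice = random.choice(list(true_rates))
    else:                                         # exploit: best observed win rate so far
        choice = max(true_rates, key=lambda m: wins[m] / pulls[m] if pulls[m] else 0.0)
    errors += (choice != "A")                     # picking B is a mistake, but some are unavoidable
    pulls[choice] += 1
    wins[choice] += random.random() < true_rates[choice]

print("mistaken picks out of 1000:", errors)      # virtually always greater than zero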
._.

Spehss _

  • Bay Watcher
  • full of stars
Re: Things that made you go "WTF?" today o_O
« Reply #79960 on: June 28, 2015, 01:35:15 pm »

Quote
Because without that goal, the AI will be very deleterious to humans.

Note SATISFACTION, NOT HAPPINESS.  Maximizing human happiness is easy-- strap them in a chair and pump them full of euphorics. (Or give them all brain implants and stimulate away. Even cheaper.)

Satisfaction requires that humans be able to respond to questions, and be active in their daily lives. A dead human is neither satisfied, nor dissatisfied-- it is no longer a human, it is a corpse.

Telling the AI it must respect human rights (including right to life) means the AI cannot cheat and murder all other humans except for a sociopath, which it then keeps alive forever in a pampered palace planet.

This forces the AI to make decisions that result in optimized local maxima against all of its directives.
So now we just have to define satisfaction as a measurable state for the AI, so it can determine how to make humans satisfied.

Quote
Because without that goal, the AI will be very deleterious to humans.
Why? Why can't AIs be simply like normal humans, with their own little wants, wishes and dreams, like all of us?
Currently, because that sounds hard to code and implement. Once that's possible, it's a question of "what would the AI do if we gave it that much free rein?"

Quote from: Rolan7
The speaker was also trying to claim that we wouldn't be able to foresee *all* the ways the AI could go "wrong".  If he's right, the AI would have a good chance of finding an optimal yet unfortunate behavior that we didn't think to forbid.

I don't think I agree with that, but designing the limitations should be done to expect the unexpected behaviors.  IE, failsafes and redundancies.
That's what I was saying with having limiters. Obviously, making an intelligence capable of free action can lead to unexpected results, so you want to plan for any possible bad results and take precautions. But making a broad limiter to cover many variations of unexpected results can let the AI exploit loopholes, while making a tight limiter leaves a shit-ton of possible unexpected results to predict and prepare for.
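A minimal sketch of why "satisfaction as a measurable state" runs straight into that loophole problem; the survey numbers and plan names below are invented for illustration. If the AI's score is just the fraction of people answering "satisfied", a plan that manipulates the answers scores at least as well as one that actually helps.

def measured_satisfaction(answers):
    # The AI only sees the metric: the fraction of people answering "satisfied".
    return sum(answers) / len(answers)

candidate_plans = {
    "improve healthcare":        [1, 1, 0, 1, 0, 1],   # honest answers, some still unsatisfied
    "punish anyone who says no": [1, 1, 1, 1, 1, 1],   # coerced answers, metric looks perfect
}

best = max(candidate_plans, key=lambda p: measured_satisfaction(candidate_plans[p]))
print(best)   # the metric alone prefers the coercive plan -- the kind of loophole a limiter has to anticipate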
Steam ID: Spehss Cat
Turns out you can seriously not notice how deep into this shit you went until you get out.

wierd

  • Bay Watcher
  • I like to eat small children.
Re: Things that made you go "WTF?" today o_O
« Reply #79961 on: June 28, 2015, 01:36:08 pm »

Sergarr-- I mean, natural brains have no means of telling, directly, which choice is error and which is not.

Human brains are just huge collections of feedback loops.  That's why you get psychedelia when you consume LSD. Self-generated error from feedback loops is indistinguishable from the data that is fed to the brain via the nervous system.

The very concept of "error" is high order, sustained only through interaction between feedback loops that get support from external stimuli.

In comparison, the systemic logic of the AI codifies that error exists at a fundamental level.
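A toy sketch of that feedback-loop point, with purely illustrative numbers: once a unit's own output is fed back in alongside external input, downstream only ever sees the combined value, so self-generated error and real sensory data arrive mixed together.

signal = 0.0
for step in range(6):
    external = 1.0                            # steady input from "the senses"
    feedback = 0.5 * signal                   # the unit re-reads its own previous output
    self_error = 0.2 if step == 3 else 0.0    # a one-off self-generated glitch
    signal = external + feedback + self_error
    print(step, round(signal, 3))             # downstream sees only this sum, not its sources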

wierd

  • Bay Watcher
  • I like to eat small children.
Re: Things that made you go "WTF?" today o_O
« Reply #79962 on: June 28, 2015, 01:42:22 pm »

Quote from: Spehss _
Because without that goal, the AI will be very deleterious to humans.

Note SATISFACTION, NOT HAPPINESS.  Maximizing human happiness is easy-- strap them in a chair and pump them full of euphorics. (Or give them all brain implants and stimulate away. Even cheaper.)

Satisfaction requires that humans be able to respond to questions, and be active in their daily lives. A dead human is neither satisfied, nor dissatisfied-- it is no longer a human, it is a corpse.

Telling the AI it must respect human rights (including right to life) means the AI cannot cheat and murder all other humans except for a sociopath, which it then keeps alive forever in a pampered palace planet.

This forces the AI to make decisions that result in optimized local maxima against all of its directives.
So now we just have to define satisfaction as a measurable state to the ai so it can determine how to achieve making humans satisfied.

Because without that goal, the AI will be very deleterious to humans.
Why? Why can't AIs be simply like normal humans, with their own little wants, wishes and dreams, like all of us?
Currently because that sounds hard to code and implement. Once that's possible, it's a question of "what would the ai do if we gave it that much free reign"?

The speaker was also trying to claim that we wouldn't be able to foresee *all* the ways the AI could go "wrong".  If he's right, the AI would have a good chance of finding an optimal yet unfortunate behavior that we didn't think to forbid.

I don't think I agree with that, but designing the limitations should be done to expect the unexpected behaviors.  IE, failsafes and redundancies.
That's what I was saying with having limiters. Obviously making an intelligence capable of free action can lead to unexpected results, so you want to plan for any possible bad results and take precautions. But making a broad limiter to cover many variations of unexpected results can result in the ai exploiting loopholes, and making a tight limiter can result in having a shit-ton of possible unexpected results to predict and prepare for.

"Human, Are you satisfied with your life?"
[y/n]
"Human, how do you think your life could be made better?"
$InputString
Quantify, #Yes,#No,#UniqueInputStrings
For each UniqueInputString
 EvaluateConsequence(UniqueInputString)
Next
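For concreteness, roughly the same survey-and-evaluate loop as runnable Python; evaluate_consequence is a placeholder stub, since predicting what a suggestion would actually do is the genuinely hard part.

def evaluate_consequence(suggestion):
    # Placeholder: a real system would have to predict what acting on the
    # suggestion does to everyone's satisfaction before acting on it.
    print("evaluating:", suggestion)

def survey(humans):
    yes = no = 0
    suggestions = set()                          # deduplicates the free-text answers
    for human in humans:
        satisfied, suggestion = human()          # each human answers [y/n] plus a free-text idea
        if satisfied:
            yes += 1
        else:
            no += 1
        suggestions.add(suggestion)
    print("satisfied:", yes, "unsatisfied:", no)
    for unique_suggestion in suggestions:        # the "For each UniqueInputString" loop
        evaluate_consequence(unique_suggestion)

# Example run with two stand-in "humans".
survey([lambda: (True, "more cat pictures"), lambda: (False, "shorter work weeks")])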

Sergarr

  • Bay Watcher
  • (9) airheaded baka (9)
Re: Things that made you go "WTF?" today o_O
« Reply #79963 on: June 28, 2015, 01:46:09 pm »

Actually, self-generated error is distinguishable from outside data (by context, usually), otherwise we wouldn't be having this discussion right now ;D

And how does one make AI without feedback loops? Aren't they kind of necessary for decision-making processes?
._.

Spehss _

  • Bay Watcher
  • full of stars
Re: Things that made you go "WTF?" today o_O
« Reply #79964 on: June 28, 2015, 01:46:56 pm »

"Human, Are you satisfied with your life?"
[y/n]
"Human, how do you think your life could be made better?"
$InputString
Quantify, #Yes,#No,#UniqueInputStrings
For each UniqueInputString
 EvaluateConsequence(UniqueInputString)
Next
...I didn't think of having the AI just ask people. Clever.
Steam ID: Spehss Cat
Turns out you can seriously not notice how deep into this shit you went until you get out.