Bay 12 Games Forum


Author Topic: AI risk re-re-revisited  (Read 18979 times)

SirQuiamus

  • Bay Watcher
  • Keine Experimente!
AI risk re-re-revisited
« on: September 23, 2016, 08:04:16 am »

I'm pretty sure y'all know what AI risk is supposed to be, so I'm not going to waste time on introductions. I was prompted to start yet another thread on this subject because Scott is doing a pretty interesting experiment to figure out how to effectively persuade non-believers of the reality of AI risk, which is to say that the results are going to have obvious relevance to the interests of people on both sides of the debate.

Quote
I’ve been trying to write a persuasive essay about AI risk, but there are already a lot of those out there and I realize I should see if any of them are better before pushing mine. This also ties into a general interest in knowing to what degree persuasive essays really work and whether we can measure that.

So if you have time, I’d appreciate it if you did an experiment. You’ll be asked to read somebody’s essay explaining AI risk and answer some questions about it. Note that some of these essays might be long, but you don’t have to read the whole thing (whether it can hold your attention so that you don’t stop reading is part of what makes a persuasive essay good, so feel free to put it down if you feel like it).

Everyone is welcome to participate in this, especially people who don’t know anything about AI risk and especially especially people who think it’s stupid or don’t care about it.

I want to try doing this two different ways, so:

If your surname starts with A – M, try the first version of the experiment here at https://goo.gl/forms/8quRVmYNmDKAEsvS2

If your surname starts with N – Z, try the second version at https://goo.gl/forms/FznD6Bm51oP7rqB82

Thanks to anyone willing to put in the time.

As someone who (still) thinks that AI risk is complete bunk, I really wanna see where this leads.
« Last Edit: September 26, 2016, 09:10:40 am by SirQuiamus »

Criptfeind

  • Bay Watcher
Re: AI risk re-re-revisited [Participate in the survey!]
« Reply #1 on: September 23, 2016, 09:19:25 am »

I got a placebo essay I guess, since it had nothing to do with AI.

Flying Dice

  • Bay Watcher
  • inveterate shitposter
Re: AI risk re-re-revisited [Participate in the survey!]
« Reply #2 on: September 23, 2016, 10:19:57 am »

Holy shit but that is a bad survey. The bias is real.

It opens reasonably, by asking whether we think AI risk is worth studying -- I agree that it is, because it's a potential extinction-level event if an AI with sufficient power acts against our interests. But then the fucker concludes with a bunch of leading questions which all prejudice participants towards viewing AI as a threat.

Not to mention that the essays, even ones unrelated to AI risk, are pretty shit themselves.

Here's one:
Spoiler (click to show/hide)
The author moved the goal-posts on the initial premise from "it is easy to imagine simulating civilizations" to "it is easy to simulate civilizations" and apparently doesn't expect the audience to notice. He's changed the initial assumption from a reasonable and provable one to one which has been designed to be contradictory.

Jesus, I hope none of these people are actually employed in the hard sciences, they've got a weaker grasp on experimental design, objectivity, and logic than I do, and I took a degree in Liberal fucking Arts. Like what the hell, they're not even trying to pretend that they're acting in good faith.

Max™

  • Bay Watcher
  • [CULL:SQUARE]
Re: AI risk re-re-revisited [Participate in the survey!]
« Reply #3 on: September 23, 2016, 12:04:55 pm »

This is one of those things that I've been aware of for a long time, though it doesn't discuss weird routes like the one in Accelerando: the AI there isn't actively murdering people to build a computronium shell around the skyfire, it's just kinda pushing them aside, professional courtesy if you will.

That world is still pretty damn magical and amazing by modern standards, and definitely preferable to many options... but it isn't my first choice.

It sounds super goddamn sappy, like goddammit I am mad at myself for what I am about to type, but I really hope they teach any human-level AI to love.

I'd much rather be the ward of a Mind than a matrioshka brain hungry for resources.
Logged

Gigaz

  • Bay Watcher
Re: AI risk re-re-revisited [Participate in the survey!]
« Reply #4 on: September 23, 2016, 12:40:27 pm »

I got a part of the Wait But Why piece about superintelligence.

I think the thing that bothers me most about the discussion is how easily people discard the distinction between intelligence and knowledge.
As long as man had intelligence, but hardly any culturally inherited knowledge, he wasn't significantly different from an animal.

All the scenarios where a superintelligent AI kills all humans usually require that it finds/knows a way to do this without significant effort. But how realistic is that? Driving a specific pest species extinct is incredibly difficult for humans and requires a giant effort. Research is definitely slow and tedious even for a superintelligent AI, because AI cannot change the scaling laws of common algorithms. The world is at least partly chaotic, which means that prediction becomes exponentially harder with time. There is nothing an AI can do about that.
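The "exponentially harder with time" point is easy to see with a toy chaotic system. A minimal sketch (my own illustration using the standard logistic map, not anything from the linked essays): two starting states that differ by one part in a trillion disagree completely within a few dozen steps, even though the rule is simple, deterministic, and fully known.

Code:
# Chaotic divergence in the logistic map x -> r*x*(1-x) with r = 4.0.
# Two almost identical initial conditions separate at a roughly
# exponential rate, so long-range prediction needs impossibly precise
# knowledge of the starting state.

def logistic_trajectory(x0, steps, r=4.0):
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1.0 - xs[-1]))
    return xs

a = logistic_trajectory(0.400000000000, 60)
b = logistic_trajectory(0.400000000001, 60)   # perturbed by 1e-12

for t in (0, 10, 20, 30, 40, 50):
    print(f"t={t:2d}  |a-b| = {abs(a[t] - b[t]):.3e}")
# The gap grows from ~1e-12 to order 1 by around t = 40, even though
# the dynamics themselves contain no randomness at all.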

SirQuiamus

  • Bay Watcher
  • Keine Experimente!
Re: AI risk re-re-revisited [Participate in the survey!]
« Reply #5 on: September 23, 2016, 01:25:18 pm »

Quote from: Flying Dice
Holy shit but that is a bad survey. The bias is real.

It opens reasonably, by asking whether we think AI risk is worth studying -- I agree that it is, because it's a potential extinction-level event if an AI with sufficient power acts against our interests. But then the fucker concludes with a bunch of leading questions which all prejudice participants towards viewing AI as a threat.

Not to mention that the essays, even ones unrelated to AI risk, are pretty shit themselves.
Each essay is pretty dreadful in its own special way, but at least the test has the potential to prove it to the authors of said essays---if enough people outside the LessWrong filter bubble take the survey, that is.

Quote from: Flying Dice
Here's one:
Spoiler (click to show/hide)
The author moved the goal-posts on the initial premise from "it is easy to imagine simulating civilizations" to "it is easy to simulate civilizations" and apparently doesn't expect the audience to notice. He's changed the initial assumption from a reasonable and provable one to one which has been designed to be contradictory.
It's a stupid and intellectually dishonest argument, but note that it's formally identical to the original one used by Bostrom, Yudkowsky, and others. You know, the argument that was apparently good enough to convince super-genius Elon Musk of the unreality of our reality.

Quote from: Flying Dice
Jesus, I hope none of these people are actually employed in the hard sciences, they've got a weaker grasp on experimental design, objectivity, and logic than I do, and I took a degree in Liberal fucking Arts. Like what the hell, they're not even trying to pretend that they're acting in good faith.
Nah, they're not scientists in any real sense of the word: Big Yud is an autodidact Wunderkind whereas Scott is a doctor, and Bostrom and his colleagues are just generic hacks who have found a fertile niche in the transhumanist scene. I'm not sure all of them are always acting in bad faith, though: when you spend enough time within an insular subculture, you'll genuinely lose the ability to tell what makes a valid argument in the outside world.

E:
Spoiler: Related (click to show/hide)
« Last Edit: September 23, 2016, 01:33:11 pm by SirQuiamus »

MetalSlimeHunt

  • Bay Watcher
  • Gerrymander Commander
Re: AI risk re-re-revisited [Participate in the survey!]
« Reply #6 on: September 23, 2016, 01:31:11 pm »

I like Slate Star Codex, and I am a notorious transhumanist. But this? This is dumb. We could and should be doing so many better things like not all dying in the climate crisis and maybe trying to actually get a funded transhuman project off the ground instead of trying to invent AI Jesus to just give it to us. Or in this case, having abstract silly vacuous arguments about what we can do to keep AI Jesus from sending us to AI Hell.

God, I cannot stand singularitarians. The very idea runs afoul of every conception of computer science and biology and psychology.

~If anybody disagrees with this post I will unleash Roko's Basilisk on the thread; please precommit to obeying me as your ruler~
« Last Edit: September 23, 2016, 01:32:44 pm by MetalSlimeHunt »

SirQuiamus

  • Bay Watcher
  • Keine Experimente!
Re: AI risk re-re-revisited [Participate in the survey!]
« Reply #7 on: September 23, 2016, 01:48:13 pm »

Quote
As for Bostrom being a hack, I'd like you to actually explore the argument that he made in his book (rather than just some essay loosely based on his books) to come to that conclusion. That essay is not particularly representative of Bostrom's ideas.
I swear to the robot-gods that one day I'll read Superintelligence from cover to cover. I've already tried a few times, but it's so riddled with goalpost-shifting shenanigans of the above type that I'm always overcome with RAEG before I get to page 5.

Cthulhu

  • Bay Watcher
  • A squid
Re: AI risk re-re-revisited [Participate in the survey!]
« Reply #8 on: September 23, 2016, 01:55:52 pm »

Quote from: SirQuiamus
Quote from: Flying Dice
Holy shit but that is a bad survey. The bias is real.

It opens reasonably, by asking whether we think AI risk is worth studying -- I agree that it is, because it's a potential extinction-level event if an AI with sufficient power acts against our interests. But then the fucker concludes with a bunch of leading questions which all prejudice participants towards viewing AI as a threat.

Not to mention that the essays, even ones unrelated to AI risk, are pretty shit themselves.
Each essay is pretty dreadful in its own special way, but at least the test has the potential to prove it to the authors of said essays---if enough people outside the LessWrong filter bubble take the survey, that is.

Here's one:
Spoiler (click to show/hide)
The author moved the goal-posts on the initial premise from "it is easy to imagine simulating civilizations" to "it is easy to simulate civilizations" and apparently doesn't expect the audience to notice. He's changed the initial assumption from a reasonable and provable one to one which has been designed to be contradictory.
It's a stupid and intellectually dishonest argument, but note that it's formally identical to the original one used by Bostrom, Yudkowsky, and others. You know, the argument that was apparently good enough to convince super-genius Elon Musk of the unreality of our reality.

Jesus, I hope none of these people are actually employed in the hard sciences, they've got a weaker grasp on experimental design, objectivity, and logic than I do, and I took a degree in Liberal fucking Arts. Like what the hell, they're not even trying to pretend that they're acting in good faith.
Nah, they're not scientists in any real sense of the word: Big Yud is an autodidact Wunderkind whereas Scott is a doctor, and Bostrom and his colleagues are just generic hacks who have found a fertile niche in the transhumanist scene. I'm not sure all of them are always acting in bad faith, though: when you spend enough time within an insular subculture, you'll genuinely lose the ability to tell what makes a valid argument in the outside world.

E:
Spoiler: Related (click to show/hide)

Good thing Elon Musk isn't the be-all end-all of intellect then, apparently, because looking at that I'm pretty sure it's just a word salad of the thousand-year-old ontological argument.

The only thing I know about the lesswrongosphere is that a bunch of them got convinced they were in robot hell unless they gave all their money to Elon Musk.
« Last Edit: September 23, 2016, 02:02:10 pm by Cthulhu »

MetalSlimeHunt

  • Bay Watcher
  • Gerrymander Commander
Re: AI risk re-re-revisited [Participate in the survey!]
« Reply #9 on: September 23, 2016, 02:02:20 pm »

Quote
As for the idea of "An AI would be able to predict every action of people blah blah blah". To be honest, I thought this to be true for a while. Putting it in a mathematical context, though, it's fundamentally impossible, assuming that the decision function of the human brain is a chaotic function, i.e. varies wildly for very close inputs (even if they appear close over small enough metrics of time close to 0), topologically mixes (covers every single possibility over a long enough period of time regardless of input) and has dense periodic orbits (for particular inputs the system might be predictable, but not everywhere). This system is, by its construction, impossible to mathematically model for all given inputs. No AI could "simulate" this system.

Note that this does not mean that an AI can never exist as a chaotic system. It just means that the AI cannot approximate any chaotic system sufficiently far in time, including itself.
A fun thing is that in the post-singularity setting Orion's Arm, the inability of AI to do this is considered one of their few absolute limits within the canon, alongside violating c and reversing entropy. It's worth noting that some of the AI in Orion's Arm are at a level where they are literally worshiped as gods even by people who understand what they are, on the basis that they fit the theological conception closer than any other demonstrable being.

This also resulted in some interesting writing such as deductive telepathy, which human-level intelligences and friendly AI play as a game, the latter trying to determine the inner thoughts of the former without any kind of access to their mind.

The less friendly version of this is baroquification, which is baselines intentionally becoming irrational actors in order to confuse superintelligences.

Harry Baldman

  • Bay Watcher
  • What do I care for your suffering?
Re: AI risk re-re-revisited [Participate in the survey!]
« Reply #10 on: September 23, 2016, 02:06:21 pm »

The survey is kind of shit, since of course you'll exercise critical thinking (it being a skill you at least pretend to have if you're on the Internet) if told you're participating in a survey where you're supposed to exercise critical thinking about a matter you're told about in advance.

UNLESS IT'S A DIVERSION, in which case good show, only occurred to me just now. Though what it could be testing in that case, and what the other test groups are, will sadly remain an unfortunate mystery.

Quote from: Cthulhu
Good thing Elon Musk isn't the be-all end-all of intellect then, apparently, because looking at that I'm pretty sure it's just a word salad of the thousand-year-old ontological argument.

The only thing I know about the lesswrongosphere is that a bunch of them got convinced they were in robot hell unless they gave all their money to Elon Musk.

Actually, yeah. It does look a lot like the ontological argument. Very easily dismissed as complete nonsense, but it takes a little bit of doing and mental exercise to put into words why it's nonsense.
« Last Edit: September 23, 2016, 02:44:00 pm by Harry Baldman »

Sergarr

  • Bay Watcher
  • (9) airheaded baka (9)
Re: AI risk re-re-revisited [Participate in the survey!]
« Reply #11 on: September 23, 2016, 02:15:34 pm »

Quote from: Gigaz
I got a part of the Wait But Why piece about superintelligence.

I think the thing that bothers me most about the discussion is how easily people discard the distinction between intelligence and knowledge.
As long as man had intelligence, but hardly any culturally inherited knowledge, he wasn't significantly different from an animal.

All the scenarios where a superintelligent AI kills all humans usually require that it finds/knows a way to do this without significant effort. But how realistic is that? Driving a specific pest species extinct is incredibly difficult for humans and requires a giant effort. Research is definitely slow and tedious even for a superintelligent AI, because AI cannot change the scaling laws of common algorithms. The world is at least partly chaotic, which means that prediction becomes exponentially harder with time. There is nothing an AI can do about that.
Well, yes, this much is evident to anyone who has actually worked with data analysis (i.e. garbage in = garbage out, and no amount of clever algorithms will help produce non-garbage out of garbage), but there are also people - pretty popular people among the super-intelligence community - who say things like this:
Quote
A Bayesian superintelligence, hooked up to a webcam, would invent General Relativity as a hypothesis—perhaps not the dominant hypothesis, compared to Newtonian mechanics, but still a hypothesis under direct consideration—by the time it had seen the third frame of a falling apple.  It might guess it from the first frame, if it saw the statics of a bent blade of grass.
Basically, these people understand "super-intelligence" as "being omniscient", and "omniscient" as "having arbitrarily-powerful reality-warping powers".

And this happens often enough to drown any real arguments in this bullshit. Which is the reason why I don't take them seriously.
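For what it's worth, the "third frame of a falling apple" claim also runs into plain underdetermination: a handful of samples is consistent with many different laws. A toy sketch of this (my own made-up numbers and a hypothetical correction term, nothing from the quote): three frames of free fall are fit exactly by an ordinary constant-acceleration model even when the "true" law contains an extra term, so the frames alone can't favor one hypothesis over the other.

Code:
# Three data points can always be fit exactly by a quadratic-in-time
# (constant-acceleration) model, so they carry no information that
# distinguishes plain free fall from a law with a small extra term.
import numpy as np

dt = 1.0 / 30.0                       # assumed webcam frame interval
t = np.array([0.0, dt, 2 * dt])       # three frames

def fall_with_correction(t, g=9.81, eps=1e-4):
    # hypothetical "true" law with a tiny cubic correction
    return 0.5 * g * t**2 + eps * t**3

y = fall_with_correction(t)

coeffs = np.polyfit(t, y, deg=2)      # fit y = a*t^2 + b*t + c
residuals = y - np.polyval(coeffs, t)

print("inferred acceleration:", 2 * coeffs[0])
print("max residual:", np.abs(residuals).max())   # ~0: an exact fit
# Because the quadratic fits the three points exactly no matter which
# law generated them, the frames give the observer no grounds for
# preferring one hypothesis over the other.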

EDIT: Took the poll, and well, the essay I got contained this load of dung:
Quote
    One might think that the risk [..] arises only if the AI has been given some clearly open-ended final goal, such as to manufacture as many paperclips as possible. It is easy to see how this gives the superintelligent AI an insatiable appetite for matter and energy. […] But suppose that the goal is instead to make at least one million paperclips (meeting suitable design specifications) rather than to make as many as possible.

    One would like to think that an AI with such a goal would build one factory, use it to make a million paperclips, and then halt. Yet this may not be what would happen. Unless the AI’s motivation system is of a special kind, or there are additional elements in its final goal that penalize strategies that have excessively wide-ranging impacts on the world, there is no reason for the AI to cease activity upon achieving its goal. On the contrary: if the AI is a sensible Bayesian agent, it would never assign exactly zero probability to the hypothesis that it has not yet achieved its goal. […] The AI should therefore continue to make paperclips in order to reduce the (perhaps astronomically small) probability that it has somehow still failed to make at least a million of them, all appearances notwithstanding. There is nothing to be lost by continuing paperclip production and there is always at least some microscopic probability increment of achieving its final goal to be gained. Now it might be suggested that the remedy here is obvious. (But how obvious was it before it was pointed out that there was a problem here in need of remedying?)
Their "Bayesian" model of super-intelligence is so smart that it effortlessly takes over the world, yet so stupid that it can't even count. I'm fucking speechless.

Max™

  • Bay Watcher
  • [CULL:SQUARE]
Re: AI risk re-re-revisited [Participate in the survey!]
« Reply #12 on: September 23, 2016, 02:50:01 pm »

Quote from: MetalSlimeHunt
We could and should be doing so many better things like not all dying in the climate crisis and maybe trying to actually get a funded transhuman project off the ground instead of trying to invent AI Jesus to just give it to us. Or in this case, having abstract silly vacuous arguments about what we can do to keep AI Jesus from sending us to AI Hell.
>.> Even the most outlandish model runs don't suggest anything which could possibly involve us "all dying in the climate crisis" being a thing for centuries, man. Where does this come from? Did I miss the part where people start to spontaneously combust if the planet were to become half a Kelvin warmer? How is panic over potential kilodeath-scale outcomes over the next couple hundred years via sea level rise/heat waves/aridification/etc. any less silly than concern over potential gigadeath-scale outcomes via unexpected superintelligence excursion a matter of decades from now?

Though, if we start devoting ourselves to nothing but supercomputer construction on every bit of available land, the waste heat dumped into the environment could let you freak out over warming AND strong AI takeoffs!
Quote
As for the idea of "An AI would be able to predict every action of people blah blah blah". To be honest, I thought this to be true for a while. Putting it in a mathematical context, though, it's fundamentally impossible, assuming that the decision function of the human brain is a chaotic function, i.e. varies wildly for very close inputs (even if they appear close over small enough metrics of time close to 0), topologically mixes (covers every single possibility over a long enough period of time regardless of input) and has dense periodic orbits (for particular inputs the system might be predictable, but not everywhere). This system is, by its construction, impossible to mathematically model for all given inputs. No AI could "simulate" this system.
I've been told my jerk circuits have a strangely attracting property, but I don't think I've ever been called a fractal before. My self-similarity is pretty low, after all.

More to the point: the axiom of choice exists; you can construct decision functions over sets of choices and criteria, and the physical limits on the complexity of the hardware running this function should put an upper bound on the information needed to represent it at any point in time. I do agree that an AI which assumes we've all got butterflies flapping around inside our skulls would be terrible at predicting our behavior, though, so I guess I have to agree with you in case we don't end up with a benevolent strong AI, though it might view too much noise in the system as something to be reduced... heck of a quandary there.

MetalSlimeHunt

  • Bay Watcher
  • Gerrymander Commander
Re: AI risk re-re-revisited [Participate in the survey!]
« Reply #13 on: September 23, 2016, 04:51:40 pm »

Quote from: MetalSlimeHunt
We could and should be doing so many better things like not all dying in the climate crisis and maybe trying to actually get a funded transhuman project off the ground instead of trying to invent AI Jesus to just give it to us. Or in this case, having abstract silly vacuous arguments about what we can do to keep AI Jesus from sending us to AI Hell.
Quote from: Max™
>.> Even the most outlandish model runs don't suggest anything which could possibly involve us "all dying in the climate crisis" being a thing for centuries, man. Where does this come from? Did I miss the part where people start to spontaneously combust if the planet were to become half a Kelvin warmer? How is panic over potential kilodeath-scale outcomes over the next couple hundred years via sea level rise/heat waves/aridification/etc. any less silly than concern over potential gigadeath-scale outcomes via unexpected superintelligence excursion a matter of decades from now?
We've already spoken at length about the dangers of the climate crisis, and how it's here now not in centuries, in other threads. Though the topic of this thread is an actually fake thing, let's not ruin it by repeating ourselves.

Max™

  • Bay Watcher
  • [CULL:SQUARE]
Re: AI risk re-re-revisited [Participate in the survey!]
« Reply #14 on: September 23, 2016, 04:58:11 pm »

Yeah, I don't have it in me to go any further than pointing out the silliness of choosing one possible bogeyman over another possible bogeyman anyways.