Bay 12 Games Forum


Author Topic: AI risk re-re-revisited  (Read 18978 times)

Frumple

  • Bay Watcher
  • The Prettiest Kyuuki
Re: AI risk re-re-revisited [Participate in the survey!]
« Reply #15 on: September 23, 2016, 05:12:40 pm »

Quote
All the scenarios where superintelligent AI kills all humans usually require that it finds/knows a way to do this without significant effort. But how realistic is that? Driving a specific pest extinct is incredibly difficult for humans and requires a giant effort.
Back a bit, but... the latter is incredibly difficult for humans and requires a giant effort because we still more or less require the same environment to live, and need it to mostly still be there once the pest is gone. We could probably wipe out, say, mosquitoes relatively easily at this point, ferex (tailored diseases, genetic muckery, etc.), but we don't because the various knock-on effects (biosphere disruption, potential mutation in diseases, or whatever) aren't worth whatever's involved. Unfortunately, most of the knock-on effects of wiping out humanity are, uh. Pretty positive. Particularly if you can still use our infrastructure and accumulated knowledge without actually needing the fleshsacks walking around crapping on everything >_>
Ask not!
What your country can hump for you.
Ask!
What you can hump for your country.

IronyOwl

  • Bay Watcher
  • Nope~
Re: AI risk re-re-revisited [Participate in the survey!]
« Reply #16 on: September 23, 2016, 05:53:18 pm »

Back to the survey for a moment: can I bitch about how stupid the questions are? "Once we invent a human-level AI, how likely is it to surpass all humans within a year" is a particularly dumbass way to phrase an escalation question, because you usually don't "invent" something that complicated out of whole cloth; you iterate on something you already had until it starts to look kinda like something else. Like, we "invented" computers because the weird doohickeys we were building eventually matched some arbitrary criteria, not because we up and made a useful modern computer on a whim.

So when you ask "will AI be better than us a year after we invent it," the impression I get is that you think somebody's literally just going to appear on television yelling GOOD NEWS I INVENTED A SUPERCOMPUTER THAT EVEN NOW GROWS IN STRENGTH, SOON IT WILL BE TOO POWERFUL FOR YOU TO STOP. As opposed to, you know, the far more likely scenario of Google's latest phone sex app getting patched to be better at managing your finances and rapping on command than you are.
Quote from: Radio Controlled (Discord)
A hand, a hand, my kingdom for a hot hand!
The kitchenette mold free, you move on to the pantry. it's nasty in there. The bacon is grazing on the lettuce. The ham is having an illicit affair with the prime rib, The potatoes see all, know all. A rat in boxer shorts smoking a foul smelling cigar is banging on a cabinet shouting about rent money.

Max™

  • Bay Watcher
  • [CULL:SQUARE]
Re: AI risk re-re-revisited [Participate in the survey!]
« Reply #17 on: September 23, 2016, 08:50:01 pm »

I've never seen human decision-making presented as a chaotic process, and I can't even find it being done anywhere; am I using the wrong search terms or something? It's like a perfectly irrational actor being subbed in for what I thought was the usual rational-individual/irrational-crowd type of modeling assumption.

Could it be possible that the chaotic function model only makes sense with the sort of incomplete information which one of us would possess?

Would that necessarily be the case were the completeness of our models and understanding improved?

Is there any point you can think of where someone or something vastly more intelligent than you might find the chaotic decision function model to be inaccurate?

I assume we have a similar quality of intelligence, but just reasoning it out from that starting assumption, couldn't the strange attractors for it look like rational-actor behavior anyway?

If so, why is the irrational actor assumption preferable? If not, why does the idea of a rational actor exist?

Also, in my essay there was something about the idea of spending time with a superintelligent spider being unpleasant. Stop the spiderbro hate!

Max™

  • Bay Watcher
  • [CULL:SQUARE]
Re: AI risk re-re-revisited [Participate in the survey!]
« Reply #18 on: September 23, 2016, 09:22:07 pm »

That is weirdly charitable and simultaneously uncharitable, well done!

There is a phase space of actions any given person is likely to take, which is a subset of the actions they could possibly take, which is a subset of actions that might be prohibited due to time or distance or physical limitations but are at least theoretically possible.

It sounds like you're arguing that the strange attractor in this situation would begin tracing out the entire universe, rather than a portion of the likely actions phase space.

It is possible I could get up now, walk outside, stick a beetle in my ear and run into traffic, but that is well outside the phase space of likely actions given the simple assumption that my mental state won't wildly change from one moment to the next.

Max™

  • Bay Watcher
  • [CULL:SQUARE]
Re: AI risk re-re-revisited [Participate in the survey!]
« Reply #19 on: September 23, 2016, 09:46:11 pm »

Quote
Of course. However, stretch that period out to 10 years from now. It's very possible that a series of tragic events could leave you completely mentally broken, and that set of circumstances would enter the phase space.

As I said, the phase space is actually rather small initially, depending on the event in question (sufficiently potent events, e.g. torture, war, etc., could represent the more "extreme" divergences between possible events and push towards what would not normally be considered in the phase space), and an AI could reasonably predict one's decisions over a more immediate time frame based on that immediate event. The decisions made as a result of this event, say, a year in the future, could not be predicted with any such accuracy, because the error between the initial assumptions and reality increases exponentially over time.
Assuming the chaotic decision function is a reasonable model, of course.

Still, though, we're discussing hypothetical minds of arbitrarily greater intelligence here. Suppose we could simulate the processes in a human mind well enough that, when run, it believes itself to be a human and is capable of demonstrating human-level intelligence. If we take the leap that it should be possible to produce something which, when run, demonstrates beyond-human-level intelligence, at what point is it too much of a leap to think it could run a subroutine with a simulation of a mind that, when run, believes itself to be you?

You find yourself being told by what appears to be you, claiming to be speaking from outside a simulation, that YOU are a simulation. How do you respond to this? What could you do to prove to yourself that you are or are not you?

Perhaps the idea that you could be simulated so well that the simulation is actually sitting over there on the other side of this internet connection discussing this with me isn't particularly comforting, but aside from some unknown attribute of "you"ness there is no real reason this scenario couldn't take place, is there?

I can't find a plausible choice for that attribute which I could use to make the distinction between me and sim!me, other than my continuity of awareness suggesting that I am either not the sim, or that it is an extremely in-depth model which either fully iterated my life or recovered my mental state exactly enough to leave me convinced that it was in fact my life.

Baffler

  • Bay Watcher
  • Caveat Lector.
Re: AI risk re-re-revisited [Participate in the survey!]
« Reply #20 on: September 23, 2016, 10:01:22 pm »

Quote
In other words, why is it implausible that an AI could run a sufficiently advanced simulation that the simulation thinks that it is the person?

Because it wouldn't know how. This should give you an idea of the problems we're currently banging our collective heads against. Direct to the source. It won't know anything we can't tell it, and the idea that it could somehow divine the answers to these problems out of sheer processing power brings us into omniscient AI Jesus territory.
Quote from: Helgoland
Even if you found a suitable opening, I doubt it would prove all too satisfying. And it might leave some nasty wounds, depending on the moral high ground's geology.
Location subject to periodic change.
Baffler likes silver, walnut trees, the color green, tanzanite, and dogs for their loyalty. When possible he prefers to consume beef, iced tea, and cornbread. He absolutely detests ticks.

Max™

  • Bay Watcher
  • [CULL:SQUARE]
Re: AI risk re-re-revisited [Participate in the survey!]
« Reply #21 on: September 23, 2016, 11:00:45 pm »

Quote from: Baffler
In other words, why is it implausible that an AI could run a sufficiently advanced simulation that the simulation thinks that it is the person?

Because it wouldn't know how. This should give you an idea of the problems we're currently banging our collective heads against. Direct to the source. It won't know anything we can't tell it, and the idea that it could somehow divine the answers to these problems out of sheer processing power brings us into omniscient AI Jesus territory.
We do have weird progress towards this end goal though: http://browser.openworm.org/#nav=4.04,-0.03,2.8

Quote
In other words, why is it implausible that an AI could run a sufficiently advanced simulation that the simulation thinks that it is the person?

Assuming the chaotic model here, it's really rather simple.

The AI would have to simulate the entire universe in exact detail. Sure, you might argue that that could be possible with sufficiently potent software design and hardware architecture.

However, such an AI would necessarily need to simulate itself.

It would need to simulate itself simulating itself.

And so on.

As to why it would need to simulate the entire universe, any chaotic model requires an exact simulation to get exact results; the system is deterministic. However, any error increases exponentially over time, so the simulation must be exact or else risk serious errors coming up. No one thing can be neglected from the simulation, due to the nature of mathematical chaos.
Ah, you're looking at a different problem. Intelligence is messy anyway, but the important thing is that there is no simple way for me to prove to you that I am sitting next to real!Ispil while we watch the outputs of the machine running sim!Ispil, i.e. the you I am speaking with on this forum.

Similarly, while there are numerous things you can do which suggest to me that the hypothesis that you are just a chat routine is falsified, I can't disprove that you actually think you exist without getting into solipsistic nonsense.

Now, taking the assumption that you have internally consistent mental states, and that you observe yourself to be embedded within a universe, what are the minimum requirements necessary to achieve that?

You can't go out and touch a star, so we only need to make stars behave plausibly when observed with certain equipment. You can't actually directly interact with anything more than a few feet away, so we need to apply a certain level of detail only within that volume, and thankfully we can fudge most of it because you lack microscale senses. We need to account for sound and light reflection, which is a bit more complex, but far from impossible. Smell and taste could be tricky, but they usually run at a sub-aware level, so we only need to call those up when prompted. Naturally the framework for your meatpuppet needs to send back certain data to emulate biomechanical feedback, but that isn't too onerous, and thankfully you are very unlikely to start trying to dig around inside your own chest cavity to see what is happening... though we should probably put in place some sort of placeholder we can drop the relevant models onto, just in case.

We could probably use backdrops and scenery from live footage to add another layer of verisimilitude. Most of the extra processing power would go towards making sure the (probably claustrophobic-sounding) box bounded by your limbs at full extension behaves as you expect it to; the self!sim itself will still eat a decent chunk of resources as it trundles around, but we can use things like a limited attention span and fatigue to trim a good amount of the overhead, outside of extended bouts of deep existential pondering.
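As a toy sketch of that observer-centred laziness (every name and number here is invented purely for illustration, not any real engine's API): detail is generated only when the observer can actually reach it, and everything else stays a cheap backdrop.

Code: [Select]
# Invented illustration: a "world" that renders detail only where the
# simulated observer can reach, and hands back cheap backdrop elsewhere.

class LazyWorld:
    def __init__(self, detail_fn, reach=2.0):
        self.detail_fn = detail_fn  # expensive detail generator, run on demand
        self.reach = reach          # how far the observer can interact
        self.cache = {}             # regions already rendered in full detail

    def observe(self, observer_pos, target_pos):
        if abs(target_pos - observer_pos) > self.reach:
            return "plausible backdrop"      # far field: cheap stand-in
        if target_pos not in self.cache:     # near field: render on first touch
            self.cache[target_pos] = self.detail_fn(target_pos)
        return self.cache[target_pos]

world = LazyWorld(detail_fn=lambda pos: f"full physics at {pos}")
print(world.observe(0.0, 1.5))      # within arm's reach: detailed
print(world.observe(0.0, 9.46e15))  # a star: backdrop only, never simulated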

Now, I'm not saying you should open your abdominal cavity and see if there are any graphical errors as chunks of it are rendered, but can you think of a way to prove you aren't in a glass case of emotionsimulation?

It doesn't need to be exact and complete to produce something which would think it was you or I. Yes, after initializing it there would be divergences as the decision factors for both take them down different routes through their respective phase spaces...

...but hey, just in case you were comfortable with the idea of sim!you existing in some hypothetical, don't forget that it would probably be more productive if the likely region of your decision phase space were mapped out intensively, so the question would then become: how many iterations of sim!you does it take to map out the most likely responses for real!you to any given stimuli?

I'm not saying that I would run endless sims of you and then shut them down after selecting the most useful data from the runs; doing that to someone with a similar level of intelligence and attachment to their own existence sounds horrific to me. But I'm not a godlike AI without a reason to be attached to the particular mental state of specific individuals, am I?

Quote
And yes, this is all assuming that the chaotic decision function is a reasonable model. If it isn't, then the decision model is either stochastic, or deterministic with polynomial (or slower) growth of any perturbation in the input.

In other words, humans are either chaotic, random, or predictable over the entirety of the phase space (in this case, the Oxford comma is in use; the "entirety of the phase space" only applies to predictable). There of course exist plenty of inputs for which particular decision functions with particular priors yield predictable decisions; those are the periodic orbits of the decision function.
You omit the possibility that it could be chaotic yet deterministic, and hence predictable with perfect initial information. Figuring out what will happen when you start a chaotic system is totally possible if you know exactly how you started it.

These seem like another way of describing the different subsets of possible actions: random actions covering the broadest region if given enough time to evolve; chaotic actions having a likely portion of the phase space; and deterministic actions providing anchor points--we know you and I will eat, drink, breathe, sleep, and so forth, though we can choose to activate or delay these processes--which staple parts of the other two sets together. There are no random actions which result in someone living and breathing without any environmental protection in the upper atmosphere of Jupiter tomorrow, and there are no chaotic trajectories in which you both refuse to eat and avoid death in the near future.

I may not know the initial conditions well enough to make these predictions about a chaotic decision function, and you may not either, but can you confidently state that it is impossible to know them?
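To make the exponential-error point concrete, here's a minimal numerical sketch using the logistic map as a stand-in chaotic system (the map and parameters are illustrative assumptions, not a model of minds): with identical initial conditions the runs agree forever, while a one-in-a-billion difference blows up within a few dozen steps.

Code: [Select]
# The logistic map x -> r*x*(1 - x) at r = 4 is a standard chaotic system.

def trajectory(x0, steps, r=4.0):
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1.0 - xs[-1]))
    return xs

exact = trajectory(0.400000000, 50)   # "perfect initial information"
close = trajectory(0.400000001, 50)   # off by one part in a billion

for t in (0, 10, 20, 30, 40, 50):
    print(f"step {t:2d}: gap = {abs(exact[t] - close[t]):.9f}")

# Rerunning trajectory(0.4, 50) reproduces `exact` bit-for-bit: deterministic,
# so perfectly predictable given the exact starting point. But the tiny error
# above is roughly doubled each step, reaching order 1 within a few dozen
# iterations -- exact prediction requires exact initial information.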

Max™

  • Bay Watcher
  • [CULL:SQUARE]
Re: AI risk re-re-revisited [Participate in the survey!]
« Reply #22 on: September 23, 2016, 11:13:29 pm »

I left it for you to edit again if you wanted, but you didn't. Like I said, given perfect information about the initial state you can predict what a chaotic system will do. Which goes back to the question: if that can be known, how does chaos prevent a simulation of someone from behaving just like they would? The whole universe-simulation argument has some merit, but it seems excessive if the goal is just producing something which believes it is a given individual, as we ourselves lack the vast majority of the information which would make said universe sim necessary.

Max™

  • Bay Watcher
  • [CULL:SQUARE]
Re: AI risk re-re-revisited [Participate in the survey!]
« Reply #23 on: September 23, 2016, 11:51:28 pm »

I am mad at you for making me delve into various searches trying to find something to support a chaotic decision-making theory of mind, though I know you couldn't have predicted I would wind up encountering all the pseudospiritualistic bullshit I just went through, with like, whoa, things are like, butterflies and like, shit happens man, so like, it's cool.

I still think you're overestimating the requirements for something like fooling a human-level intelligence into accepting and responding to the environment in a realistic fashion. Chaos theory or not, it is totally plausible for something with more information than you or I to produce a simulation with a fine enough grain that we can't distinguish it from reality.

Arguing that you need a perfect universe sim to produce a reasonably accurate human sim goes towards the implication that human behavior is damn near random; otherwise, stuff like the specific motion of a particle here or there wouldn't matter.

The universe only observes itself with limited instruments, stuff like us, and we are really fucking shitty instruments for doing this.

Making a universe sim that could fool an arbitrarily powerful mind is where I totally agree with you about it being impossible, but we've gotten pretty good at convincing your mind that it is in fact doing something somewhere completely different from where you know yourself to be, like the edge of a building which isn't even trying to look realistic.

I love that guy, btw, "I am so glad you landed that... I had my eyes closed."

Folly

  • Bay Watcher
  • Steam Profile: 76561197996956175
Re: AI risk re-re-revisited [Participate in the survey!]
« Reply #24 on: September 24, 2016, 12:17:10 am »

I'm optimistic about cyborg technology outpacing AI.
By the time that AI starts evolving independent of human intervention, we should all have computer chips throughout our brains allowing us to match the AIs in thinking speed, and Mega Man fists that can shoot lasers at any bots that try to attack us.

MetalSlimeHunt

  • Bay Watcher
  • Gerrymander Commander
Re: AI risk re-re-revisited [Participate in the survey!]
« Reply #25 on: September 24, 2016, 04:57:12 am »

Quote from: Folly
I'm optimistic about cyborg technology outpacing AI.
By the time that AI starts evolving independent of human intervention, we should all have computer chips throughout our brains allowing us to match the AIs in thinking speed, and Mega Man fists that can shoot lasers at any bots that try to attack us.
Ah-ha, but you see, you would be Ozymandias: Which is easier, for your computer chips to do their programmed tasks or to brainwash you into wanting to nuke the world, thus curing cancer! Oh, you poor deluded innocent, thank god we have enlightened folk like me to rationalize your utility functions in these matters. /s
Quote from: Thomas Paine
To argue with a man who has renounced the use and authority of reason, and whose philosophy consists in holding humanity in contempt, is like administering medicine to the dead, or endeavoring to convert an atheist by scripture.
Quote
No Gods, No Masters.

Sergarr

  • Bay Watcher
  • (9) airheaded baka (9)
Re: AI risk re-re-revisited [Participate in the survey!]
« Reply #26 on: September 24, 2016, 06:19:34 am »

I wonder if it can be argued that if a system is Turing complete, then it exhibits chaos.
There's the halting problem, which, while it isn't technically chaos, means that in general the only way to be certain of a program's execution length is to run it. Since a program's output can be made a function of its execution length, in general the only way to be certain of a program's output is to run that exact program.
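A toy illustration of that last move (Collatz stopping times are my choice of example, not anything from the thread): make the output be the running time, and predicting the output without running the program would settle a famously open question.

Code: [Select]
# Tie a program's *output* to its *running time*, so predicting the output
# means predicting how long the loop runs. Whether this loop even halts for
# every n is the open Collatz conjecture, and no shortcut for the step count
# is known.

def collatz_steps(n):
    """Output = number of steps n takes to reach 1 under the Collatz rule."""
    steps = 0
    while n != 1:
        n = 3 * n + 1 if n % 2 else n // 2
        steps += 1
    return steps  # the output *is* the execution length

for n in (6, 7, 27):
    print(n, "->", collatz_steps(n))  # the only known way to get these
                                      # numbers is to run the loop itself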
._.

Frumple

  • Bay Watcher
  • The Prettiest Kyuuki
Re: AI risk re-re-revisited [Participate in the survey!]
« Reply #27 on: September 24, 2016, 07:19:11 am »

Quote from: Folly
I'm optimistic about cyborg technology outpacing AI.
By the time that AI starts evolving independent of human intervention, we should all have computer chips throughout our brains allowing us to match the AIs in thinking speed, and Mega Man fists that can shoot lasers at any bots that try to attack us.
The trick is, just like our current functionally!cyborg technology, the chips probably won't be in our brains. We do have the occasional bit of internal or grafted cybertech at the moment, but it looks a lot like most of our development there is going to continue the way it currently goes -- via external peripherals. It's a lot safer, probably a fair bit more efficient, and certainly, at the moment, a hell of a lot easier to just... make stuff that interfaces with the wetware via the wetware instead of implanted hardware. Smartphones, glasses, guns... bluetooth, developing AR software, etc., etc. Conceptually we could probably wire some of those directly to our brains even now (with likely fairly shoddy results -- results, but not particularly decent ones), but... why, when you can get the same effect laying it on the palm of your hand or building it into your eyewear?

I mean. Other than the awesome factor and maybe the glowing laser eyes and whatnot. I'm sure that's reason enough to many but it probably won't be for the folks funding development for quite a long while :V
Ask not!
What your country can hump for you.
Ask!
What you can hump for your country.

Radio Controlled

  • Bay Watcher
  • Morals? Ethics? Conscience? HA!
Re: AI risk re-re-revisited [Participate in the survey!]
« Reply #28 on: September 24, 2016, 06:09:57 pm »

Quote
Their "Bayesian" model of super-intelligence is so smart that it effortlessly takes over the world, yet so stupid that it can't even count. I'm fucking speechless.

I thought the idea there was a sort of solipsism-for-computers: the AI can't be 100% certain it has made sufficient paperclips yet, so it'll keep working to diminish the chance that it hasn't. After all, it might have a camera feed to count the number of paperclips rolling off the factory floor, but who's to say the video feed isn't a recording/simulation made by those dastardly humans! As part of a test to check the AI's behaviour, perhaps, or because they thought a reverse Matrix would be hilarious. Or maybe a small software glitch made the computer miscount by 1, so better make more paperclips just to be a little extra sure.
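A back-of-the-envelope sketch of that reading (the sensor-error number and the binomial model are invented assumptions, purely to illustrate): if every observed paperclip has some independent chance of being illusory, the probability the goal is truly met approaches 1 but never reaches it, so one more paperclip always helps a little.

Code: [Select]
# Invented model: each observed paperclip is real with probability P_REAL
# (the 0.99 is made up). The AI's probability that at least GOAL paperclips
# truly exist rises with every extra paperclip made, but never reaches 1.
from math import comb

P_REAL = 0.99   # chance a single observed paperclip isn't a sensor glitch
GOAL = 100      # paperclips the AI was asked to make

def p_goal_met(n):
    """P(at least GOAL of n observed paperclips are real), binomial model."""
    return sum(comb(n, k) * P_REAL**k * (1 - P_REAL)**(n - k)
               for k in range(GOAL, n + 1))

for n in (100, 105, 110, 120):
    print(n, f"{p_goal_met(n):.10f}")
# The printed values crowd toward 1 (and eventually round to it in floating
# point), but the exact probability is strictly below 1 for every finite n --
# so a maximizer of P(goal met) always wants one more paperclip.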


Einsteinian Roulette Wiki
Quote from: you know who you are
21:26   <XYZ>: I know nothing about this, but I have strong opinions about it.
Fucking hell, you guys are worse than the demons.

Dozebôm Lolumzalìs

  • Bay Watcher
  • what even is truth
Re: AI risk re-re-revisited [Participate in the survey!]
« Reply #29 on: September 25, 2016, 04:13:26 pm »

Is that sarcasm? Cars and cranes are made by humans, and are both metallic. That doesn't make every manmade object metallic.

Unless every T-complete system can be described as a sum of multiples of R110 and GoL. But they aren't vectors, so I find that unlikely.
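For reference, Rule 110 -- one of the Turing-complete systems name-dropped there, proved universal by Matthew Cook -- takes only a few lines to simulate (a self-contained sketch; the width, step count, and rendering characters are arbitrary choices of mine):

Code: [Select]
RULE = 110  # the 8-bit update table is the rule number in binary

def step(cells):
    """One synchronous update of a row of 0/1 cells (dead beyond the edges)."""
    p = [0] + cells + [0]
    return [(RULE >> (p[i - 1] * 4 + p[i] * 2 + p[i + 1])) & 1
            for i in range(1, len(p) - 1)]

row = [0] * 30 + [1]  # a single live cell; Rule 110 grows leftward from it
for _ in range(15):
    print("".join(".#"[c] for c in row))
    row = step(row)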
Quote from: King James Programming
...Simplification leaves us with the black extra-cosmic gulfs it throws open before our frenzied eyes...
Quote from: Salvané Descocrates
The only difference between me and a fool is that I know that I know only that I think, therefore I am.
Sigtext!