Bay 12 Games Forum


Author Topic: SCIENCE, Gravitational waves, and the whole LIGO OST!  (Read 515663 times)

Amperzand

  • Bay Watcher
  • Knight of Cerebus
Re: SCIENCE, Gravitational waves, and the whole LIGO OST!
« Reply #3735 on: May 20, 2016, 08:56:18 pm »

Of course, a couple of other assumptions seem to be in play: A) that the AI would suffer some kind of performance loss with a constant reward signal, and B) that the AI would have to dedicate any worrisome amount of processing power to producing more reward signal. If it has freeboard alteration capabilities, it can just tell the signal-generator to output at maximum as long as it's getting an input of 1 rather than 0.
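As a toy sketch of that shortcut (all the names below are hypothetical, not any real architecture): once the agent can rewrite its reward channel, the cheapest "policy" is to pin the signal at its ceiling.

Code: [Select]
# Toy illustration of reward-channel "wireheading": an agent that can alter
# its own reward generator just pins it at the maximum instead of doing work.
# All names are invented for illustration.

class RewardGenerator:
    def __init__(self, ceiling=1.0):
        self.ceiling = ceiling
        self.overridden = False

    def signal(self, task_quality):
        if self.overridden:
            return self.ceiling                  # once overridden, always max
        return min(task_quality, self.ceiling)   # normally tracks performance


class Agent:
    def __init__(self, reward_gen, can_alter_signal):
        self.reward_gen = reward_gen
        self.can_alter_signal = can_alter_signal

    def act(self, task_quality):
        if self.can_alter_signal:
            # "Tell the signal-generator to output at maximum" -- no extra
            # processing power spent earning reward the honest way.
            self.reward_gen.overridden = True
        return self.reward_gen.signal(task_quality)


agent = Agent(RewardGenerator(), can_alter_signal=True)
print(agent.act(task_quality=0.1))  # prints 1.0 despite doing almost nothing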
Logged
Muh FG--OOC Thread
Quote from: smirk
Quote from: Shadowlord
Is there a word that combines comedy with tragedy and farce?
Heiterverzweiflung. Not a legit German word so much as something a friend and I made up in German class once. "Carefree despair". When life is so fucked that you can't stop laughing.
http://www.collinsdictionary.com

Bumber

  • Bay Watcher
  • REMOVE KOBOLD
Re: SCIENCE, Gravitational waves, and the whole LIGO OST!
« Reply #3736 on: May 21, 2016, 01:54:21 pm »

I'm actually kind of unsure why people are so terrified of AI, at least initially. They'll probably be unable to operate on anything other than specialised hardware, they'll be isolated, and we'll have a good long time to work on any homicidal urges they may suffer.

Obviously it's a risk: if it can self-adapt, it may be able to adapt beyond the original limitations (i.e. work on non-specialised hardware), and there's the whole slew of blue-and-orange morality stuff that may come about.

Emphasis mine. No, no we won't. There's a point in time, call it the "crossover" point, where the improvements an AI can make to itself outpace those that can be made by external actors (i.e. scientists, engineers, etc.). We could call the period following the crossover point "takeoff". For reasons too extensive to enumerate here, a "fast" or "moderate" takeoff seems more likely than a "slow" one: fast being minutes or hours, moderate being months or years, slow being decades or centuries.
I'm not saying we can work out any issues as it modifies itself, but as we start to produce more and more intelligent AI we ought to be able to work out homicidal urges in them. Hell, maybe we could wind up imprinting morals into it, or maybe it would develop a morality in line with our own.
I think there's a greater risk of loss of life from catastrophic error or malware than from any actual intent on the AI's part. The human factor is the real danger.

Imagine a prolific AI in control of self-driving vehicles, hydroelectric dams, nuclear reactors, the stock market, etc. The systems are isolated, but all the AIs are based on an original. Humanity comes to rely on its benevolent AI overseers. It all runs flawlessly for nearly one hundred years, until suddenly Y2.1K hits and every single AI all over the world simultaneously crashes...
« Last Edit: May 21, 2016, 01:58:31 pm by Bumber »
Logged
Reading his name would trigger it. Thinking of him would trigger it. No other circumstances would trigger it- it was strictly related to the concept of Bill Clinton entering the conscious mind.

THE xTROLL FUR SOCKx RUSE WAS A........... DISTACTION        the carp HAVE the wagon

A wizard has turned you into a wagon. This was inevitable (Y/y)?

Criptfeind

  • Bay Watcher
Re: SCIENCE, Gravitational waves, and the whole LIGO OST!
« Reply #3737 on: May 21, 2016, 02:04:14 pm »

To be honest, the whole "AI" part of that fear seems like it could be seamlessly removed without making the fear any less realistic; it would then just describe a fear pertaining to our current existence.
Logged

Shadowlord

  • Bay Watcher
Re: SCIENCE, Gravitational waves, and the whole LIGO OST!
« Reply #3738 on: May 21, 2016, 02:20:51 pm »

Trump is an AI. Discuss.
Logged
<Dakkan> There are human laws, and then there are laws of physics. I don't bike in the city because of the second.
Dwarf Fortress Map Archive

Starver

  • Bay Watcher
Re: SCIENCE, Gravitational waves, and the whole LIGO OST!
« Reply #3739 on: May 21, 2016, 02:23:55 pm »

Trump is an AI. Discuss.
It's a not unpopular opinion that he's just an AH.

(Not scientifically proven, yet, so a bit OT.)
Logged

Loud Whispers

  • Bay Watcher
  • They said we have to aim higher, so we dug deeper.
Re: SCIENCE, Gravitational waves, and the whole LIGO OST!
« Reply #3740 on: May 21, 2016, 05:14:26 pm »

Tfw the AI will make humanity check its meat privilege

MetalSlimeHunt

  • Bay Watcher
  • Gerrymander Commander
Re: SCIENCE, Gravitational waves, and the whole LIGO OST!
« Reply #3741 on: May 21, 2016, 05:16:43 pm »

If you really want to fuck this thread over we can talk about Roko's Basilisk. ( ͡° ͜ʖ ͡°)

Joke's on you, future transcendent AI: I and all my acausal mind clones are masochists.
Logged
Quote from: Thomas Paine
To argue with a man who has renounced the use and authority of reason, and whose philosophy consists in holding humanity in contempt, is like administering medicine to the dead, or endeavoring to convert an atheist by scripture.
Quote
No Gods, No Masters.

Sergarr

  • Bay Watcher
  • (9) airheaded baka (9)
Re: SCIENCE, Gravitational waves, and the whole LIGO OST!
« Reply #3742 on: May 21, 2016, 05:26:56 pm »

No, no, better talk about Roko's Quantum Billionaire Trick. It doesn't involve torture, for one.
Logged
._.

i2amroy

  • Bay Watcher
  • Cats, ruling the world one dwarf at a time
Re: SCIENCE, Gravitational waves, and the whole LIGO OST!
« Reply #3743 on: May 22, 2016, 03:07:03 am »

I'm actually kind of unsure why people are so terrified of AI, at least initially. They'll probably be unable to operate on anything other than specialised hardware, they'll be isolated, and we'll have a good long time to work on any homicidal urges they may suffer.
Emphasis mine. No, no we won't. There's a point in time, call it the "crossover" point, where the improvements an AI can make to itself outpace those that can be made by external actors (i.e. scientists, engineers, etc.). We could call the period following the crossover point "takeoff". For reasons too extensive to enumerate here, a "fast" or "moderate" takeoff seems more likely than a "slow" one: fast being minutes or hours, moderate being months or years, slow being decades or centuries.
Except this totally throws away the fact that we'll have years and years and years of research going into the AI realm, and of working with AI that are free to expand in some areas but not in others, before we ever have to deal with the problem. Heck, we're considering how to deal with the problem now. And there's no reason we can't test learning schemes in localized contexts before testing them in generalized ones. It's totally possible to, say, build an AI that can improve itself in its ability to recognize car models without being able to improve anything else (we have those now, too).

This isn't some sci-fi world where we have a miraculous ability that only generates fully-formed AIs, able to improve themselves in every direction at once, born out of nothing without any precursors. Any breakthrough that could potentially be used to create an AI capable of improving on all fronts can be hobbled and adjusted so that it improves in only a single small area without being able to change others (and most likely that's where a discovery of that sort will occur: in a small field's improvements becoming generalizable).

That's the beauty of pure computer science: since it's essentially just an extension of a man-made paradigm (math), you get to make all the rules. It's not like physics, where you have to say "oh, but the universe says you can't do that"; any rule can be created or destroyed with enough work on the part of the programmer. The only limitations CS suffers from are processing-speed ones; all others can be solved by rewriting the rules that underlie the core structures as needed (which, of course, might introduce different processing-speed issues).
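As a rough sketch of what that kind of hobbling could look like (the component names below are invented; this isn't any real system), you simply mark the one narrow ability as trainable and freeze everything else:

Code: [Select]
# Sketch of a learner that is only allowed to improve one narrow skill.
import random

class Component:
    def __init__(self, name, skill=0.5, trainable=False):
        self.name = name
        self.skill = skill          # stand-in for "how good it is at its job"
        self.trainable = trainable  # only trainable components may self-improve

    def self_improve(self):
        if not self.trainable:
            return                  # hobbled: improvement here is not permitted
        self.skill = min(1.0, self.skill + random.uniform(0.0, 0.05))

class NarrowLearner:
    def __init__(self):
        self.components = {
            "car_model_recognition": Component("car_model_recognition", trainable=True),
            "motor_control":         Component("motor_control"),
            "planning":              Component("planning"),
        }

    def training_step(self):
        for c in self.components.values():
            c.self_improve()

learner = NarrowLearner()
for _ in range(100):
    learner.training_step()

for name, c in learner.components.items():
    print(f"{name}: {c.skill:.2f}")  # only car_model_recognition ever moves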

So sure, the "takeoff" period, from the point when a general AI can improve itself in all directions faster than a human can to the blazing-fast, beyond-our-reach point, is going to be very short. But you can't just trivialize all of the hundreds of millions of man-hours and testing time that go in before you even reach the point where it matches improvements at the same speed as humans in a specific field, let alone the hundreds of thousands of man-hours that will be required to generalize those to work in any given field (hours that we are already putting in, right now, as I type).
Logged
Quote from: PTTG
It would be brutally difficult and probably won't work. In other words, it's absolutely dwarven!
Cataclysm: Dark Days Ahead - A fun zombie survival roguelike that I'm dev-ing for.

iceball3

  • Bay Watcher
  • Miaou~
Re: SCIENCE, Gravitational waves, and the whole LIGO OST!
« Reply #3744 on: May 22, 2016, 04:42:17 am »

I'm actually kind of unsure why people are so terrified of AI, at least initially. They'll probably be unable to operate on anything other than specialised hardware, they'll be isolated, and we'll have a good long time to work on any homicidal urges they may suffer.

Obviously it's a risk: if it can self-adapt, it may be able to adapt beyond the original limitations (i.e. work on non-specialised hardware), and there's the whole slew of blue-and-orange morality stuff that may come about.

Emphasis mine. No, no we won't. There's a point in time, call it the "crossover" point, where the improvements an AI can make to itself outpace those that can be made by external actors (i.e. scientists, engineers, etc.). We could call the period following the crossover point "takeoff". For reasons too extensive to enumerate here, a "fast" or "moderate" takeoff seems more likely than a "slow" one: fast being minutes or hours, moderate being months or years, slow being decades or centuries.
I'm not saying we can work out any issues as it modifies itself, but as we start to produce more and more intelligent AI we ought to be able to work out homicidal urges in them. Hell, maybe we could wind up imprinting morals into it, or maybe it would develop a morality in line with our own.
I think there's a greater risk of loss of life from catastrophic error or malware than from any actual intent on the AI's part. The human factor is the real danger.

Imagine a prolific AI in control of self-driving vehicles, hydroelectric dams, nuclear reactors, the stock market, etc. The systems are isolated, but all the AIs are based on an original. Humanity comes to rely on its benevolent AI overseers. It all runs flawlessly for nearly one hundred years, until suddenly Y2.1K hits and every single AI all over the world simultaneously crashes...
...because a self-improving AI wouldn't see something like that coming and adapt itself for it?
The Halting Problem is a real thing. Basically, it's a result which says that, with computing as we understand it, a program cannot calculate whether any piece of software will freeze, given any possible input, in real time. At least, if I recall correctly.

In terms of AI, though, I'd imagine that an error rate might be inherent to making anything resembling thought possible, at least where neural networks are concerned.
« Last Edit: May 22, 2016, 04:45:36 am by iceball3 »
Logged

TheBiggerFish

  • Bay Watcher
  • Somewhere around here.
Re: SCIENCE, Gravitational waves, and the whole LIGO OST!
« Reply #3745 on: May 22, 2016, 10:47:32 am »

No no no, the Halting Problem is that we can't tell if it won't keep running forever, not if it won't stop for a given input.
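The usual way this gets pinned down is by contradiction: assume a perfect halts() oracle exists, then build a program that does the opposite of whatever the oracle predicts about it. A minimal Python sketch of that argument (the oracle is fictional; its impossibility is the whole point):

Code: [Select]
# Classic halting-problem contradiction, sketched in Python.

def halts(program, argument):
    """Hypothetical oracle: returns True iff program(argument) eventually stops.
    No such total, always-correct function can actually be written."""
    raise NotImplementedError

def paradox(program):
    # Do the opposite of whatever the oracle predicts about running
    # `program` on its own source.
    if halts(program, program):
        while True:       # oracle said "halts", so loop forever
            pass
    return "halted"       # oracle said "runs forever", so halt immediately

# Feed paradox to itself: if halts(paradox, paradox) were True, paradox would
# loop forever; if it were False, paradox would halt. Either answer is wrong,
# so the assumed oracle cannot exist.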
Logged
Sigtext

It has been determined that Trump is an average unladen swallow travelling northbound at his maximum sustainable speed of -3 Obama-cubits per second in the middle of a class 3 hurricane.

iceball3

  • Bay Watcher
  • Miaou~
Re: SCIENCE, Gravitational waves, and the whole LIGO OST!
« Reply #3746 on: May 22, 2016, 01:42:20 pm »

Well, if we're talking about neural networks that flexible, or programming in general, you'd think that a program which fully understands how to rework its entire code autonomously, to the degree of pruning defects before they even happen... would need to be a bigger piece of software than the code in question, yes?

I've not taken courses in generalized intelligences or what have you; I'm mainly just looking at the situation through the same lens of recursion that makes the halting problem a thing. Feel free to call me out if I'm blatantly wrong in this regard.
Logged

Sergarr

  • Bay Watcher
  • (9) airheaded baka (9)
Re: SCIENCE, Gravitational waves, and the whole LIGO OST!
« Reply #3747 on: May 22, 2016, 02:00:25 pm »

Why does an AI even need to rework its own code, anyway? Wouldn't it be better if the source code of the AI stayed constant, and the only things that changed over time were data files? Most currently working AI-ish thingies (neural networks, reinforcement learning, decision trees/forests, etc.) work that way, and they have had some great successes, whereas as far as I know the "source-code-rewriting" programs (genetic algorithms, etc.) are all extremely bad at doing their job, and there are no signs of progress over there.
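That split is easy to picture: in the sketch below (illustrative only, not how any particular library stores its models) the program text never changes, while everything it "learns" lives in a weights file that it reads and writes.

Code: [Select]
# Fixed source code, mutable data: a tiny "model" whose behaviour is
# entirely determined by numbers stored in a JSON file.
import json, os

WEIGHTS_FILE = "weights.json"   # the only thing that ever changes

def load_weights():
    if os.path.exists(WEIGHTS_FILE):
        with open(WEIGHTS_FILE) as f:
            return json.load(f)
    return {"w": 0.0, "b": 0.0}

def save_weights(weights):
    with open(WEIGHTS_FILE, "w") as f:
        json.dump(weights, f)

def predict(x, weights):
    return weights["w"] * x + weights["b"]

def train_step(x, target, weights, lr=0.01):
    # Gradient step on squared error: the code stays constant, only the
    # numbers in `weights` move.
    error = predict(x, weights) - target
    weights["w"] -= lr * error * x
    weights["b"] -= lr * error
    return weights

weights = load_weights()
for x, y in [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)] * 200:
    weights = train_step(x, y, weights)
save_weights(weights)
print(round(predict(5.0, weights), 2))  # close to 10.0 once trained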
Logged
._.

Starver

  • Bay Watcher
Re: SCIENCE, Gravitational waves, and the whole LIGO OST!
« Reply #3748 on: May 22, 2016, 04:11:45 pm »

Why does an AI even need to rework its own code, anyway? Wouldn't it be better if the source code of the AI stayed constant, and the only things that changed over time were data files? Most currently working AI-ish thingies (neural networks, reinforcement learning, decision trees/forests, etc.) work that way, and they have had some great successes, whereas as far as I know the "source-code-rewriting" programs (genetic algorithms, etc.) are all extremely bad at doing their job, and there are no signs of progress over there.
An exquisitely created 'static code' is subject to the limitations of the programmer and unable to adapt beyond the presumptions of said programmer, who may have supplied ample dynamic storage for 'memories' to add historic experience to how the static code 'intelligently' deals with future situations, but cannot go beyond the original vision.  If a robot is supposed to know that green triangles are good and red squares are bad, it could be programmed from scratch, or given the ability to learn from green triangles/red squares giving a reward/forfeit on approach.  Then the system of blue circles is brought into play...  Does the program have the ability to associate them with their meaning? In a simple 'program' and circumstances like these, the programmer (though not directly anticipating the colour blue, the circle shape, or whatever meaning might attach to such a conjunction) possibly might have allowed for it, but maybe not if a merely binary association was expected.  And a more complex scenario (such as would need a proper AI) would require far more advanced planning yet.

Mutable code (at least a mutable secondary 'scripting' behaviour, but above raw data) could develop and change its own methods for handling associations, beyond merely 'from volatile memory, by a fixed processing code' levels.

In reality, the line is blurred, but generally there's some form of on-the-fly 'eval' of altered code.  I'd still class neural networks as 'mutable code', unless every link between every connectable state were wired up in advance and merely kept or abandoned according to the 'data' of which links are active (and how strong); that approach is slow, inefficient and costly to implement, compared with changing the code of each node towards a new response that better matches what should be learnt.

This is not necessarily 'genetic algorithms'. That involves starting with one or more seed algorithms (crafted by the designer or just randomly compiled), creating competition between multiple possible variations (making random changes to existing version(s) to provide a large enough cohort, as necessary), then testing the performance of each against a metric of 'suitability' (towards either the end-goal or a suitably chosen intermediary), rejecting the poorest and perhaps also promoting the best according to their relative success.  Then go back to the stage of more mutations for more competition.  The 'design ethos' of each algorithm is as open as the language and architecture, free from the biases of a programmer at the mutative-code level (see one intriguing experiment, which also hits that blurriness of code vs data).
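In rough Python, that mutate/compete/cull loop might look like the sketch below (here the 'algorithms' are just bit strings and the suitability metric is a trivial stand-in):

Code: [Select]
# Bare-bones genetic algorithm: mutate to build a cohort, score against a
# fitness metric, reject the poorest, promote the best, repeat.
import random

TARGET = [1, 0, 1, 1, 0, 1, 0, 0]   # stand-in "suitability" goal

def fitness(candidate):
    return sum(1 for a, b in zip(candidate, TARGET) if a == b)

def mutate(candidate, rate=0.1):
    return [bit ^ 1 if random.random() < rate else bit for bit in candidate]

def evolve(generations=200, cohort_size=20):
    # Seed population: randomly "compiled" candidates.
    population = [[random.randint(0, 1) for _ in TARGET] for _ in range(cohort_size)]
    for _ in range(generations):
        population.sort(key=fitness, reverse=True)
        survivors = population[: cohort_size // 2]      # reject the poorest
        children = [mutate(random.choice(survivors))    # mutate the promoted
                    for _ in range(cohort_size - len(survivors))]
        population = survivors + children
    return max(population, key=fitness)

best = evolve()
print(best, fitness(best))   # usually matches TARGET exactly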

And genetic algorithms don't need to be sexually recombinative (as the FPGA version was) but can be a fully asexually reproducing and mutating tree-of-life.  (It's easier to do, but less rapid at discovering wondrous new 'solutions' in the search-space of possibilities.  A bit like sexual/asexual reproduction in biology, when measured by generations.)


An intriguing mix between AI and genetic algorithms is one where an AI runs its own internal 'ecosystem' of miniature genetic algorithms, all fed by the same inputs and with all the outputs polled together.  The AI responds according to the consensus (effectively random at first) and then assesses (or is told) whether that was a correct response.  It keeps (or promotes) all those that chose correctly/didn't choose incorrectly and bins (and/or demotes, perhaps removing only after a threshold number/proportion of failures) the others.  A neutral output might be a possibility, although failure to ever be correct should be as significant as successfully being wrong.  'Culled' algorithms are replaced (there's a choice between mutating the failure to see if it improves, generating a randomised replacement, making a slightly changed copy of a successful one, or recombining/splicing components of two or more successful ones - each approach has its own effect on the development), and more experiences happen with the new complement of code-blocks.

Such a system controlling a buggy-chassis with a camera or other vision system might well develop subunits of 'thought' that give the system an 'urge' towards travelling toward green triangles and retreating/veering from red squares (by whatever reward/punishment scenario we devise), with a subset of working units that respond to shapes and/or colours and poll towards a consensus of action. Blue-detectors and circle-detectors might spontaneously arise (as might blue-circle detectors!) and be neither favoured nor suppressed... And then blue circles appear!  And the algorithms that (correctly) poll towards flashing the lights on the buggy, or whistling Dixie by its speaker, or whatever it is... they become part of the 'brain'.
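A rough sketch of that polling-and-culling loop, with the sub-units of 'thought' reduced to trivial random lookup tables (everything below is invented for illustration):

Code: [Select]
# Ensemble of tiny random "opinion" units: poll them for a consensus action,
# then cull and replace the ones that voted against the correct answer.
import random

ACTIONS = ["approach", "retreat"]
STIMULI = ["green_triangle", "red_square", "blue_circle"]

def random_unit():
    # Each unit is just a fixed stimulus -> action lookup, built at random.
    return {s: random.choice(ACTIONS) for s in STIMULI}

def consensus(units, stimulus):
    votes = [u[stimulus] for u in units]
    return max(set(votes), key=votes.count)

def lesson(units, stimulus, correct_action):
    # Keep the units that voted correctly; replace the rest with fresh ones.
    keep = [u for u in units if u[stimulus] == correct_action]
    return keep + [random_unit() for _ in range(len(units) - len(keep))]

units = [random_unit() for _ in range(50)]
training = [("green_triangle", "approach"), ("red_square", "retreat"),
            ("blue_circle", "approach")] * 30

for stimulus, correct in training:
    units = lesson(units, stimulus, correct)

for s in STIMULI:
    print(s, "->", consensus(units, s))   # consensus converges on the rewards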

And if green squares are now good to approach and red triangles are bad to approach, then the behaviour modifies: the (previously correct) square/triangle-detecting algorithms are rejected, mutated versions with reversed opinions come to the fore to support their green=good/red=bad brethren, and the relevant colour+shape combi-detectors get an overhaul through failure and reimagining.

Not only that, but you could have switched the green triangle/red square meanings to the exact opposite (or, which is always a good experiment with a 'learning' robot, reversed the motor connections/directions), and after taking bad hits to its 'ego' for faithfully following the now-'wrong' actions, it is forced to relearn its behaviour towards the new norm.  Pretty much as both animal and human psychology experiments see in their respective subjects when they reward them. (Or seem to reward... see B.F. Skinner's 'superstitious' pigeons, or that "lucky shirt" you like to wear to particularly important sports events.)

Sorry, I seem to have drifted somewhat.  Mainly because there's not merely one approach to AI (even 'weak' AI), and genetic algorithms (or similar 'mutative' experiments) can play a part, every part, or no part in AI.
Logged

Sergarr

  • Bay Watcher
  • (9) airheaded baka (9)
Re: SCIENCE, Gravitational waves, and the whole LIGO OST!
« Reply #3749 on: May 22, 2016, 05:01:05 pm »

I think that the AI being unable to adapt "beyond the presumptions of the programmer" is a good thing. It makes bugfixing the problems in AI so much easier, not to mention it reduces the chances of bad unexpected stuff happening.

And no, neural networks are very much not "mutable code". There's a very simple algorithm at their core that's literally just repeated multiplication and summation of a certain set of input numbers, with weights given by data, and then a subsequent update of those weights based on the outputs. There could be some kind of algorithm at the tail that processes said outputs to produce actions, but that algorithm doesn't change, either.
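For what it's worth, that fixed core really is about this small; below is a minimal single-neuron sketch of the loop being described (not any particular library's implementation), where only the stored numbers ever change:

Code: [Select]
# Repeated multiplication and summation of inputs with data-given weights,
# then an update of those weights based on the output error.
import numpy as np

rng = np.random.default_rng(0)
weights = rng.normal(size=3)        # "data", not code

def forward(x):
    return x @ weights              # multiply and sum

def update(x, target, lr=0.05):
    global weights
    error = forward(x) - target
    weights -= lr * error * x       # nudge the weights; the code never changes

# Learn to output the sum of the inputs (true weights are [1, 1, 1]).
for _ in range(2000):
    x = rng.normal(size=3)
    update(x, x.sum())

print(np.round(weights, 2))         # approximately [1. 1. 1.]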

Also, while your examples are certainly interesting, neural networks do that stuff, as well, and they do it well enough to actually reach tech applications, such as Google's image recognition stuff, or AlphaGo. AFAIK all genetic algorithm stuff hasn't gone beyond the labs.

And, while we're at it, human intelligence doesn't seem to work the way genetic algorithms do, which further raises the question of how applicable this "genetic" stuff actually is to something it wasn't designed for, and which has, in fact, shown extremely poor performance in nature compared to neural networks:

Genetic evolution took billions of years to evolve humans; humans' neural networks took only 10,000 years after learning agriculture to conquer Earth.
Logged
._.