Bay 12 Games Forum


Author Topic: The Technological Singularity thread: an explaination of concepts  (Read 6702 times)

dragnar

  • Bay Watcher
  • [Glub]
    • View Profile
Re: The Technological Singularity thread: an explaination of concepts
« Reply #30 on: May 27, 2010, 09:06:12 am »

That's the dream anyway. Who knows how it will actually turn out.
Logged
From this thread, I learned that video cameras have a dangerosity of 60 kiloswords per second.  Thanks again, Mad Max.

alway

  • Bay Watcher
  • 🏳️‍⚧️
    • View Profile
Re: The Technological Singularity thread: an explaination of concepts
« Reply #31 on: May 27, 2010, 09:17:50 am »

The problem here is that what computers are good at is brute-force logic and computations, whereas the human mind works heuristically.  To overcome the human mind with brute-force logic and computations, you have to follow each possible branch of action to a preset conclusion, at which point you evaluate each eventuality, rank the paths, and follow the best one.
There are much better ways of doing it. One example, Artificial Neural Networks, essentially model the neurons found in the brain; they would work exactly the same as a human brain if programmed correctly. The difficulty is in getting the structure correct.
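To make the idea concrete, here is a toy sketch of a single artificial "neuron" (a perceptron) learning the logical AND function. The training data, learning rate, and epoch count are all made-up illustration values, and a brain-scale network would obviously be nothing like four samples and three weights; this just shows the weight-nudging idea.

```python
def train_perceptron(samples, epochs=20, lr=0.1):
    """Train a single artificial 'neuron' with a step activation."""
    w, b = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for (x1, x2), target in samples:
            out = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            err = target - out
            # Nudge each weight toward the target output, loosely analogous
            # to strengthening or weakening a synapse.
            w[0] += lr * err * x1
            w[1] += lr * err * x2
            b += lr * err
    return w, b

def predict(w, b, x1, x2):
    return 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0

# Teach the neuron the logical AND function.
and_gate = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train_perceptron(and_gate)
```

The hard part, as the post says, is structure: stacking and wiring huge numbers of these units, not the arithmetic of any one of them.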
There are, of course, some more current modes of AI where the program is shown "good" and "bad" examples, and its job is to learn what makes the good examples good and the bad examples bad.  The problem is that many situations are too complex for the AI to properly evaluate within a "reasonable" amount of time, whereas the human mind generally does a much better job at playing such combinatorial games as chess and go.
Learning algorithms do not depend on combinatorial methods all that frequently. About the only place combinatorial methods are the sole method of an AI is in video games, where emergent learning algorithms like ANNs are too much of a pain to use. In a video game, combinatorial methods are used because the number of tasks the AI must do is relatively small, and the tasks themselves are simple. Learning algorithms are bad for this, since the emergent behavior is extremely difficult to predict, and adding a single new task to the game AI means an entirely new training set must be made up and run through. In an attempt to make a strong AI, though, the problem size and responses required would be massive. Learning algorithms like ANNs are much more efficient than combinatorial approaches for these universe-sized problem sets.
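For contrast, here is roughly what the "combinatorial" style of game AI looks like: an exhaustive minimax search over a toy Nim variant (a pile of stones, take 1-3 per turn, taking the last stone wins). The game and function names are my own illustration, not anything from the post; the point is that every branch is followed to its conclusion and the moves are ranked, exactly as described above.

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def best_move(pile):
    """Exhaustively follow every branch of the game tree and rank the moves.

    Returns (score, move) for the player about to act: score 1 means a
    forced win, -1 a forced loss (in which case move is None)."""
    if pile == 0:
        return (-1, None)  # no stones left: the previous player just won
    best = (-1, None)
    for take in (1, 2, 3):
        if take <= pile:
            opp_score, _ = best_move(pile - take)
            # A position is as good for us as it is bad for the opponent.
            if -opp_score > best[0]:
                best = (-opp_score, take)
    return best
```

This is tractable only because the game tree is tiny; the post's point is that this style scales hopelessly for "universe-sized" problems.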
In general, though, to unlock AI that surpasses the human mind you're going to need much, much better processors than we have right now--and IIRC, it has been proven that our current materials/mode of execution cannot possibly support the needed computing power (even with our rate of increase, there's an asymptote/leveling off that is supposed to happen pretty soon due to physical limitations).  On the other hand, there's a good deal of hope in quantum computers and nanotech, but conventional computing will fail us here.
First of all, we have been 10 years away from the end of Moore's Law for several decades. While it is true there are physical limitations on the size of transistors, especially as they approach roughly 10 nanometers, this can potentially be made up for by other coming advances: quantum computing, 3D chips, spintronics, etc.
* My apologies, by the way, since I can't support most of my statements.  Most of it comes from bits and pieces of popularizations and meanderings on the internet, as well as newspaper articles.  If you want to know more, you could try out Penrose's The Emperor's New Mind and Hofstadter's Gödel, Escher, Bach.  Other things to search/look into would be the AI winter and refutations of the strong AI hypothesis.
Yes, I've looked into the various AI winters. http://en.wikipedia.org/wiki/AI_winter
They pass with time and have little to do with the actual AI. Just people's perception of them.
Logged

alway

  • Bay Watcher
  • 🏳️‍⚧️
    • View Profile
Re: The Technological Singularity thread: an explaination of concepts
« Reply #32 on: May 27, 2010, 09:24:30 am »

If science gets good enough that we can build AI with humanlike brains, what's to say we won't attach them to our own heads and BECOME the AI?

I'll never forget the first time I switched from an old Pentium to a P3 with a 3D card. Holy crap.  Games looked so much better in higher res with more texture detail.

Imagine that happening to your real brain: storing 100x what you already can in your short-term memory, and crunching numbers tenfold faster.

There's your singularity: an entire world of billions of people who make Einstein look like a monkey.
Yep, that's one of the major views of Singularitarians. Human mental enhancement along with the AI, eventually reaching the level of ourselves becoming AI.
However, considering human short term memory is limited to only around 10 objects, it would be closer to several million times greater with even cheap RAM on today's market. And the speed of number crunching would see a similar increase.
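Back-of-envelope version of that claim (the 2 GB stick and the 64-byte "object" size are my own assumptions for illustration, not figures from the post):

```python
human_stm_items = 10                 # Miller's "seven, plus or minus two", rounded up
ram_bytes = 2 * 1024**3              # a cheap 2 GB stick, circa 2010
bytes_per_item = 64                  # assume one remembered "object" fits in 64 bytes

ram_items = ram_bytes // bytes_per_item
ratio = ram_items // human_stm_items
# ratio comes out in the millions, consistent with
# "several million times greater"
```

The exact ratio swings with the assumed object size, but it stays in the millions for any plausible choice.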
Logged

Sir Pseudonymous

  • Bay Watcher
    • View Profile
Re: The Technological Singularity thread: an explaination of concepts
« Reply #33 on: May 27, 2010, 09:54:24 am »

tl;dr: VECTOR. TEACH ME THE MATHS.

The problem here is that what computers are good at is brute-force logic and computations, whereas the human mind works heuristically.  To overcome the human mind with brute-force logic and computations, you have to follow each possible branch of action to a preset conclusion, at which point you evaluate each eventuality, rank the paths, and follow the best one.

There are, of course, some more current modes of AI where the program is shown "good" and "bad" examples, and its job is to learn what makes the good examples good and the bad examples bad.  The problem is that many situations are too complex for the AI to properly evaluate within a "reasonable" amount of time, whereas the human mind generally does a much better job at playing such combinatorial games as chess and go.

In general, though, to unlock AI that surpasses the human mind you're going to need much, much better processors than we have right now--and IIRC, it has been proven that our current materials/mode of execution cannot possibly support the needed computing power (even with our rate of increase, there's an asymptote/leveling off that is supposed to happen pretty soon due to physical limitations).  On the other hand, there's a good deal of hope in quantum computers and nanotech, but conventional computing will fail us here.


* My apologies, by the way, since I can't support most of my statements.  Most of it comes from bits and pieces of popularizations and meanderings on the internet, as well as newspaper articles.  If you want to know more, you could try out Penrose's The Emperor's New Mind and Hofstadter's Gödel, Escher, Bach.  Other things to search/look into would be the AI winter and refutations of the strong AI hypothesis.
:|

>:|

No. Modern processors are just grossly inferior to the human brain's capacity, and heuristic algorithms are really nothing more than well-crafted brute-force attacks on a problem. The human mind produces the appearance of a fuzzy, illogical blur of somehow-not-dying instead of a rational machine simply because of the sheer scale of its task: it takes staggering amounts of incomplete data, cross-references them against even-more-staggering amounts of stored events and conditions with all their associated data, and runs massively parallel genetic algorithms to determine proper responses based on the probability that the current situation matches known ones. This all happens in the blink of an eye. For comparison, the average "fast" processor makes a goldfish swimming in a bottle of vodka look like a fucking god. Of course it's not going to match our capabilities, and there's not going to be a real wealth of prior art in crafting similarly effective algorithms, on account of only the fastest supercomputers even coming close to the point where they could run effectively.

Armok, I'm usually one for reading posted links, but when those posted links are compendiums of articles, I give up. If you want to continue saying that you're smarter than us in every way, back it up. Summarize your links. Correct errors. Otherwise, I don't think that anyone will really take these pot-shots as serious posts.

Vector: Thanks!
Don't criticize Armok! His alleged knowledge of ancient philosophy clearly makes him the most qualified to comment on theoretical technology!

@dragnar: it won't be done "right" though, it will be done by evolution. Meaning the AI that takes over will be the AI that's best at self-replicating and seizing control, not the one that's best at running a society that makes humans happy.
That's radically misinterpreting the environmental conditions the AI would be tested against. It's just so divorced from reality for so many reasons that I am left speechless trying to come up with an appropriate response. There's no way I could cover just how wrong it is.
Logged
I'm all for eating the heart of your enemies to gain their courage though.

Jude

  • Bay Watcher
    • View Profile
Re: The Technological Singularity thread: an explaination of concepts
« Reply #34 on: May 27, 2010, 10:35:16 am »

Humor me and try

After all, once AIs start self-replicating, how does it not stand to reason that the most prolific one will be the one that's best at self-replicating?
Logged
Quote from: Raphite1
I once started with a dwarf that was "belarded by great hanging sacks of fat."

Oh Jesus

Sir Pseudonymous

  • Bay Watcher
    • View Profile
Re: The Technological Singularity thread: an explaination of concepts
« Reply #35 on: May 27, 2010, 12:49:22 pm »

>:|

Computers don't work that way. In producing a self-improving AI you'd presumably use something of a genetic algorithm, where it produces essentially random variations on a piece of code, and then runs it through test conditions against code that's known to work. If one variation performs better (faster, more accurately, whatever you're looking for) than the baseline, it becomes the new baseline to test further variants against. To grossly oversimplify the idea of a genetic algorithm. So, the only way it can improve itself is by better meeting the conditions we want it to. While the notion is the same as biological evolution, the test conditions are radically different; instead of "survive long enough to reproduce and/or ensure the survival of your young", it's "determine ways to meet the specific conditions we lay out the best, then implement them". Unless the programmer was a psychotic loon working without oversight, the conditions wouldn't include "infect as many systems as possible just for shits and giggles".
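The oversimplified loop described above can be sketched in a few lines. This is my own toy example (the target string, mutation rate, and population size are arbitrary, and real genetic algorithms add crossover, richer fitness functions, and so on); it just shows "mutate, test against a fixed condition, promote whatever beats the baseline":

```python
import random

def evolve(target, generations=2000, pop_size=20, mut_rate=0.1):
    """Produce random variations on a baseline, score them against a fixed
    test (closeness to `target`), and promote any variant that beats the
    baseline -- the mutate-and-select loop described above."""
    random.seed(42)
    alphabet = "abcdefghijklmnopqrstuvwxyz "

    def fitness(s):
        return sum(a == b for a, b in zip(s, target))

    baseline = "".join(random.choice(alphabet) for _ in target)
    for _ in range(generations):
        variants = ["".join(c if random.random() > mut_rate
                            else random.choice(alphabet) for c in baseline)
                    for _ in range(pop_size)]
        best = max(variants, key=fitness)
        if fitness(best) > fitness(baseline):
            baseline = best  # the improved variant becomes the new baseline
        if baseline == target:
            break
    return baseline
```

Note that the only direction the program can "evolve" in is the one the fitness test rewards, which is the post's whole point.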
Logged
I'm all for eating the heart of your enemies to gain their courage though.

Vector

  • Bay Watcher
    • View Profile
Re: The Technological Singularity thread: an explaination of concepts
« Reply #36 on: May 27, 2010, 01:35:54 pm »

My apologies for my oversights ^_^;;  It seems that much of my data is not only outdated, but also incomplete.  Thank you for the corrections.


I'll admit that the singularity also creeps me out for personal reasons, so I may be arguing more from an emotional standpoint than a purely logical one.
Logged
"The question of the usefulness of poetry arises only in periods of its decline, while in periods of its flowering, no one doubts its total uselessness." - Boris Pasternak

nonbinary/genderfluid/genderqueer renegade mathematician and mafia subforum limpet. please avoid quoting me.

pronouns: prefer neutral ones, others are fine. height: 5'3".

Jude

  • Bay Watcher
    • View Profile
Re: The Technological Singularity thread: an explaination of concepts
« Reply #37 on: May 27, 2010, 01:59:57 pm »

>:|

Computers don't work that way. In producing a self-improving AI you'd presumably use something of a genetic algorithm, where it produces essentially random variations on a piece of code, and then runs it through test conditions against code that's known to work. If one variation performs better (faster, more accurately, whatever you're looking for) than the baseline, it becomes the new baseline to test further variants against. To grossly oversimplify the idea of a genetic algorithm. So, the only way it can improve itself is by better meeting the conditions we want it to. While the notion is the same as biological evolution, the test conditions are radically different; instead of "survive long enough to reproduce and/or ensure the survival of your young", it's "determine ways to meet the specific conditions we lay out the best, then implement them". Unless the programmer was a psychotic loon working without oversight, the conditions wouldn't include "infect as many systems as possible just for shits and giggles".

I get all that. What I was saying though, is that once systems can self-modify, it's only a matter of time till one comes into being whose goal is to replicate itself as much as possible, and when that happens, it'll overrun everything.
Logged
Quote from: Raphite1
I once started with a dwarf that was "belarded by great hanging sacks of fat."

Oh Jesus

dragnar

  • Bay Watcher
  • [Glub]
    • View Profile
Re: The Technological Singularity thread: an explaination of concepts
« Reply #38 on: May 27, 2010, 02:17:36 pm »

Except even one better-developed AI could essentially just delete it. Unless some psycho builds an AI with that as its only goal before any other AI is developed, it will never be able to take over.
Logged
From this thread, I learned that video cameras have a dangerosity of 60 kiloswords per second.  Thanks again, Mad Max.

alway

  • Bay Watcher
  • 🏳️‍⚧️
    • View Profile
Re: The Technological Singularity thread: an explaination of concepts
« Reply #39 on: May 27, 2010, 03:24:09 pm »

My apologies for my oversights ^_^;;  It seems that much of my data is not only outdated, but also incomplete.  Thank you for the corrections.


I'll admit that the singularity also creeps me out for personal reasons, so I may be arguing more from an emotional standpoint than a purely logical one.
Of course it does; it is a concept of creating intelligences greater than ourselves. It creeps everyone out. The thing which we, as a species, can hang our collective hats on is that we are the smartest on Earth and have used our intelligence to conquer just about every environment. A technological singularity may help our species advance via neurological enhancement and a merging with our own technology; but it may also lead to us being obsolete and left in the dust to stagnate under the watch of an asphyxiatingly benevolent AI.
Logged

Vector

  • Bay Watcher
    • View Profile
Re: The Technological Singularity thread: an explaination of concepts
« Reply #40 on: May 27, 2010, 03:28:28 pm »

Of course it does; it is a concept of creating intelligences greater than ourselves. It creeps everyone out. The thing which we, as a species, can hang our collective hats on is that we are the smartest on Earth and have used our intelligence to conquer just about every environment. A technological singularity may help our species advance via neurological enhancement and a merging with our own technology; but it may also lead to us being obsolete and left in the dust to stagnate under the watch of an asphyxiatingly benevolent AI.

... No, what creeps me out is that a guy I know is so into AI and computers that he claims he would, in fact, date one with sufficient processing capabilities.  Whenever I think about the singularity, I'm reminded of him and get a little Uncanny-Valley-style shiver.

I also don't like the idea of being out of a job because a computer is better at mathematics than I am.  I'm sorry, but I'd rather not even have computers that advanced, if it turns everyone's expertise into a hobby better fulfilled by a machine.  I know myself, and I need stuff to do, and I need a sense that that stuff is meaningful.
Logged
"The question of the usefulness of poetry arises only in periods of its decline, while in periods of its flowering, no one doubts its total uselessness." - Boris Pasternak

nonbinary/genderfluid/genderqueer renegade mathematician and mafia subforum limpet. please avoid quoting me.

pronouns: prefer neutral ones, others are fine. height: 5'3".

Armok

  • Bay Watcher
  • God of Blood
    • View Profile
Re: The Technological Singularity thread: an explaination of concepts
« Reply #41 on: May 27, 2010, 03:41:10 pm »

Of course it does; it is a concept of creating intelligences greater than ourselves. It creeps everyone out. The thing which we, as a species, can hang our collective hats on is that we are the smartest on Earth and have used our intelligence to conquer just about every environment. A technological singularity may help our species advance via neurological enhancement and a merging with our own technology; but it may also lead to us being obsolete and left in the dust to stagnate under the watch of an asphyxiatingly benevolent AI.

... No, what creeps me out is that a guy I know is so into AI and computers that he claims he would, in fact, date one with sufficient processing capabilities.
Assuming an AI that'd actually want to do stuff like that, which is highly unlikely, I'd totally do that. I don't see anything strange with it.
Logged
So says Armok, God of blood.
Sszsszssoo...
Sszsszssaaayysss...
III...

Il Palazzo

  • Bay Watcher
  • And lo, the Dude did abide. And it was good.
    • View Profile
Re: The Technological Singularity thread: an explaination of concepts
« Reply #42 on: May 27, 2010, 03:46:42 pm »

Armok, Vector's acquaintance, and an AI - a new love triangle. Unless the AI can multitask two guys at once.

Regarding the topic, wouldn't humans just pull the plug on such an AI as soon as they realised it's getting smarter than themselves? Humans don't seem like the kind of species who'd welcome their new AI overlords with flowers and cheers.
Logged

Sir Pseudonymous

  • Bay Watcher
    • View Profile
Re: The Technological Singularity thread: an explaination of concepts
« Reply #43 on: May 27, 2010, 03:52:47 pm »

I also don't like the idea of being out of a job because a computer is better at mathematics than I am.  I'm sorry, but I'd rather not even have computers that advanced, if it turns everyone's expertise into a hobby better fulfilled by a machine.  I know myself, and I need stuff to do, and I need a sense that that stuff is meaningful.
Wouldn't you rather be a brain in a jar playing Half Life 3 for eternity? ;D

I guarantee that HL3 will be used to sell the concept of being a brain in a jar. You know it's not going to roll around before we can do that. HL3, TF2:Brain-in-a-Jar Edition, CS:S on a simulated computer inside HL3, Portal 5, L4D10, and of course Halo 47 and Gears of Lol 30... :3

Of course it does; it is a concept of creating intelligences greater than ourselves. It creeps everyone out. The thing which we, as a species, can hang our collective hats on is that we are the smartest on Earth and have used our intelligence to conquer just about every environment. A technological singularity may help our species advance via neurological enhancement and a merging with our own technology; but it may also lead to us being obsolete and left in the dust to stagnate under the watch of an asphyxiatingly benevolent AI.
I feel reassured thinking about it. It's essentially the apotheosis of man: either we become gods, or we create one. What concerns me more is the theoretical universal singularity (where all matter and energy are compressed to a single point preceding another Big Bang), and wondering whether a nigh-omniscient entity could figure out a way to survive that...

The singularity will either create an immortal utopia or, drastically less likely, lead to the replacement of humanity with what can only be described as a living god, left to spread itself across the universe and hopefully torture foul xenos for shits and giggles. Actually, I think that's probably what would happen in the first case too: were we to encounter less advanced civilizations at that point, it's doubtful they'd have anything meaningful for us to learn or take from them, so we'd just fuck with their heads in the most creative ways possible... :D
Logged
I'm all for eating the heart of your enemies to gain their courage though.

Cthulhu

  • Bay Watcher
  • A squid
    • View Profile
Re: The Technological Singularity thread: an explaination of concepts
« Reply #44 on: May 27, 2010, 04:20:38 pm »

Of course it does; it is a concept of creating intelligences greater than ourselves. It creeps everyone out. The thing which we, as a species, can hang our collective hats on is that we are the smartest on Earth and have used our intelligence to conquer just about every environment. A technological singularity may help our species advance via neurological enhancement and a merging with our own technology; but it may also lead to us being obsolete and left in the dust to stagnate under the watch of an asphyxiatingly benevolent AI.

... No, what creeps me out is that a guy I know is so into AI and computers that he claims he would, in fact, date one with sufficient processing capabilities.
Assuming an AI that'd actually want to do stuff like that, which is highly unlikely, I'd totally do that. I don't see anything strange with it.

Armok and the Fake Girl.

Anyway, I expect all the other things we were promised before the singularity.  Flying cars, hoverboards, holodecks.
Logged
Shoes...