Bay 12 Games Forum


Author Topic: You wanna rescue the world?  (Read 16807 times)

Eagleon

  • Bay Watcher
Re: You wanna rescue the world?
« Reply #225 on: April 21, 2015, 06:39:58 pm »

Neural networks, genetic algorithms, and the like are what define the field of AI. What they lack is consciousness. The concept of a singularity doesn't necessarily imply we get a "person in a box" out of it, so asking for that is a red herring.

The main point of the singularity concept is that these sorts of things aren't linear at all. Say you build a machine that can read and semantically understand college-level textbooks. That would be a very useful machine. Making the machine will take a long time, but once you have it, you can pour all the books into it in basically no time at all. That is a discontinuous jump in what the machine is then capable of, and having the first machine capable of that will have massive flow-on effects through every field of human thought.

But as for asking why AI would be useful: you might as well ask why any human smarter than average is useful. Think about possible future AIs. Is the potential limit of AI smartness less than, equal to, or greater than that of an average human? There's nothing inherently special about the average intelligence of a human, so the limit of machine intelligence is very unlikely to fall exactly there. And given that it's potentially possible to build a computer with many more components than a human brain, the limit is probably above the normal limit for a human. So asking what good AI will do is like asking what's the point of having smarter-than-average people, when average people are perfectly sufficient.
No, you might not as well: if you point at a smart person and a 'dumb' person, put them in a field that still has some potential left for innovation, and ask them to get to work, moments of insight can and will come not from simply being smart (whatever that means - ask a comp-sci student to design a bridge some time), but rather from the experiences each person has in learning the material. Intelligence is not something that can be measured on a single scale. We are capable of intelligence because we can form shortcuts and discard information; we need cognitive blind spots to function and to produce new shortcuts. Our intelligence depends on our social connections as much as on our ability to analyze the world around us, and that's what makes it suited to solving human problems with any amount of rapid intuition.

It's not as simple as 'feed knowledge in slot A, get solution B.' For GA and NN designs, someone has to judge the solution against reality. The famous tank-lighting anecdote (a NN learns that photos shot at a certain time of day contain tanks, rather than visually recognizing any part of the camouflaged tank itself) is a real problem for designers of neural networks - signal inflow still, for the most part, has to be designed for a set of known parameters, because neural networks are incredibly susceptible to situational bias. Signal output has to fit with physical reality. Now start talking about taking unregulated input from human text, which may contain humor or plain inaccuracy, plus the whole gamut of action available to solve a problem in the real world, and the situation goes too far out of control to consider for serious design beyond basic optimizations like the antenna, or problems with perfectly known parameters and outcomes (there are very few).
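The situational-bias failure in the tank anecdote can be reproduced in miniature. Here's a toy sketch (my own synthetic data and feature names, not anything from the thread): a linear classifier trained on data where "brightness" is spuriously correlated with the label learns the shortcut, then collapses toward chance once the correlation is removed.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_data(n, spurious_correlated):
    """Two features per 'photo': a weak real signal and a strong
    shortcut (overall brightness, standing in for time of day)."""
    y = rng.integers(0, 2, n)
    real = y + rng.normal(0, 2.0, n)        # weakly informative feature
    if spurious_correlated:
        bright = y + rng.normal(0, 0.1, n)  # almost perfectly predictive
    else:
        bright = rng.integers(0, 2, n) + rng.normal(0, 0.1, n)  # decoupled
    return np.column_stack([real, bright]), y

def train_logreg(X, y, lr=0.5, steps=2000):
    # Plain logistic regression by gradient descent.
    w, b = np.zeros(X.shape[1]), 0.0
    for _ in range(steps):
        z = np.clip(X @ w + b, -30, 30)     # clip to avoid exp overflow
        p = 1 / (1 + np.exp(-z))
        g = p - y
        w -= lr * (X.T @ g) / len(y)
        b -= lr * g.mean()
    return w, b

def accuracy(w, b, X, y):
    return (((X @ w + b) > 0) == y).mean()

Xtr, ytr = make_data(2000, spurious_correlated=True)
Xte, yte = make_data(2000, spurious_correlated=False)
w, b = train_logreg(Xtr, ytr)
print("train acc:", accuracy(w, b, Xtr, ytr))        # near-perfect via the shortcut
print("shifted-test acc:", accuracy(w, b, Xte, yte)) # collapses toward chance
```

The model looks excellent on its own data; the "judge against reality" step only catches the problem when someone tests it under conditions the training set never covered.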

The reason we can deal with it isn't even consciousness - it's cultural context and the decades of training we've had from neural circuitry that is incredibly efficient by comparison and especially well suited to social learning. A neural network without that is going to be very, very confused by your barrage of obtuse chemical engineering texts. I think it does imply a person (or persons, as with angle's concept) in a box situation. I would ask for that if I were still working with AI and you told me to make something that could take every college major and make sense of it all. I'd still think you were insane, because the AI probably would not want to deal with that many linear algebra jokes.
« Last Edit: April 21, 2015, 06:47:12 pm by Eagleon »
Logged
Agora: open-source, next-gen online discussions with formal outcomes!
Music, Ballpoint
Support 100% Emigration, Everyone Walking Around Confused Forever 2044

Eagleon

Re: You wanna rescue the world?
« Reply #226 on: April 21, 2015, 11:43:01 pm »

Eagleon, you mentioned that you worked with AI before. I'm curious now, to what extent did you get into the field? Mostly to gauge exactly where your viewpoint is coming from.
Independent research and tests, programming some of my own ideas on growing and decaying shortcut structures in limited-connectivity neural networks. I wanted to see what kind of optimizations could be had by abandoning the typical fully connected network in favor of irreversibly abstracting heavily saturated pathways into their own nodes, up to treating them as single neurons when it came time to introduce scaling degrees of atrophy. I never got anywhere significant or published, but I learned a fair bit about how things were being put together. There were a lot of papers I couldn't get access to at the time, so I sort of moved on to other things. This was around six years ago.
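For flavor, here's a loose, hypothetical sketch of one piece of that general direction - plain magnitude pruning in NumPy, which approximates a limited-connectivity network by keeping only the heavily weighted ("saturated") connections. This is my own illustration with made-up numbers, not the original research code:

```python
import numpy as np

rng = np.random.default_rng(1)

# One fully connected layer with a few dominant ("saturated") weights
# buried among many weak connections.
W = rng.normal(0, 0.05, (8, 8))
W[1, 3] = W[4, 6] = W[2, 0] = 2.0   # heavily used pathways

def prune(W, keep_frac):
    """Magnitude pruning: zero out all but the strongest weights,
    approximating a limited-connectivity network."""
    k = int(W.size * keep_frac)
    thresh = np.sort(np.abs(W).ravel())[-k]   # k-th largest magnitude
    return np.where(np.abs(W) >= thresh, W, 0.0)

Wp = prune(W, keep_frac=0.1)        # keep only the top 10% of weights
x = rng.normal(0, 1, 8)
full, sparse = W @ x, Wp @ x
err = np.linalg.norm(full - sparse) / np.linalg.norm(full)
print(f"relative output error after pruning 90% of weights: {err:.3f}")
```

Because the saturated pathways carry almost all of the signal, discarding the weak connections changes the output only slightly - the intuition behind collapsing dominant pathways into simpler structures.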

Even being as out of touch as I am, I still maintain that our knowledge of science is communicated in a language that depends on enormous amounts of cultural understanding to be parseable, and even then there are terminology clashes between the sciences that need to be reconciled before any kind of automated system can simply run through it all and jot down everything we've done. The closest I've seen to that is Watson, some years after I stopped, which last I heard was being developed for medical use by nurses. Beyond that, just having the knowledge is not enough to present new solutions - you can retrieve data using all kinds of systems we've developed (Google searchese, database queries, etc.), but it's a little like the difference between a prosthetic that learns you and your having to learn the prosthetic: the independent movement and reflexes are not there.

You can have it transform the data to infer its consequences and present that as new data, and that's useful, certainly. We've made awesome progress with projects like these. Both human and computer-assisted big data are undeniably awesome; Wolfram, that photoelectric material search a while back, Eyewire, Folding@Home, etc. But there are still problems for which we have no solution that we can hand to an optimization algorithm. Humans will continue to provide those solutions until we make our alien companions, and there's no telling whether those companions will behave themselves enough to do so any more effectively than we could.
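To make the "apply an optimization algorithm" point concrete: a minimal genetic algorithm only works because the fitness function - the "judge against reality" step from earlier - is fully specified up front. A toy sketch on the textbook OneMax problem (all names and numbers are illustrative):

```python
import random

random.seed(0)
TARGET_LEN = 32

def fitness(bits):
    """The 'judge against reality' step: here reality is fully known
    (count of 1-bits), which is exactly what hard problems lack."""
    return sum(bits)

def mutate(bits, rate=0.02):
    # Flip each bit with small probability.
    return [b ^ (random.random() < rate) for b in bits]

def crossover(a, b):
    cut = random.randrange(TARGET_LEN)
    return a[:cut] + b[cut:]

pop = [[random.randint(0, 1) for _ in range(TARGET_LEN)] for _ in range(40)]
for gen in range(200):
    pop.sort(key=fitness, reverse=True)
    if fitness(pop[0]) == TARGET_LEN:
        break
    parents = pop[:10]                      # truncation selection, elitist
    pop = parents + [mutate(crossover(random.choice(parents),
                                      random.choice(parents)))
                     for _ in range(30)]

best = max(pop, key=fitness)
print("best fitness:", fitness(best), "of", TARGET_LEN)
```

Swap in a problem where `fitness` can't be written down - "make sense of every college major" - and the whole loop has nothing to optimize against.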

Eagleon

Re: You wanna rescue the world?
« Reply #227 on: April 22, 2015, 12:10:53 am »

Well, I think we have some other techniques beyond neural networking. Hidden Markov Models spring to mind, actually. Otherwise, I do see your point. My goal is mostly to... well, make full-blown artificial consciousness and see where that ends up. Probably infeasible if not impossible, but it's worth trying.
Go for it. My research interests were more or less shaped by the fact that I didn't have access to robotics - otherwise my theory leans towards intelligence being largely governed by the sensory, kinesthetic, and internal inputs it has access to, as well as the integration it can accomplish between them, with the output of that integration itself treated as another input, and so on. My idea was that the closer we can make its embodiment like humanity's, the more likely it is to 'snap' into our cultural understanding and start feeding on the empathy we give it, start trying to do work, and eventually begin to do so in ways we can encourage. If that sounds like there might be ethical problems involved, you'll have some understanding of another reason why I was hesitant to make it my career - I don't think we can have nurturable AI (a concept I haven't really seen talked about) without some kind of nurturing-comfort response equivalent, for instance, which probably requires discomfort, which probably requires pain. Being the guy who made the screaming robot babies probably wasn't going to endear me to the community.
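As an aside on the Hidden Markov Models mentioned above: the core decoding step fits in a few lines. A minimal Viterbi sketch on a toy two-state weather model (the transition and emission numbers are illustrative, not from any real system):

```python
import numpy as np

# Toy HMM: two hidden states, three observations (walk, shop, clean).
states = ["Rainy", "Sunny"]
start = np.array([0.6, 0.4])
trans = np.array([[0.7, 0.3],      # P(next state | Rainy)
                  [0.4, 0.6]])     # P(next state | Sunny)
emit = np.array([[0.1, 0.4, 0.5],  # P(obs | Rainy): walk, shop, clean
                 [0.6, 0.3, 0.1]]) # P(obs | Sunny)

def viterbi(obs):
    """Most likely hidden-state sequence for a list of observation indices."""
    v = start * emit[:, obs[0]]            # best path probability per state
    back = []                              # backpointers per step
    for o in obs[1:]:
        scores = v[:, None] * trans        # scores[i, j]: come from i, go to j
        back.append(scores.argmax(axis=0))
        v = scores.max(axis=0) * emit[:, o]
    path = [int(v.argmax())]
    for ptr in reversed(back):             # walk the backpointers
        path.append(int(ptr[path[-1]]))
    return [states[s] for s in reversed(path)]

print(viterbi([0, 1, 2]))  # → ['Sunny', 'Rainy', 'Rainy']
```

The appeal for problems like this is that the model's assumptions are explicit and checkable, which is a very different regime from open-ended artificial consciousness.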

FArgHalfnr

  • Bay Watcher
Re: You wanna rescue the world?
« Reply #228 on: April 27, 2015, 11:05:30 am »

*Obvious attempt at re-railing/resurrecting the thread beginning*
It was already mentioned that even if we had the perfect AI, we wouldn't listen to it. What could be done to fix this issue? At this point our problem is not that we don't know how to fix things, it's that we aren't willing to make the change. Any suggestions? Personally I'd try tweaking the school system to favor greater participation in politics and better awareness of the world's issues. The problem with this is that it would take too long to see any noticeable change, and also that we have no control over how schools teach things. Preferably we'd need something that can be done by anyone.
*Obvious attempt at re-railing/resurrecting the thread ending*
FArgHalfnr for the #1 eldrich monstrocity.