Bay 12 Games Forum


Author Topic: Artificial Intelligence Thread  (Read 3665 times)

Sergarr

  • Bay Watcher
  • (9) airheaded baka (9)
Re: Artificial Intelligence Thread
« Reply #30 on: May 01, 2017, 09:29:49 am »

Quote
It's not even artificial skill. It's statistical analysis. Yes, there are a lot of areas within the field of "machine intelligence" that go beyond this (genetic algorithms at least try something different, even if local optima produce endless pitfalls), but so much of this field is just altering particular values across several probability distributions such that they maximize the probability that they will produce the sample that you provided.

You can recurse it all you want (running these algorithms over a sample of algorithms applied to a sample space), but it's all just the same thing. Not to say that it isn't incredibly effective or useful, but that I don't see this being the route to AI as is commonly defined. There's something missing from this approach.
I'm... not sure what you're talking about here. I've read quite a few books on machine learning in my free time, and the words "probability distribution" come up only with Bayesian methods and with the "distribution estimation" branch, which is not quite in the mainstream of current machine learning applications.
Logged
._.

Reelya

  • Bay Watcher
Re: Artificial Intelligence Thread
« Reply #31 on: May 01, 2017, 09:33:19 am »

Well, where do "qualia" come from in the neural networks in our heads? The network must manufacture the qualia, since qualia have no meaning or existence outside of the network connections themselves.

My theory is that consciousness arose because it's the cheapest and simplest way to motivate a neural network to be goal-driven. It's more or less impossible to hard-code rules for all possible situations and stimuli, so organic systems hit on consciousness as a driving factor behind network design; large hard-coded systems are simply inefficient by comparison.

Basically if you build a large enough neural network and put it under the same sorts of optimization pressures as real brains face, then Occam's Razor would suggest that it's going to hit on the same solutions to the same problems, regardless of what we tell it to do.
« Last Edit: May 01, 2017, 09:34:54 am by Reelya »
Logged

McTraveller

  • Bay Watcher
  • This text isn't very personal.
Re: Artificial Intelligence Thread
« Reply #32 on: May 01, 2017, 09:50:05 am »

Quote
My theory is that consciousness arose because it's the cheapest and simplest way to motivate a neural network to be goal-driven. It's more or less impossible to hard-code rules for all possible situations and stimuli, so organic systems hit on consciousness as a driving factor behind network design; large hard-coded systems are simply inefficient by comparison.
I'll give you that one - one of the biggest hurdles I see with our current AI tech is energy consumption. I mean, the human brain is astonishing considering you can run an entire human on an average of less than 100 W (and that's using a "generous" 2000 kcal/day energy budget).
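
(Quick sanity check on that number in throwaway Python - the only inputs are the 2000 kcal/day figure and standard unit conversions:)
Code: [Select]
# Convert a daily food-energy budget into an average power draw.
KCAL_TO_J = 4184             # 1 kcal = 4184 joules
SEC_PER_DAY = 24 * 60 * 60   # 86,400 seconds per day

daily_kcal = 2000
watts = daily_kcal * KCAL_TO_J / SEC_PER_DAY
print(f"{watts:.1f} W")      # ~96.9 W, i.e. just under 100 W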

So to continue more on that thread - I'm curious as to how many current "powerful" AIs actually have the capability to reconfigure themselves? I mean, we set up things like how many layers and connections and what operations are available - have there been any setups where a "second" network is put on top of the first one so that it reconfigures the first one to make it better?  (And maybe even have two of those, working on each other?)
Logged
This product contains deoxyribonucleic acid which is known to the State of California to cause cancer, reproductive harm, and other health issues.

Sergarr

  • Bay Watcher
  • (9) airheaded baka (9)
Re: Artificial Intelligence Thread
« Reply #33 on: May 01, 2017, 10:15:40 am »

Quote
My theory is that consciousness arose because it's the cheapest and simplest way to motivate a neural network to be goal-driven. It's more or less impossible to hard-code rules for all possible situations and stimuli, so organic systems hit on consciousness as a driving factor behind network design; large hard-coded systems are simply inefficient by comparison.
I'll give you that one - one of the biggest hurdles I see with our current AI tech is energy consumption. I mean, the human brain is astonishing considering you can run an entire human on an average of less than 100 W (and that's using a "generous" 2000 kcal/day energy budget).

So to continue more on that thread - I'm curious as to how many current "powerful" AIs actually have the capability to reconfigure themselves? I mean, we set up things like how many layers and connections and what operations are available - have there been any setups where a "second" network is put on top of the first one so that it reconfigures the first one to make it better?  (And maybe even have two of those, working on each other?)
There have been some recent successful experiments in making a neural network whose configuration is defined by another neural network. Can't remember where I saw it, though. But surprisingly enough, there isn't much work in this area. I think it's probably because of the currently isolated nature of AI research, where you have a "1 problem = 1 neural network" paradigm, which prevents the accumulation of the data necessary to train the "boss" neural network - you need much more data for that.
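
(I think the term for this is "hypernetwork" - one network's outputs become another network's weights. A bare-bones forward-pass sketch in Python with made-up layer sizes, just to show the plumbing; it's my own illustration, not the experiment I'm half-remembering:)
Code: [Select]
import numpy as np

rng = np.random.default_rng(0)
EMB, IN, HID, OUT = 4, 3, 5, 2   # made-up sizes

# Hypernetwork parameters: map a task embedding to the main net's weights.
W_hyper = rng.normal(size=(EMB, IN * HID + HID * OUT)) * 0.1

def main_net_weights(task_embedding):
    """Hypernetwork forward pass: produce the main net's weight matrices."""
    flat = task_embedding @ W_hyper
    W1 = flat[:IN * HID].reshape(IN, HID)
    W2 = flat[IN * HID:].reshape(HID, OUT)
    return W1, W2

def main_net(x, W1, W2):
    """Main net forward pass using the generated weights."""
    return np.tanh(x @ W1) @ W2

task = rng.normal(size=EMB)          # each "task" gets its own weights
W1, W2 = main_net_weights(task)
print(main_net(rng.normal(size=IN), W1, W2))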

Neural networks are actually somewhat difficult to work with in those settings, because they take a shit-ton of time to train, and in lots of situations both the training time and its results can be pretty heavily affected by the randomness inherent in most of the currently used training methods.

It'd be easier to use decision-tree methods, like random forests or decision jungles, because they take an easily pre-determined time to build, but they're currently significantly less accurate, and there aren't a lot of people researching them. Which is a shame, because they have a lot of neat properties that neural networks fundamentally lack, such as the ability to process any data without preparation procedures, to work around missing data, and to run basically instantaneously.
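
(The "no preparation" point in action, sketched with scikit-learn, assuming it's installed - raw features on wildly different scales go straight in, no normalization step:)
Code: [Select]
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)

# Features on wildly different scales; a neural net would want them
# normalized first, but tree splits are scale-invariant.
X = np.column_stack([
    rng.uniform(0, 1, 500),      # small scale
    rng.uniform(0, 1e6, 500),    # huge scale
])
y = np.sin(X[:, 0] * 6) + X[:, 1] / 1e6

model = RandomForestRegressor(n_estimators=100, random_state=0)
model.fit(X, y)                  # build time is predictable up front
print(model.predict(X[:3]))      # prediction is near-instantaneous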
Logged
._.

Sergarr

  • Bay Watcher
  • (9) airheaded baka (9)
Re: Artificial Intelligence Thread
« Reply #34 on: May 01, 2017, 10:58:23 am »

Quote
It's not even artificial skill. It's statistical analysis. Yes, there are a lot of areas within the field of "machine intelligence" that go beyond this (genetic algorithms at least try something different, even if local optima produce endless pitfalls), but so much of this field is just altering particular values across several probability distributions such that they maximize the probability that they will produce the sample that you provided.

You can recurse it all you want (running these algorithms over a sample of algorithms applied to a sample space), but it's all just the same thing. Not to say that it isn't incredibly effective or useful, but that I don't see this being the route to AI as is commonly defined. There's something missing from this approach.
I'm... not sure what you're talking about here. I've read quite a few books on machine learning in my free time, and the words "probability distribution" come up only with Bayesian methods and with the "distribution estimation" branch, which is not quite in the mainstream of current machine learning applications.
Really? Unfortunately I can't find the paper that I most recently worked with (it was titled the "HOPE: Hybrid Orthogonalization and... somethingorother" algorithm), but it was most assuredly using maximum likelihood. It included methods to translate it to neural networks, and it was applied to the analysis of several audio samples to yield faster results than other methods. The "probability distribution" part of the model is that you could just pick which distribution to use, since the algorithm was general enough to apply across any distribution in the exponential family. It's just what you assume the "signal" part of the data follows.

When I look up "unsupervised learning" on Wikipedia, the first thing that comes up in the description of how it's done is "method of moments", which is most certainly a probability distribution thing. When I look up "supervised learning" I'm met with a discussion of "bias and variance", which is strictly an issue regarding estimators of probability distributions.

Hell, the entire notion of supervised learning, this "function that best approximates", is just a probability distribution. Your true function may be deterministic, sure, but you're not finding that. You're finding an estimate, which is a probability distribution. They might not necessarily use those words, but that's what they mean.

I look up "reinforcement learning" and I'm greeted with "fixed initial distribution" and expected value.


Seriously, where did you get that probability distributions are barely used in the field? It's literally the entirety of the field.
Misrepresenting what I've said, you are. I did not say that they weren't used in the field; I said that I did not understand what you said, here:
Quote
so much of this field is just altering particular values across several probability distributions such that they maximize the probability that they will produce the sample that you provided
I don't understand what you said here, because I've never seen anything even close to that description in the books I've read.

Like I said, the words "probability distribution" don't come up all that often. In the justifications of why the methods work, maybe, but in the methods' actual practical implementations, only Bayesian methods use probabilities.
Logged
._.

Sergarr

  • Bay Watcher
  • (9) airheaded baka (9)
Re: Artificial Intelligence Thread
« Reply #35 on: May 01, 2017, 11:05:07 am »

So, what do you think is missing from that approach? Because to me it sounds vague enough to describe almost anything.
Logged
._.

Sergarr

  • Bay Watcher
  • (9) airheaded baka (9)
Re: Artificial Intelligence Thread
« Reply #36 on: May 01, 2017, 11:32:51 am »

A neural network's only assumption is that the output variable is a continuous function of the input variables. It's a universal approximator: given enough data points, it'll approximate any continuous function to any given degree of accuracy.

Antipodally, a decision tree's sole assumption is that the output variable is a discrete step-wise function of the input variables. And, as with neural networks, they will approximate said function, given enough data. Though in a significantly different manner, heh.

I dunno about you, but to me this doesn't seem like "making too many assumptions".
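
(To make the contrast concrete - a toy sketch, assuming scikit-learn is installed; the tree's prediction is a step function, while the network's prediction varies continuously:)
Code: [Select]
import numpy as np
from sklearn.tree import DecisionTreeRegressor
from sklearn.neural_network import MLPRegressor

X = np.linspace(0, 2 * np.pi, 200).reshape(-1, 1)
y = np.sin(X).ravel()

tree = DecisionTreeRegressor(max_depth=3).fit(X, y)    # step-wise fit
mlp = MLPRegressor(hidden_layer_sizes=(32,), max_iter=5000,
                   random_state=0).fit(X, y)           # continuous fit

x0 = np.array([[1.0], [1.01]])
print(tree.predict(x0))  # typically identical (same leaf): a step function
print(mlp.predict(x0))   # varies continuously with the input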
Logged
._.

Sergarr

  • Bay Watcher
  • (9) airheaded baka (9)
Re: Artificial Intelligence Thread
« Reply #37 on: May 01, 2017, 11:42:41 am »

Most of the functions I've encountered in physics were continuous. Maybe if you start to assess various exotic pathological functions, like the one that's equal to 0 for all rational numbers and to 1 for all irrational ones, you run into discontinuity everywhere, but I doubt that's in any way relevant to our natural intelligence.
Logged
._.

Sergarr

  • Bay Watcher
  • (9) airheaded baka (9)
Re: Artificial Intelligence Thread
« Reply #38 on: May 01, 2017, 02:46:37 pm »

Quote
Okay, because this is pertinent - the input doesn't have to be continuous. That theorem simply states that neural networks can approximate any continuous function, when given appropriate parameters. It says nothing about the ability to learn said parameters.
Right, now find where I said that the input has to be continuous:
Quote
A neural network's only assumption is that the output variable is a continuous function of the input variables.

Quote
Being able to represent and being able to find a representation are two very, very, very different things.
Improving the ability to do the latter is what deep learning is all about. And it's fairly successful at that, with recent achievements like conquering Go and cutting a Google data center's cooling bill by 40%.

Quote
As for the discontinuity issue, a lot of things that are "learned" are discontinuous. Discrete sets are discontinuous. The theorem does not specify continuous except over a zero set of points - it simply states continuous. As far as I can tell that means even a simple jump discontinuity invalidates the theorem.
That's why I specified the other algorithm that's capable of learning discrete things.

Quote
Regardless of all of this, I still said that this doesn't feel like the main issue, and might not even be an issue at all.
Maybe the issue is that you don't feel like all these algorithms work the way human intelligence does. Which is correct, because the way humans work is closer to reinforcement learning.
Logged
._.

Sergarr

  • Bay Watcher
  • (9) airheaded baka (9)
Re: Artificial Intelligence Thread
« Reply #39 on: May 01, 2017, 04:05:26 pm »

Well, this "shifting weights around" produces functional intelligence. It's not like natural processes do anything fundamentally different, right?
Logged
._.

Sergarr

  • Bay Watcher
  • (9) airheaded baka (9)
Re: Artificial Intelligence Thread
« Reply #40 on: May 01, 2017, 05:31:14 pm »

No, they don't "create" new rules. Brains do not have rules inside of them; they have neurons connected to each other with various degrees of strength. It's shifting the strength of those connections (via Hebbian learning, reinforcement learning and undoubtedly many other processes) that creates the illusion of those "new rules" appearing, the illusion of rationality and logic. But it's just an illusion.

EDIT: To be more accurate, it's not an "illusion" in the sense that it doesn't exist; it's an "illusion" in the sense that it's an emergent behavior, and the "shifting weights around" is a more fundamental procedure that, under certain conditions, creates something to the effect of a rule-based system.
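
(If "shifting weights creates rules" sounds hand-wavy, here's the simplest possible Hebbian update in Python - a toy illustration only, real synaptic plasticity is vastly more complicated:)
Code: [Select]
import numpy as np

# "Cells that fire together wire together": strengthen co-active pairs.
rng = np.random.default_rng(0)
n = 8
W = np.zeros((n, n))   # connection strengths between n neurons
eta = 0.1              # learning rate

for _ in range(100):
    x = (rng.random(n) < 0.3).astype(float)  # random binary activity
    W += eta * np.outer(x, x)                # Hebbian weight update
    np.fill_diagonal(W, 0.0)                 # no self-connections

# No explicit rules anywhere, yet the weights now encode which neurons
# tended to be active together.
print(W.round(2))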
« Last Edit: May 01, 2017, 05:33:08 pm by Sergarr »
Logged
._.

Reelya

  • Bay Watcher
Re: Artificial Intelligence Thread
« Reply #41 on: May 01, 2017, 06:07:01 pm »

No, that says that brains aren't backpropagation networks. It's the choice of learning algorithm that's in question there.

But backpropagation isn't really built into those networks either; it's just one choice of learning scheme, and you can in fact train neural networks completely without it. So it's a little arbitrary to claim that all possible neural networks can't operate like the brain just because one current implementation uses a non-brain-like learning method.
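
(Case in point: a tiny network trained with a dumb perturb-and-keep hill-climb instead of backprop. A made-up sketch for illustration, not a claim about how brains actually learn:)
Code: [Select]
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 2))
y = (X[:, 0] * X[:, 1] > 0).astype(float)   # XOR-like target

def forward(W1, W2, x):
    return 1 / (1 + np.exp(-(np.tanh(x @ W1) @ W2)))

def loss(W1, W2):
    return np.mean((forward(W1, W2, X) - y) ** 2)

W1, W2 = rng.normal(size=(2, 8)), rng.normal(size=8)
best = loss(W1, W2)
for _ in range(3000):                        # perturb, keep if better
    dW1 = rng.normal(size=W1.shape) * 0.1
    dW2 = rng.normal(size=W2.shape) * 0.1
    cand = loss(W1 + dW1, W2 + dW2)
    if cand < best:
        W1, W2, best = W1 + dW1, W2 + dW2, cand
print(f"final loss: {best:.3f}")             # no gradients used anywhere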
« Last Edit: May 01, 2017, 06:16:46 pm by Reelya »
Logged

Sergarr

  • Bay Watcher
  • (9) airheaded baka (9)
Re: Artificial Intelligence Thread
« Reply #42 on: May 01, 2017, 06:08:00 pm »

Quote
I'm not convinced. Reductionist-based psychology has never been of particular interest to me in regards to AI, because it just reduces the whole problem down to "once the hardware is good enough", which is just a cop-out.
It's mostly about software, though. Did you miss the whole "deep learning" thing being specifically about software breakthroughs? Sure, hardware improvements matter, too, but the fundamental improvements are made in the software department.

Quote
In regards to "shifting weights", I was talking about reinforcement learning, not artificial neural nets.
Those are not really different things. Reinforcement learning can utilize neural networks; AlphaGo is made that way. Besides, artificial neural networks are also based entirely around "shifting weights", which doesn't prevent them from being universal approximators or anything.

Quote
Additionally, to quote Wikipedia:
Quote
Aside from their utility, a fundamental objection to artificial neural networks is that they fail to reflect how real neurons function. Back propagation is at the heart of most artificial neural networks and not only is there no evidence of any such mechanism in natural neural networks, it seems to contradict the fundamental principle of real neurons that information can only flow forward along the axon. How information is coded by real neurons is not yet known. What is known is that sensor neurons fire action potentials more frequently with sensor activation and muscle cells pull more strongly when their associated motor neurons receive action potentials more frequently. Other than the simplest case of just relaying information from a sensor neuron to a motor neuron almost nothing of the underlying general principles of how information is handled by real neural networks is known.
So saying that the brain is just a neural network seems fundamentally incorrect.

Right, I forgot that there are also many other different elements inside the brain, but, fundamentally speaking, learning is physically represented by shifting the strengths of the connections between those brain-elements, which is completely equivalent to shifting weights around - not in the bog-standard artificial neural network, yes, but there exists a mathematical representation where you can represent it that way. I think it was called phys-something, I don't remember it clearly...
Logged
._.

Reelya

  • Bay Watcher
Re: Artificial Intelligence Thread
« Reply #43 on: May 01, 2017, 06:20:21 pm »

They have no upper bound on the size or complexity of the networks, so why would there be some upper bound on their efficacy? The human brain might actually be more constrained than simulated networks in the long run.

Sergarr

  • Bay Watcher
  • (9) airheaded baka (9)
Re: Artificial Intelligence Thread
« Reply #44 on: May 01, 2017, 06:40:52 pm »

Quote
Why do people keep acting like I'm in some way attacking the efficacy or validity of these things? They work great. I'm just skeptical whether they're an approach that'll lead to general-intelligence AI.
Well, you'll have to define what "general-intelligence AI" means, first, before we can say that.

Quote
But no, whenever I talk apparently everyone has to go into condescending "maybe if we explain it slowly he'll understand" mode as if I don't understand. I've seen nothing in the arguments made here that lead me to believe that deep learning/artificial neuron networks will actually lead to general-intelligence AI. I'm not saying they don't work. I'm saying that they have an upper bound on their efficacy, and it's lower than what people would like.
K'.

Quote
Then prove it. These are mathematical concepts, after all, so it's entirely within the scope of the discussion to demand a proof. Prove to me that methods other than stochastic gradient descent lead to convergence to the solution.
Random permutation of weights + batch gradient descent will eventually reach the best possible solution for a given network, after you explore every local maximum/minimum (depending on which way you go along the gradient), and since neural networks are universal approximators, the result can be made as close to reality as possible. Given enough data, of course.
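
(Here's the idea as a throwaway sketch - random restarts plus plain gradient descent on a 1-D toy loss with several local minima; everything in it is made up for illustration:)
Code: [Select]
import numpy as np

def f(w):     # toy loss with several local minima
    return np.sin(3 * w) + 0.1 * w ** 2

def grad(w):  # its derivative
    return 3 * np.cos(3 * w) + 0.2 * w

rng = np.random.default_rng(0)
best_w, best_loss = None, np.inf

for _ in range(20):                  # random restarts
    w = rng.uniform(-10, 10)         # random initial "weight"
    for _ in range(1000):            # plain gradient descent
        w -= 0.01 * grad(w)
    if f(w) < best_loss:
        best_w, best_loss = w, f(w)

print(f"best w = {best_w:.3f}, loss = {best_loss:.3f}")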

Quote
Or rather, prove my thing that I edited into the remark above - prove to me that artificial neural networks and deep learning exhibit mathematical chaos.
Different starting weights can, via the magic of gradient descent, result in reaching completely different solutions to the problem. Though, that's probably not what you want.


Decision trees could actually be closer to what you want, because they exhibit a great degree of instability. I forget where I saw it, but there was an example comparing a decision tree built on a dataset with one built on the same dataset minus a single data point, and the two trees were almost completely different. Though in context that was perceived as a bad thing, since it meant you couldn't rely on decision trees to give you an interpretation of the underlying phenomenon that wouldn't completely change with every new observation.
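
(This is easy to try yourself - a sketch with scikit-learn, again purely illustrative: drop one point, refit, and see how far the fitted function moves:)
Code: [Select]
import numpy as np
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(0)
X = rng.uniform(0, 10, (80, 1))
y = np.sin(X.ravel()) + rng.normal(0, 0.3, 80)

full = DecisionTreeRegressor(random_state=0).fit(X, y)
minus_one = DecisionTreeRegressor(random_state=0).fit(X[1:], y[1:])

grid = np.linspace(0, 10, 200).reshape(-1, 1)
diff = np.abs(full.predict(grid) - minus_one.predict(grid))
print(f"max prediction change after removing one point: {diff.max():.3f}")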
Logged
._.