Bay 12 Games Forum


Author Topic: Artificial Intelligence Thread  (Read 3572 times)

Sergarr

  • Bay Watcher
  • (9) airheaded baka (9)
Re: Artificial Intelligence Thread
« Reply #45 on: May 01, 2017, 06:55:29 pm »

You said "stochastic gradient descent". That's a specific variant of batch gradient descent: it updates faster but adds randomness, which is somewhat of a downside, although it also helps it break out of local minima. I thought you'd added "stochastic" to point at that specific variant of gradient descent.
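
To make the difference concrete, here's a quick Python sketch I threw together (toy least-squares problem; the data, learning rates, and function names are all my own illustration, not from any paper):

Code:
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))            # 100 samples, 3 features
true_w = np.array([2.0, -1.0, 0.5])
y = X @ true_w + 0.1 * rng.normal(size=100)

def batch_gd(X, y, lr=0.1, epochs=200):
    # one update per pass, from the gradient over the whole dataset
    w = np.zeros(X.shape[1])
    for _ in range(epochs):
        w -= lr * X.T @ (X @ w - y) / len(y)
    return w

def sgd(X, y, lr=0.01, epochs=200):
    # one update per sample: cheaper, noisier steps; the noise is
    # what can knock the iterate out of shallow local minima
    w = np.zeros(X.shape[1])
    for _ in range(epochs):
        for i in rng.permutation(len(y)):
            w -= lr * X[i] * (X[i] @ w - y[i])
    return w

print(batch_gd(X, y))   # both should land near true_w
print(sgd(X, y))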

Regardless of that, random perturbation of the weights will, given enough time, get as close to the global solution as you want, though it would take one hell of a long time. The strict mathematical proof for the case of simple one-layer neural networks, called "perceptrons", should be in the book by Rosenblatt, "The perceptron: A probabilistic model for information storage and organization in the brain", 1958. I think I had a copy of that book somewhere on my computer, but I'm not sure it survived the OS reinstall. If you want, I can search for it.
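
For a feel of the brute-force point, here's a toy random-search sketch (perturb the weights, keep any candidate at least as good; this is my own illustration, not the procedure from Rosenblatt's book):

Code:
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(50, 2))
y = (X @ np.array([1.0, -2.0]) > 0).astype(int)   # separable toy labels

def mistakes(w):
    return int(np.sum(((X @ w) > 0).astype(int) != y))

w = np.zeros(2)
best = mistakes(w)
for step in range(100_000):
    candidate = w + rng.normal(scale=0.5, size=2)  # random perturbation
    e = mistakes(candidate)
    if e <= best:             # keep anything at least as good
        w, best = candidate, e
    if best == 0:
        break

print(step, best, w)          # hits 0 mistakes quickly on this easy problem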

Sergarr

  • Bay Watcher
  • (9) airheaded baka (9)
Re: Artificial Intelligence Thread
« Reply #46 on: May 01, 2017, 07:01:04 pm »

I hope it's the right one, because I may not have remembered its name correctly. Does it have like 500+ pages, with detailed illustrations? Because the one I was thinking of should have that.

Sergarr

  • Bay Watcher
  • (9) airheaded baka (9)
Re: Artificial Intelligence Thread
« Reply #47 on: May 01, 2017, 07:05:32 pm »

Shit, it's the wrong one. I completely forgot that there was also an article about it (the perceptron), and mistakenly copied its date and title from Wikipedia; the actual book is named, quite confusingly, "Principles of Neurodynamics". My bad, this is the right one.

EDIT: Hopefully I've linked it the right way.

Sergarr

  • Bay Watcher
  • (9) airheaded baka (9)
Re: Artificial Intelligence Thread
« Reply #48 on: May 01, 2017, 07:10:34 pm »

There are actually a lot of convergence theorems in that book, starting from something that resembles gradient descent and gradually weakening the conditions. The part I was thinking of should be on page 117 and onward.
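
For reference, the classic error-correction procedure those theorems cover boils down to something like this (my own minimal sketch of the standard perceptron rule):

Code:
import numpy as np

rng = np.random.default_rng(2)
X = rng.normal(size=(100, 2))
y = np.where(X @ np.array([1.0, 1.0]) > 0, 1, -1)   # separable data

w, b = np.zeros(2), 0.0
for _ in range(100):                  # passes over the data
    errors = 0
    for xi, yi in zip(X, y):
        if yi * (xi @ w + b) <= 0:    # misclassified (or on the boundary)
            w, b = w + yi * xi, b + yi
            errors += 1
    if errors == 0:                   # the convergence theorem says this
        break                         # happens in finitely many updates
print(w, b)                           # whenever the data are separable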

Sergarr

  • Bay Watcher
  • (9) airheaded baka (9)
Re: Artificial Intelligence Thread
« Reply #49 on: May 01, 2017, 08:20:43 pm »

It's actually a pretty important book overall, since it's AFAIK the first detailed written description in history of a neural network and its principles of operation.

There was somewhat of a rebuttal a few years later, a very mathematical book called "Perceptrons" by Minsky and Papert, 1969, which was also pretty important, since it delineated the weaknesses of perceptrons. Generally: learning difficult predicates like "parity", where you have to output 1 when an odd number of pixels is active and 0 otherwise (detecting whether an image is symmetrical left-to-right is a similar case), or learning invariant representations of images like letters and geometric shapes, would tend to require weights that grow exponentially with the number of inputs (pixels, in the case of images). That would rapidly blow up the memory needed to store them to an unreasonable amount, or, alternatively, the perceptrons would take an exponentially increasing amount of time to learn.
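
The smallest case of parity is XOR on two inputs, and you can see the problem directly: no single threshold unit computes it. Here's a quick brute-force check over a weight grid (an illustration I put together, not a proof; the real argument is in the book):

Code:
import itertools
import numpy as np

inputs = [(0, 0), (0, 1), (1, 0), (1, 1)]
parity = [0, 1, 1, 0]                       # XOR = 2-bit parity

grid = np.linspace(-3.0, 3.0, 61)
found = any(
    [int(w1 * x1 + w2 * x2 + b > 0) for x1, x2 in inputs] == parity
    for w1, w2, b in itertools.product(grid, repeat=3)
)
print("threshold unit computing XOR found:", found)   # prints False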

I'm not even sure whether those difficulties have been overcome by switching to multi-layered neural networks, since, as I've heard, they stem from fundamental constraints on any kind of parallel computing system, which all neural networks are, by design.
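
At least on the representation side, one hidden layer is enough for the small case: you can wire XOR by hand (weights set by hand here, not learned, and this says nothing about the scaling question above):

Code:
def step(x):
    return 1 if x > 0 else 0

def xor_net(x1, x2):
    h_or  = step(x1 + x2 - 0.5)       # hidden unit: fires on OR
    h_and = step(x1 + x2 - 1.5)       # hidden unit: fires on AND
    return step(h_or - h_and - 0.5)   # output: OR and not AND

for a in (0, 1):
    for b in (0, 1):
        print(a, b, "->", xor_net(a, b))   # prints 0, 1, 1, 0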

Unfortunately, I don't have it, and it seems it's not available online, so I don't know whether it's still applicable or not. Shame; it sounds a lot like it mathematically justifies the same issues you've raised.