Bay 12 Games Forum


Author Topic: Tech News. Automation, Engineering, Environment Etc  (Read 265581 times)

Sergarr

  • Bay Watcher
  • (9) airheaded baka (9)
Re: Tech News. Automation, Engineering, Environment Etc
« Reply #765 on: June 12, 2017, 06:05:41 pm »

Google’s New AI Is Better at Creating AI Than the Company’s Engineers

Quote
GOOGLE’S AUTOML
One of the more noteworthy remarks to come out of Google I/O ’17 conference this week was CEO Sundar Pichai recalling how his team had joked that they have achieved “AI inception” with AutoML. Instead of crafting layers of dreams like in the Christopher Nolan flick, however, the AutoML system layers artificial intelligence (AI), with AI systems creating better AI systems.

The AutoML project focuses on deep learning, a technique that involves passing data through layers of neural networks. Creating these layers is complicated, so Google’s idea was to create AI that could do it for them.

“In our approach (which we call ‘AutoML’), a controller neural net can propose a ‘child’ model architecture, which can then be trained and evaluated for quality on a particular task,” the company explains on the Google Research Blog. “That feedback is then used to inform the controller how to improve its proposals for the next round. We repeat this process thousands of times — generating new architectures, testing them, and giving that feedback to the controller to learn from.”


So far, they have used the AutoML tech to design networks for image and speech recognition tasks. In the former, the system matched Google’s experts. In the latter, it exceeded them, designing better architectures than the humans were able to create.
Stupid neural networks, stop being so damn effective!
._.

Reelya

  • Bay Watcher
Re: Tech News. Automation, Engineering, Environment Etc
« Reply #766 on: June 12, 2017, 06:20:25 pm »

Actually that's a genetic algorithm arrangement, on top of the neural networks. It's a good example of how hybrid structures can achieve much more than trying to come up with one "magic" homogeneous structure.

The giveaway is when it says there are thousands of rounds of training, i.e. iterations. The GA alters the network topology; each topology is then trained, and metadata is extracted (how well and how fast the topology learned the desired behaviour). The tweaks can then either be random, or you could apply gradient-descent learning to the network topology itself (e.g. treat the possible topologies as a search space and estimate gradients for your changes in topology).
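The loop described here is easy to sketch. Below is a toy illustration, not Google's actual system: the fitness function is a made-up stand-in for "train the network and measure how well it learned", and a topology is just a list of layer widths.

```python
import random

def fitness(topology):
    # Stand-in for the expensive step: train a network with this
    # topology and benchmark how well/fast it learned. This toy
    # score just prefers three layers totalling 64 units.
    return -abs(sum(topology) - 64) - 5 * abs(len(topology) - 3)

def mutate(topology):
    # Random tweak to the topology: resize, add, or drop a layer.
    t = list(topology)
    op = random.choice(["resize", "add", "drop"])
    if op == "resize":
        i = random.randrange(len(t))
        t[i] = max(4, t[i] + random.choice([-8, -4, 4, 8]))
    elif op == "add":
        t.insert(random.randrange(len(t) + 1), random.choice([8, 16, 32]))
    elif len(t) > 1:  # drop a layer, but never the last one
        del t[random.randrange(len(t))]
    return t

def evolve(rounds=200, pop_size=10):
    pop = [[random.choice([8, 16, 32]) for _ in range(random.randint(1, 4))]
           for _ in range(pop_size)]
    for _ in range(rounds):
        pop.sort(key=fitness, reverse=True)
        survivors = pop[:pop_size // 2]                  # selection
        pop = survivors + [mutate(random.choice(survivors))
                           for _ in range(pop_size - len(survivors))]
    return max(pop, key=fitness)

random.seed(0)
best = evolve()
print(best, fitness(best))
```

Replacing the toy `fitness` with "train this topology on real data and report validation accuracy" gives exactly the hybrid arrangement described above.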

It's a nice approach but I don't think it's overly novel, because an amateur like me can think it up.
« Last Edit: June 12, 2017, 06:29:20 pm by Reelya »

Sergarr

  • Bay Watcher
  • (9) airheaded baka (9)
Re: Tech News. Automation, Engineering, Environment Etc
« Reply #767 on: June 12, 2017, 06:25:22 pm »

Quote from: Reelya
Actually that's a genetic algorithm arrangement, on top of the neural networks.
Quote from: Google Research Blog
In our approach (which we call "AutoML"), a controller neural net can propose a “child” model architecture, which can then be trained and evaluated for quality on a particular task. That feedback is then used to inform the controller how to improve its proposals for the next round. We repeat this process thousands of times — generating new architectures, testing them, and giving that feedback to the controller to learn from. Eventually the controller learns to assign high probability to areas of architecture space that achieve better accuracy on a held-out validation dataset, and low probability to areas of architecture space that score poorly. Here’s what the process looks like:

Sure looks like a neural network to me. Where did you get the "genetic algorithms"?
._.

Reelya

  • Bay Watcher
Re: Tech News. Automation, Engineering, Environment Etc
« Reply #768 on: June 12, 2017, 06:36:10 pm »

It's a genetic programming structure; it doesn't really matter if the evaluation function is a neural network or not.

There are some criteria, different examples are created, then they are evaluated against the criteria, and new models are proposed. The only thing novel is that they're using an NN as the selection basis, but that in itself is just an example of generalization, since an NN can mimic many functions.

The giveaway in that diagram is where it says "Sample architecture with probability p". That part of the process is rolling dice, which is what you do with genetic algorithms, and is external to the neural network.

What I'm guessing is that they trained a separate "training NN" to guesstimate how effective each of the "target" NNs would be at learning the task based on real performance data of random networks. You can then rinse and repeat, but you use the "training NNs" predictions to guide you on which randomly-modified networks are the more promising ones.
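A minimal sketch of that guess-and-check arrangement. Everything here is invented for illustration: a single width parameter stands in for a full topology, and a nearest-neighbour lookup stands in for the "training NN" surrogate.

```python
import random

def true_score(width):
    # The expensive step: actually train the "target" NN of this
    # width and benchmark it. Toy version: performance peaks at 48.
    return -abs(width - 48)

def surrogate(width, history):
    # Toy stand-in for the "training NN": guesstimate a candidate's
    # score from the nearest width we have already benchmarked.
    nearest = min(history, key=lambda pair: abs(pair[0] - width))
    return nearest[1]

random.seed(1)
# Real performance data of random networks, as described above.
history = [(w, true_score(w)) for w in random.sample(range(8, 129), 8)]

for _ in range(20):  # rinse and repeat
    best_w = max(history, key=lambda pair: pair[1])[0]
    # Randomly-modified candidates around the best known width...
    candidates = [max(8, best_w + random.choice([-16, -8, -4, 4, 8, 16]))
                  for _ in range(10)]
    # ...but only the one the surrogate rates highest gets the
    # expensive real training run.
    pick = max(candidates, key=lambda w: surrogate(w, history))
    history.append((pick, true_score(pick)))

print(max(history, key=lambda pair: pair[1]))
```

The point of the surrogate is that `true_score` runs once per round instead of once per candidate, which is the whole economy of the scheme being described.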
« Last Edit: June 12, 2017, 06:46:52 pm by Reelya »

Reelya

  • Bay Watcher
Re: Tech News. Automation, Engineering, Environment Etc
« Reply #769 on: June 12, 2017, 06:49:07 pm »

Spoiler (click to show/hide)

Sergarr

  • Bay Watcher
  • (9) airheaded baka (9)
Re: Tech News. Automation, Engineering, Environment Etc
« Reply #770 on: June 12, 2017, 07:13:34 pm »

Quote from: Reelya
It's a genetic programming structure; it doesn't really matter if the evaluation function is a neural network or not.

There are some criteria, different examples are created, then they are evaluated against the criteria, and new models are proposed. The only thing novel is that they're using an NN as the selection basis, but that in itself is just an example of generalization, since an NN can mimic many functions.

The giveaway in that diagram is where it says "Sample architecture with probability p". That part of the process is rolling dice, which is what you do with genetic algorithms, and is external to the neural network.

What I'm guessing is that they trained a separate "training NN" to guesstimate how effective each of the "target" NNs would be at learning the task based on real performance data of random networks. You can then rinse and repeat, but you use the "training NNs" predictions to guide you on which randomly-modified networks are the more promising ones.
The genetic programming structure, as far as I understand, is this:
Quote from: Evolutionary Computation for Modelling and Optimization, 2005, p.1
Generate a population of structures
Repeat
Test the structures for quality
Select structures to reproduce
Produce new variations of selected structures
Replace old structures with new ones
Until Satisfied
The similarity between this and the method Google used for AutoML is vague at best. It's not much of a genetic algorithm when its "generated population of structures" is one, singular structure.
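For what it's worth, the textbook loop quoted above translates almost line-for-line into code. A toy sketch, with a made-up integer-matching task in place of network structures:

```python
import random

def genetic_algorithm(init, fitness, vary, pop_size=20, rounds=100):
    population = [init() for _ in range(pop_size)]        # generate
    for _ in range(rounds):                               # repeat
        population.sort(key=fitness, reverse=True)        # test
        parents = population[:pop_size // 2]              # select
        children = [vary(random.choice(parents))          # vary
                    for _ in range(pop_size - len(parents))]
        population = parents + children                   # replace
    return max(population, key=fitness)                   # until satisfied

# Toy usage: evolve an integer toward 42.
random.seed(0)
best = genetic_algorithm(
    init=lambda: random.randint(0, 100),
    fitness=lambda x: -abs(x - 42),
    vary=lambda x: x + random.choice([-3, -1, 1, 3]),
)
print(best)
```

Whether AutoML counts as this hinges on what plays the role of `population` and `vary`, which is exactly the disagreement in this thread.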

Quote from: Reelya
It's a nice approach but I don't think it's overly novel, because an amateur like me can think it up.
It's annoying, right? Even an amateur like you could think of a method that's capable of beating highly qualified professionals from Google at something as difficult as configuring neural networks to maximize accuracy, and it's all because neural networks are made of pure hax. I don't even know why I'm wasting time trying to learn other methods when the cheating, winning approach is right there, kicking asses and taking names.
._.

Sergarr

  • Bay Watcher
  • (9) airheaded baka (9)
Re: Tech News. Automation, Engineering, Environment Etc
« Reply #771 on: June 12, 2017, 07:31:50 pm »

Quote
(with what mechanism?)
Well, if we take the picture at face value, it somehow computes a "gradient of probability" (whatever that means) and then scales it by accuracy to obtain a datapoint, with which it then updates the neural network in charge. It would be good if they had a paper up there to explain how they compute said "gradient of probability".
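There is in fact a paper behind the blog post: Zoph & Le, "Neural Architecture Search with Reinforcement Learning" (2016), which trains the controller with the REINFORCE policy-gradient rule. There, the "gradient of probability" is the gradient of log p(sampled architecture), and it is indeed scaled by the child network's accuracy. A toy sketch of that update, with a three-way softmax standing in for the controller RNN and a made-up accuracy table standing in for actually training child networks:

```python
import math
import random

accuracy = [0.2, 0.9, 0.5]   # pretend benchmark of each "architecture"
logits = [0.0, 0.0, 0.0]     # the whole "controller" in this toy
lr = 0.5
baseline = 0.0
random.seed(0)

def softmax(z):
    m = max(z)
    e = [math.exp(v - m) for v in z]
    s = sum(e)
    return [v / s for v in e]

for t in range(1, 1001):
    p = softmax(logits)
    # "Sample architecture with probability p"
    a = random.choices(range(3), weights=p)[0]
    reward = accuracy[a]                   # train child, measure accuracy
    advantage = reward - baseline          # centre the reward
    baseline += (reward - baseline) / t    # running mean of rewards
    # Gradient of log p(a) w.r.t. the logits is onehot(a) - p;
    # scaling it by the (centred) accuracy is the "gradient of
    # probability, scaled by accuracy" step from the diagram.
    for i in range(3):
        logits[i] += lr * advantage * ((1.0 if i == a else 0.0) - p[i])

p = softmax(logits)
print([round(v, 2) for v in p])
```

After training, the controller assigns high probability to the architecture that scored best, which matches the blog's description of where probability mass ends up.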
._.

Reelya

  • Bay Watcher
Re: Tech News. Automation, Engineering, Environment Etc
« Reply #772 on: June 12, 2017, 07:33:19 pm »

Not according to this quote:

Quote
Eventually the controller learns to assign high probability to areas of architecture space that achieve better accuracy on a held-out validation dataset, and low probability to areas of architecture space that score poorly.

What it says here is that the controller is outputting a fitness function for any architecture proposed to it. But you still need to push possible architectures through that network to get a score, and pick one with a high score for the next round of training. The choice of which architectures to try next then depends on some external heuristic to avoid brute-force searching against the controller NN.
« Last Edit: June 12, 2017, 07:38:18 pm by Reelya »

Reelya

  • Bay Watcher
Re: Tech News. Automation, Engineering, Environment Etc
« Reply #773 on: June 12, 2017, 07:50:40 pm »

I think it probably is, from the wording. They talk about identifying regions of the search space with a high score vs. ones with a low score. To know that, you need to sample points in those regions and record the information. You then pick a point in the high-probability set and do a more refined search in that region to identify promising-looking points, which then get fed into the main NN and evaluated again.

This does in fact qualify as a genetic algorithm. It has selection rules, which are the most important part of a GA; it has rounds; it has a population (it's keeping track of regions of the search space with high probability, i.e. the best points to search around). It might lack crossover rules, but those are secondary to how a GA works. It does have mutation, in that given a good point, you have a mechanism for deriving other nearby points to try.

Sergarr

  • Bay Watcher
  • (9) airheaded baka (9)
Re: Tech News. Automation, Engineering, Environment Etc
« Reply #774 on: June 12, 2017, 08:04:54 pm »

Regardless of whatever it is, it works, works better than things from before, and continues to make neural networks even more overpowered than they were before. Neural networks are bullshit, they just keep getting better and better, with no alternative approach being even close to competing with them. Whyyyyyyy?
._.

inteuniso

  • Bay Watcher
  • Functionalized carbon is the source.
Re: Tech News. Automation, Engineering, Environment Etc
« Reply #775 on: June 13, 2017, 02:40:21 am »

Why do our brains think up pleasing drink & food combinations in order to relax from the stresses of everyday life?

Our brains are holographic networks that use liquids & organic chemicals to perform fuzzy logic operations that allow for decades-long experiential memory storage that can be recalled as easily as smelling something, as well as being able to calculate how glass operates in infinite dimensions using thirty pages of algebra.

Why wouldn't we want to use said neural network to create a better neural network that can augment existing capabilities?
Lol scratch that I'm building a marijuana factory.

Reelya

  • Bay Watcher
Re: Tech News. Automation, Engineering, Environment Etc
« Reply #776 on: June 13, 2017, 10:26:44 am »

Quote
So, my understanding is that it generates a neural network architecture and then proceeds to train a neural network on this architecture, then evaluates how to improve this model (with what mechanism?) and attempts the changes over the model, running it through the process over again.

Not a genetic algorithm per se, but it depends on the mechanism employed to evaluate and improve the child neural network.

What I think is happening is networks are generated, then they are trained by the normal means (standard backpropagation). The "controller" NN has nothing to do with this part.

Then some metrics/benchmarks are generated (how well the tested network performed in the training process; this is just standard data collection, e.g. looking at how quickly the network converged and how accurate its results were).

The controller network then learns the mapping from network architecture to benchmarks (standard backpropagation again).

After this, the controller needs to generate new networks to try out. Two possible ways to do this: one would be to tweak "good" networks slightly, e.g. explore the search space in a guided way based on search space regions which look good. They hint at this in the article. The controller NN is basically a heuristic that can guesstimate how good novel topologies are going to be at the task.

But I have a hunch that what they might be doing is feeding the maximum possible score into the controller NN's output end, then backpropagating it all the way to the inputs. That way you would in fact get a single "ideal" network out of the controller, which you can test, and that would also challenge the controller NN, since any error between the "ideal" network's predicted and actual benchmark would be used to retrain the controller NN. I have a feeling this way of doing things might be prone to getting stuck in local maxima / ruts, however.
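That hunch (clamp the output at the maximum score, backpropagate to the inputs) amounts to gradient ascent on the controller's input. A toy sketch, with a one-dimensional quadratic standing in for the trained controller and a numerical gradient standing in for backpropagation:

```python
def predicted_score(x):
    # Stand-in for the trained controller's score estimate for
    # "architecture" x; its true optimum is at x = 3.
    return -(x - 3.0) ** 2

def grad(x, eps=1e-5):
    # Numerical gradient; a real NN would backpropagate instead.
    return (predicted_score(x + eps) - predicted_score(x - eps)) / (2 * eps)

x = 10.0                   # initial "architecture" guess
for _ in range(100):
    x += 0.1 * grad(x)     # ascend the controller's predicted score

print(round(x, 3))         # → 3.0
```

With a non-convex score model, the same loop stalls at whichever peak is nearest the starting guess, which is exactly the local-maximum worry raised here.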
« Last Edit: June 13, 2017, 10:33:06 am by Reelya »

Tack

  • Bay Watcher
  • Giving nothing to a community who gave me so much.
Re: Tech News. Automation, Engineering, Environment Etc
« Reply #777 on: June 13, 2017, 11:08:15 am »

Not sure if I'm doing this thread right (it's been sitting on my "recents" for a very long time with no actual action from me), but Kurzgesagt posted a video recently about the automation revolution, very much in a "humans need not apply" fashion.
Thought it could prompt a new discussion about the economic ramifications of automation even in the absence of AI.
Sentience, Endurance, and Thumbs: The Trifector of a Superpredator.
Yeah, he's a banned spammer. Normally we'd delete this thread too, but people were having too much fun with it by the time we got here.

PTTG??

  • Bay Watcher
  • Kringrus! Babak crulurg tingra!
    • http://www.nowherepublishing.com
Re: Tech News. Automation, Engineering, Environment Etc
« Reply #778 on: June 13, 2017, 12:18:04 pm »

Yeah, there's two ways this ends:

1: FULLY AUTOMATED LUXURY COMMUNISM
2: The 0.1% exterminating the useless 99.9% of the population that are not already capitalists.

Since I am not in the top 0.1% of global wealth, I'm working on #1.

I may have missed something, but I'm pretty sure that anything besides these two is not a stable situation.

Oh, but the good news is that in either case the automation allows for virtualization and recycling at an extreme level, reducing humanity's global footprint. So if the plankton survives the next 50 years, at least some of the biosphere will survive.
A thousand million pool balls made from precious metals, covered in beef stock.

Tack

  • Bay Watcher
  • Giving nothing to a community who gave me so much.
Re: Tech News. Automation, Engineering, Environment Etc
« Reply #779 on: June 13, 2017, 01:14:22 pm »

I think capitalism can still live in this scenario... just that jobs will become very... unimportant.
As long as we can assign value to Something, even the seconds left in our life, we will be able to spend those seconds sitting at a desk, staring at a wall, and earn money, and spend it.
Perhaps social media advertising will get to the point where people earn sustainable amounts of income simply by tagging or recommending a few products, periodically.

The laws of supply and demand are (imo) concrete, but the automation revolution will make them twist in some intriguing ways, methinks.
Possibly blood sport? Maybe utopian safe-space workplaces where all staff and customers operate on the same principles as you, and you get paid for doing what you love (even though all of your mistakes are fixed by a machine in transit).
Taxi drivers who are paid to be a friendly face, license not required.
Sentience, Endurance, and Thumbs: The Trifector of a Superpredator.
Yeah, he's a banned spammer. Normally we'd delete this thread too, but people were having too much fun with it by the time we got here.