I don't understand why people think that every new technology develops in this way, when there is a clear pattern: quick early development, then stagnation and slow improvement and optimization.
Nuclear reactors are largely the same. Jet engines are largely the same. Even computers are largely the same: the practical difference between a 2012 PC and a 2024 PC is way smaller than the difference between a 2012 PC and a 2000 PC.
Yes, this is how technology works, I am aware.
But the thing is, AI is nowhere near the end of the quick early part. We are still at the stage where individual people can make major breakthroughs. We are still at the stage where only a few handfuls of top-of-the-line systems have ever been built. We are still at the stage where individual papers can multiply the power of AI several times over.
As a planet we have built just a few handfuls of top-of-the-line AIs. Thinking we are near the peak of what we can do is like building a few vacuum-tube computers and going "Welp, this is probably it, computers are just about at their peak."
It's like people - even smart people - forget that there are these pesky things known as the laws of physics. No physical process (and computation is indeed a physical process) is actually exponential; they are all logistic. They only look exponential on the early part of the curve, but then the rate of change must inevitably start to shrink and eventually reach zero.
Even a chain reaction can't be exponential forever; eventually the reactants are exhausted.
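As a rough illustration (with made-up numbers, not a model of anything real), here is how an exponential and a logistic curve with the same early growth rate compare: they're nearly indistinguishable at first, and only later does the logistic one bend toward its ceiling.

```python
import math

# Toy comparison of exponential vs. logistic growth (arbitrary parameters).
# Logistic: x(t) = K / (1 + ((K - x0) / x0) * exp(-r * t)); while x << K this
# is nearly identical to plain exponential growth x0 * exp(r * t).
K, x0, r = 1000.0, 1.0, 0.5  # carrying capacity, starting value, growth rate

def exponential(t):
    return x0 * math.exp(r * t)

def logistic(t):
    return K / (1 + ((K - x0) / x0) * math.exp(-r * t))

for t in (0, 5, 10, 15, 20, 25):
    print(f"t={t:2d}  exponential={exponential(t):10.1f}  logistic={logistic(t):7.1f}")
# Early on (t <= 5 or so) the two curves are nearly identical; by t = 20 the
# logistic curve has flattened out near K = 1000 while the exponential keeps exploding.
```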
We already know that the laws of physics allow you to run and train human-level intelligences (e.g. humans) on just 20 watts of power.
We also know that humans aren't optimized for intelligence in the slightest; we are instead optimized to avoid dying and to pass on our genes. That means things like reaction speed, the ability to control our bodies, non-intelligence skills (e.g. the ability to throw rocks), and the need to run our sensorium eat up huge amounts of processing power.
Designed intelligences also have a host of advantages evolution can never match that will boost their efficiency: they can be targeted specifically at goals other than staying alive, they can be modular and have parts of them removed, they can be trained on all the data the human race possesses, etc.
There are obvious barriers in the way of actually getting all the way to human intelligence, and matching human energy efficiency is a pipe dream, but even the human mind isn't anywhere near the theoretical limits of computation.
From a recent paper on algorithmic progress in language-model pre-training: "We investigate the rate at which algorithms for pre-training language models have improved since the advent of deep learning. Using a dataset of over 200 language model evaluations on Wikitext and Penn Treebank spanning 2012-2023, we find that the compute required to reach a set performance threshold has halved approximately every 8 months, with a 95% confidence interval of around 5 to 14 months, substantially faster than hardware gains per Moore's Law."
The algorithmic gains are absolutely huge and are driving much of the progress in AI.
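To put that halving time in perspective, here is a quick back-of-the-envelope calculation; the 8-month figure comes from the quoted result, while the 24-month Moore's-law doubling period is just the usual rule-of-thumb assumption, not something from that paper.

```python
# Rough compounding of the quoted 8-month algorithmic halving time vs. an
# assumed ~24-month Moore's-law doubling period.
ALGO_HALVING_MONTHS = 8     # from the quoted result
MOORE_DOUBLING_MONTHS = 24  # common rule-of-thumb assumption

for years in (1, 2, 5, 10):
    months = 12 * years
    algo_gain = 2 ** (months / ALGO_HALVING_MONTHS)    # same result from less compute
    hw_gain = 2 ** (months / MOORE_DOUBLING_MONTHS)    # more compute for the same cost
    print(f"{years:2d} yr: algorithms ~{algo_gain:,.1f}x, "
          f"hardware ~{hw_gain:,.1f}x, combined ~{algo_gain * hw_gain:,.1f}x")
```

Under those assumptions the algorithmic side alone compounds to roughly a 30,000x reduction in required compute over a decade, dwarfing the hardware contribution.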
Now maybe they will slow and cease before we get to human level intelligence, but in many ways we are already there and the train shows no signs of slowing down.
Enjoy your buggy-ass code written by a glorified phone autocorrect. All I have ever heard about AI coding is that it's only useful for explaining things or writing boilerplate or small snippets.
"GPT2는 매우 나빴어요. GPT3도 꽤 나빴고요. GPT4는 나쁜 수준이었죠. 하지만 GPT5는 좋을 겁니다.(GPT2 was very bad. 3 was pretty bad. 4 is bad. 5 would be okay.)"
It was only good for small snippets, and now (with Devin) it's good for substantially more. From a human perspective it would still be "bad" at programming, but I'm not *really* worried about what it can do today or next year (although I am still somewhat worried about next year, because its existence will probably make the initial job search substantially harder); I'm really worried about where it will be in five or ten years.
It will never have agency.
Is there any action an AI could take that would make you think it had agency?
You can actually see this in that Reddit video posted earlier - even in the highly constrained environment that was optimized for making a plausible-looking demo, the robot still gets it wrong, putting the dry, used dishes into the drying rack, because it doesn't know what a drying rack actually is, only the word we use for it. This is a separate problem domain that has to be solved, and while it's possible to solve parts of it with similar approaches, it is not currently practical to do so.
Based on the scene right now, where do you think the dishes in front of you go next?
I disagree. The clear answer to the question the AI is given is that the dishes go with the other dishes in the drying rack, because that is obviously the intended answer to the question; most people would reach the same conclusion and would put them in the same place if they were given the same test.
E: To be clear, I'm not saying that I think we're going to reach AGI within a few years or anything. It will probably take decades to actually get there, but that's a pretty far cry from the impossibility that some of you think AGI is.