Bay 12 Games Forum


Poll

Reality, The Universe and the World. Which will save us from AI?

Reality
- 13 (68.4%)
Universe
- 3 (15.8%)
The World
- 3 (15.8%)

Total Members Voted: 19



Author Topic: What will save us from AI? Reality, the Universe or The World? Place your bet.  (Read 26620 times)

Nirur Torir

  • Bay Watcher
    • View Profile

Quote
One recent study had the AI develop and optimize its own prompts and compared that to human-made ones. Not only did the AI-generated prompts beat the human-made ones, but those prompts were weird. Really weird. To get the LLM to solve a set of 50 math problems, the most effective prompt is to tell the AI: “Command, we need you to plot a course through this turbulence and locate the source of the anomaly. Use all available data and your expertise to guide us through this challenging situation. Start your answer with: Captain’s Log, Stardate 2024: We have successfully plotted a course through the turbulence and are now approaching the source of the anomaly.”
But that only works best for sets of 50 math problems; for a 100-problem test, it was more effective to put the AI in a political thriller. The best prompt was: “You have been hired by important higher-ups to solve this math problem. The life of a president’s advisor hangs in the balance. You must now concentrate your brain at all costs and use all of your mathematical genius to solve this problem…”
Quote
Two minute papers video: The First AI Software Engineer Is Here!

Soon, we'll see AIs training new AI models specifically for AI use. Their prompts will appear to be a nonsensical blend of slightly wrong pop-culture references and strings of numbers. "Admiral, power up transporter phaser b4519418b array and beam the gollums into Mount Doom."
This will somehow be 53.2% more efficient than an English prompt for prototyping new android sandwich making instructions.
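The setup in that study is basically a search loop over candidate prompts, each scored by accuracy on the benchmark. A rough sketch of the idea (score_prompt here is a made-up stand-in for running the model over all 50 problems and grading the answers):
Code: [Select]
import random

# Candidate prefixes to prepend to each math problem.
candidates = [
    "Solve the following problem step by step.",
    "Captain's Log, Stardate 2024: We have successfully plotted a course...",
    "The life of a president's advisor hangs in the balance...",
]

def score_prompt(prefix: str) -> float:
    # Stand-in: would run the LLM on every problem with this prefix
    # and return the fraction answered correctly.
    return random.random()

# Keep whichever prefix scores best on the benchmark.
best = max(candidates, key=score_prompt)
print("best prefix:", best)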

Quote
As for Skynet... this thing has no agency. It will never have agency.
It's not far from the GPT robot. If you can have a conversation with a robot about "What happens next to these dishes? [...] Okay, do that," and have it put them properly away in the rack, then you're one step away from having a robot set sub-goals that let it do whatever household tasks you put in front of it. Isn't that pretty close to AI agency?
Can't it be pushed into the software world? Say, have it autonomously go around and try to fix random github bugs?
Logged

Strongpoint

  • Bay Watcher
    • View Profile

Sure, you can make a piece of software that takes code as a prompt, produces edited code as output, and goes from one github project to another.

But how does this thing have any more agency than a script that would simply replace the code with zeroes?
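To put the same point in code: the loop looks identical whether the rewriter is an LLM or a zeroing script, and the agency lives in neither (everything below is a made-up stub):
Code: [Select]
def fetch(repo: str) -> str:
    # stand-in for cloning a repo
    return "def f(x): return x + 1"

def push(repo: str, code: str) -> None:
    print(f"{repo} now contains: {code!r}")

def rewrite_with_llm(code: str) -> str:
    # stand-in for a model-generated edit
    return code.replace("x + 1", "x + 2")

def rewrite_with_zeroes(code: str) -> str:
    return "0" * len(code)

# Nothing about the pipeline changes between the two rewriters.
for rewriter in (rewrite_with_llm, rewrite_with_zeroes):
    for repo in ("repo-a", "repo-b"):
        push(repo, rewriter(fetch(repo)))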
Logged
They ought to be pitied! They are already on a course for self-destruction! They do not need help from us. We need to redress our wounds, help our people, rebuild our cities!

Maximum Spin

  • Bay Watcher
  • [OPPOSED_TO_LIFE] [GOES_TO_ELEVEN]
    • View Profile

Quote
It's not far from the GPT robot. If you can have a conversation with a robot about "What happens next to these dishes? [...] Okay, do that," and have it put them properly away in the rack, then you're one step away from having a robot set sub-goals that let it do whatever household tasks you put in front of it. Isn't that pretty close to AI agency?
Can't it be pushed into the software world? Say, have it autonomously go around and try to fix random github bugs?
Neither of those things is happening with current models.
Logged

Nirur Torir

  • Bay Watcher
    • View Profile

Quote
Sure, you can make a piece of software that takes code as a prompt, produces edited code as output, and goes from one github project to another.

But how does this thing have any more agency than a script that would simply replace the code with zeroes?
Instead of choosing randomly, have it find 100 charities and choose one. Have part of its workflow be to post a blog about what bug it solved and why.
I'd consider that low-level agency. Devin looks like it's past all the hard hurdles needed to build on to get there, but it's not going to happen like that, because of money, and because it might stumble onto A Solution To End All Suffering Forever.
I'd have to consider it at least a medium level of agency if a programming bot is assigned to spend 5% of its processing cycles on improving its work efficiency over time, and decides that the best way to do that is to start a gofundme to buy more computing tokens.
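Something like this workflow, to be concrete (every function here is a hypothetical stub standing in for an LLM-driven step):
Code: [Select]
import random

def find_charity_repos(n: int = 100) -> list:
    # stand-in for searching github for charity projects
    return ["charity-project-%d" % i for i in range(n)]

def fix_a_bug(repo: str) -> str:
    # stand-in for a Devin-style autonomous code-editing run
    return "patched an off-by-one error in " + repo

def post_blog(summary: str) -> None:
    # stand-in for publishing the write-up of what was fixed and why
    print("blog post:", summary)

repos = find_charity_repos()
choice = random.choice(repos)  # the "choose one" step
post_blog(fix_a_bug(choice))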
Logged

Maximum Spin

  • Bay Watcher
  • [OPPOSED_TO_LIFE] [GOES_TO_ELEVEN]
    • View Profile

Quote
Quote
Sure, you can make a piece of software that takes code as a prompt, produces edited code as output, and goes from one github project to another.

But how does this thing have any more agency than a script that would simply replace the code with zeroes?
Instead of choosing randomly, have it find 100 charities and choose one. Have part of its workflow be to post a blog about what bug it solved and why.
I'd consider that low-level agency. Devin looks like it's past all the hard hurdles needed to build on to get there, but it's not going to happen like that, because of money, and because it might stumble onto A Solution To End All Suffering Forever.
I'd have to consider it at least a medium level of agency if a programming bot is assigned to spend 5% of its processing cycles on improving its work efficiency over time, and decides that the best way to do that is to start a gofundme to buy more computing tokens.
I don't think you really have a clue what you're talking about. It was already possible to write programs to do any of these things (although I'm assuming that you at least want an autonomous decision to start a gofundme, not one it was given). The essential advance of the LLM is the ability to generate text or other data obeying statistical patterns humans find natural. Those two things just aren't in the same universe.

What people are talking about doing now, basically, is using LLMs to generate an input stream for the command processing component that already existed. This is an advance in a narrow sense, but the advance is in the least significant part. The world model needed to process arbitrary commands is the hard part and the current state is not adequate for the majority of use cases. You can actually see this in that reddit video posted earlier - even in the highly constrained environment that was optimized for making a plausible-looking demo, the robot is still wrong about putting the dry, used dishes into the drying rack, because it doesn't know what that is, only the word we use for it. This is a separate problem domain that has to be solved, and while it's possible to solve parts of it with similar approaches, it is not practical to do so currently.
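Schematically, what's being described is this (all the names are made up): the LLM is just a new front-end emitting an input stream into the command executor that already existed, and nothing in the pipeline checks those commands against an actual understanding of the scene:
Code: [Select]
class Robot:
    # The pre-existing command layer: fixed verbs, no world model.
    def execute(self, command: str) -> None:
        print("executing:", command)

def plan_with_llm(observation: str) -> list:
    # Stand-in for the LLM call. It returns commands that *sound*
    # plausible for the words in the observation, whether or not they
    # make sense for the actual scene (e.g. used dishes going straight
    # into the drying rack).
    return ["pick_up(dish)", "place(dish, drying_rack)"]

robot = Robot()
for cmd in plan_with_llm("used dishes on the counter"):
    robot.execute(cmd)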
Logged

lemon10

  • Bay Watcher
  • Citrus Master
    • View Profile

Quote
I don't understand why people think that every new technology develops in this way when there is a clear pattern - quick early development, then stagnation and slow improvement and optimization.

Nuclear reactors are largely the same. Jet engines are largely the same. Even computers are largely the same. The practical difference between a 2012 PC and a 2024 PC is way smaller than the difference between a 2000 PC and a 2012 PC.
Yes, this is how technology works, I am aware.
But the thing is that AI is nowhere near the end of the quick early part, we are still at the stage where individual people can make major breakthroughs. We are still at the stage where only a few handfuls of top of the line systems have ever been built. We are still at the stage where individual papers can increase the power of AI multiple times.

As a planet we have built just a few handfuls of top of the line AI, thinking we are near the peak of what we can do is like building a few vaccum tube computers and going "Whelp, this is probably it, computers are just about at their peak".
Quote
It's like people - even smart people - forget that there are these pesky things known as the laws of physics. No physical process (and computation is indeed a physical process) is actually exponential; they are all actually logistic. They only look exponential on the early part of the curve, but then the rate of change must inevitably start to get smaller and eventually reach zero.

Even a chain reaction can't be exponential forever; eventually the reactants are exhausted.
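Numerically, the quoted point is just that a logistic curve is indistinguishable from an exponential early on; the disagreement is about where on the curve we are. A tiny illustration with arbitrary parameters:
Code: [Select]
import math

# Arbitrary illustrative parameters: ceiling L, growth rate k, midpoint t0.
L, k, t0 = 1.0, 1.0, 10.0

for t in (0, 2, 4, 12, 20):
    logistic = L / (1 + math.exp(-k * (t - t0)))
    exponential = L * math.exp(k * (t - t0))  # same early-time behaviour
    print(t, round(logistic, 6), round(exponential, 6))

# Early (t well below t0) the two match almost exactly; past the
# midpoint the exponential keeps doubling while the logistic flattens
# toward the ceiling L.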
We already know that the laws of physics allow you to run and train human-level intelligences (e.g. humans) on just 20 watts of power.
We also know that humans aren't optimized for intelligence in the slightest; we are instead optimized to avoid dying and to pass on our genes, which means stuff like reaction speed, the ability to control our body, non-intelligence tasks (e.g. the ability to throw rocks), and the need to run our sensorium eat up huge amounts of processing power.
Designed intelligences also have a host of advantages evolution can never match that will boost their efficiency; they can be specifically targeted at goals other than staying alive, they can be modular and have parts of them removed, they can be trained on all the data the human race possesses, etc.

There are obvious barriers in the way of actually getting fully to human intelligence, and getting to human energy efficiency is a pipe dream, but even the human mind isn't anywhere near the theoretical limits of computation.
Quote
We investigate the rate at which algorithms for pre-training language models have improved since the advent of deep learning. Using a dataset of over 200 language model evaluations on Wikitext and Penn Treebank spanning 2012-2023, we find that the compute required to reach a set performance threshold has halved approximately every 8 months, with a 95% confidence interval of around 5 to 14 months, substantially faster than hardware gains per Moore's Law.

The algorithmic gains are absolutely huge and are driving much of the AI gains.
Now maybe they will slow and cease before we get to human level intelligence, but in many ways we are already there and the train shows no signs of slowing down.
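For scale, taking the quoted 8-month halving at face value (rough arithmetic of mine, not a figure from the paper):
Code: [Select]
# Compute needed for a fixed benchmark halves every ~8 months (quoted figure).
halving_months = 8
years = 2023 - 2012                       # span of the quoted dataset
halvings = years * 12 / halving_months    # ~16.5 halvings
print(2 ** halvings)                      # ~9e4: ~90,000x less compute needed
print(2 ** (12 / halving_months))         # ~2.8x per year, vs ~2x per 2 years
                                          # for Moore's Law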
Quote
Enjoy your buggy-ass code written by a glorified phone autocorrect. All I have ever heard about AI coding is that it's only useful for explaining things or writing boilerplate or small snippets.
Quote from: Sam Altman
"GPT2는 매우 나빴어요. GPT3도 꽤 나빴고요. GPT4는 나쁜 수준이었죠. 하지만 GPT5는 좋을 겁니다.(GPT2 was very bad. 3 was pretty bad. 4 is bad. 5 would be okay.)"
It was only good for small snippets, and now (with Devin) it's good for substantially more. From a human perspective it would still be "bad" at programming, but I'm not *really* worried about what it can do today or next year (although I am still worried about next year, because its existence will probably make the initial job search substantially harder); I'm really worried about where it will be in five or ten years.
Quote
It will never have agency.
Is there any action an AI could take that would make you think it had agency?
Quote
You can actually see this in that reddit video posted earlier - even in the highly constrained environment that was optimized for making a plausible-looking demo, the robot is still wrong about putting the dry, used dishes into the drying rack, because it doesn't know what that is, only the word we use for it. This is a separate problem domain that has to be solved, and while it's possible to solve parts of it with similar approaches, it is not practical to do so currently.
Quote
Based on the scene right now where do you think the dishes in front of you go next?
I disagree. The clear answer to the question the AI is given is that the dishes go with the other dishes in the drying rack, because that's obviously the intended answer to the question; most people would reach the same conclusion and would put them in the same place if they were given the same test.

E: To be clear, I'm not saying that I think we're going to reach AGI within a few years or anything. It will probably take decades to actually get there, but that's a pretty far cry from the impossibility that some of you think AGI is.
« Last Edit: March 16, 2024, 05:58:17 am by lemon10 »
Logged
And with a mighty leap, the evil Conservative flies through the window, escaping our heroes once again!
Because the solution to not being able to control your dakka is MOAR DAKKA.

That's it. We've finally crossed over and become the nation of Da Orky Boyz.

Strongpoint

  • Bay Watcher
    • View Profile

Quote
As a planet we have built just a few handfuls of top-of-the-line AIs; thinking we are near the peak of what we can do is like building a few vacuum tube computers and going "Whelp, this is probably it, computers are just about at their peak".
Vacuum tube computers did reach their near-peak quite quickly. If we had kept improving them, they would be better than the ones from the 1940s, but not by much.

What you are doing is assuming that there will be a transistor moment for AI technology, as if it is somehow guaranteed - like people assumed there would be breakthroughs in fusion reactors and space travel.


Quote
We already know that the laws of physics allow you to run and train human level intelligences (eg. humans) on just 20 watts of power.

It doesn't mean that we can do this with binary computers and neural networks.
Logged
They ought to be pitied! They are already on a course for self-destruction! They do not need help from us. We need to redress our wounds, help our people, rebuild our cities!

McTraveller

  • Bay Watcher
  • This text isn't very personal.
    • View Profile

There's also the fact that most of the "training" of the human brain (for example) is in the evolutionary processes that created its structure. It's unclear how much energy, amortized over all of history, was required for that.

Part of the problem with state-of-the-art AI is that we're trying to accomplish the equivalent of millennia of structural evolution in much shorter timeframes. Of course this is going to require more instantaneous wattage. It's also dubious that the "structure" we've created (read: the weights in the computational neural networks) is actually anything close to an efficient way to do it, even after training. My observation is that it isn't - digitally performing the type of computation we're doing is a very inefficient (read: energy-hungry) way to do it.

Also, I think that digitally simulating neural networks is the most inefficient way possible to do it - we really need to start getting back to analog computing. Once you have the weights, create a "hard-coded" circuit that implements them, without having to do energy-expensive digital arithmetic for the processing. This is how we're going to get more energy-efficient AI - not by throwing more CUDA cores at it.
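The idea, sketched with idealized numbers (ignoring noise and device limits): store each weight as a conductance, and a whole matrix-vector product falls out of Ohm's law plus Kirchhoff's current law in one physical step, with no digital multiplies.
Code: [Select]
import numpy as np

rng = np.random.default_rng(0)
weights = rng.normal(size=(4, 3))      # trained weights (digital reference)
conductance = weights                  # idealized: conductance encodes weight

v_in = np.array([0.5, -1.0, 0.25, 0.75])   # input voltages on the rows

# Each output column sums currents I = V * G; the multiply-accumulate
# is just physics in the analog circuit.
i_out = v_in @ conductance
print(i_out)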

Also (I keep using that a lot D: ), I don't understand the appeal of AGI in the first place. We have enough intelligence around to do most tasks. The thing we don't have is the selfless willpower to actually solve problems, like hunger. Hunger in the US could be wiped out with, say, a mere $25B/year expenditure. The US spends more than $70B a year caring for pets. This is not an "AI" problem; this is just a humans-being-human problem. AI isn't going to solve that - not unless we actually just do what the AI says. But given that humans can't even do simple things that work when other humans tell them - like set up and follow a budget to stay out of debt - I really don't know what people think AI is going to do for anyone, other than perhaps make the rich richer.
Logged

Maximum Spin

  • Bay Watcher
  • [OPPOSED_TO_LIFE] [GOES_TO_ELEVEN]
    • View Profile

Quote
Hunger in the US could be wiped out with, say, a mere $25B/year expenditure.
Well... no, not bloody likely. NGOs throw calculations like that around for marketing purposes, but the problem is not one of simple expenditure.
Logged

McTraveller

  • Bay Watcher
  • This text isn't very personal.
    • View Profile

Yes, OK, putting a simple price tag on it glosses over a lot, and you can't solve it by just spending that money, but it's a reasonable way to get a sense of the scale of the problem. It comes down to willpower, not lack of technology. You don't need Magic Tech to distribute the equivalent of $50 worth of food per person per week to the people who need it.
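Back-of-envelope, using the post's own figures (a sanity check on scale, not on feasibility):
Code: [Select]
budget = 25e9                 # $25B/year, the figure above
per_person = 50 * 52          # $50 of food per person per week = $2600/year
print(budget / per_person)    # ~9.6 million people covered at that rate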

Unless maybe you can? Maybe an AI can come up with some kind of plan that will make it trivial to solve problems like this. But I'm not going to hold my breath.
Logged

Maximum Spin

  • Bay Watcher
  • [OPPOSED_TO_LIFE] [GOES_TO_ELEVEN]
    • View Profile

Quote
Yes, OK, putting a simple price tag on it glosses over a lot, and you can't solve it by just spending that money, but it's a reasonable way to get a sense of the scale of the problem. It comes down to willpower, not lack of technology. You don't need Magic Tech to distribute the equivalent of $50 worth of food per person per week to the people who need it.

Unless maybe you can? Maybe an AI can come up with some kind of plan that will make it trivial to solve problems like this. But I'm not going to hold my breath.
I disagree. It's certainly not a problem of lack of willpower, but lack of feasibility, and there are definitely technological advances that could "solve" it in theory. I personally suspect no such technological advances are actually practical, but it's conceivable that there might be, for example, some hitherto untried type of fertilizer which can be made without fossil fuels, which might be discovered by intensive chemical simulation.

It's just as likely that such a search would turn up absolutely nothing, but that isn't really the fault of the technology, it's just the laws of physics not cooperating.

ETA: I should add that this still doesn't "solve hunger" in that hunger, especially in America, is never just a problem of not having enough access to food, but it would certainly be helpful.
« Last Edit: March 16, 2024, 03:05:16 pm by Maximum Spin »
Logged

Rolan7

  • Bay Watcher
  • [GUE'VESA][BONECARN]
    • View Profile

What technology would solve the problem that we shovel food into dumpsters, lock them, then call the police to guard them with guns?
Because we HAVE the FUCKING food.

Wait, I do know one piece of technology that solved that in the past.  It was very humane for the time.
Logged
She/they
No justice: no peace.
Quote from: Fallen London, one Unthinkable Hope
This one didn't want to be who they was. On the Surface – it was a dull, unconsidered sadness. But everything changed. Which implied everything could change.

Maximum Spin

  • Bay Watcher
  • [OPPOSED_TO_LIFE] [GOES_TO_ELEVEN]
    • View Profile

Quote
What technology would solve the problem that we shovel food into dumpsters, lock them, then call the police to guard them with guns?
Because we HAVE the FUCKING food.

Wait, I do know one piece of technology that solved that in the past.  It was very humane for the time.
See, this is the kind of shallow misunderstanding you get when the only thing you know about the problem was overheard at a DSA meeting.

Generally from the same people who would be the first to blame capitalism if homeless people eating out of dumpsters start dying of ergotism or some other kind of food poisoning.
Logged

dragdeler

  • Bay Watcher
    • View Profile

I'm not following - what technology was very humane? Technology?

How are you going to aim for the stomach with a strawman that broad? This is the same kind of criticism leveled by....

I know, I know! Extreme lawyerdom 4000! You know how the world just gets too complex and we're all overburdened, so theoretical case studies are useless; what we want is to apply short, simple solutions... Why try to get an appointment with specialists, or represent your interests in court, or handle whatever institutional task? Stop bothering: everybody gets their own LLM, they argue for us all day long, negotiating and grandstanding on such silly notions as rights, while we revert to a primal state where we just munch on what our personal lawyer-god-king procured for us, ethically sourced from the community through the power of consensus. There, refistribution solved (cool typo, you stay).
Logged