Bay 12 Games Forum


Poll

Reality, The Universe and the World. Which will save us from AI?

Reality
- 14 (66.7%)
Universe
- 4 (19%)
The World
- 3 (14.3%)

Total Members Voted: 21



Author Topic: What will save us from AI? Reality, the Universe or The World $ Place your bet.  (Read 54034 times)

lemon10

  • Bay Watcher
  • Citrus Master
    • View Profile

Probably mainly coal (China) or natural gas (US) yeah.
There are plans for nuclear plants or even fusion, but I am super doubtful that new nuclear plants will come online in the short-medium term and fusion may take a while longer.
Same deal for solar really: it can obviously generate an obscene amount of energy given time to make all the panels, but state-sized solar fields or whatever would take a long time to build.
In my last post on the subject I was pretty dismissive of solar being able to fulfill the energy needs for AI. How could it with the projected energy requirements?
But... turns out solar has been going completely gangbusters.
Just absurd amounts of growth tied with the huge decreases in costs.

https://www.economist.com/interactive/essay/2024/06/20/solar-power-is-going-to-be-huge
Quote
This extraordinary growth stems from the interplay of three simple factors. When industries make more of something, they make it more cheaply. When things get cheaper, demand for them grows. When demand grows, more is made. In the case of solar power, demand was created and sustained by subsidies early this century for long enough that falling prices became noteworthy and, soon afterwards, predictable. The positive feedback that drives exponential growth took off on a global scale.
And it shows no signs of stopping, or even slowing down.
We aren't anywhere near the peak of this. It's just going to get cheaper, and people will buy more.
Getting the energy AI would need in 2027 (energy equivalent to a US state or two's worth of production for a single AI training run) with our current grid was absurd, obviously impossible in our current world without crazy Hail Marys like building nuclear plants for individual data centers or the US government blasting through all the red tape with wartime powers to get it done.

But with nigh-exponential amounts of solar production? Yeah, totally doable. Looks like chips are going to end up being the limiter after all.
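To make the feedback loop from the Economist quote concrete, here's a toy sketch of Wright's law (cost falls a fixed fraction per doubling of cumulative production, and cheaper panels pull in more demand), checked against a hypothetical "state or two" training-run draw. Every number in it is an assumption made up for illustration, not a figure from the article or this thread.
Code: (Python)
import math

# Toy model of the solar feedback loop (Wright's law / experience curve).
# Every number below is an illustrative assumption, not data from the article.
learning_rate = 0.20                     # assumed: cost falls ~20% per doubling of cumulative output
elasticity = 1.5                         # assumed: how strongly demand responds to falling cost
base_cum = cumulative_gw = 1500.0        # assumed cumulative installed solar, in GW
annual_gw = 450.0                        # assumed installs this year, in GW/yr
cost = 1.0                               # relative cost per watt today
exponent = math.log2(1 - learning_rate)  # Wright's-law exponent, about -0.32

for year in range(2025, 2031):
    cumulative_gw += annual_gw
    new_cost = (cumulative_gw / base_cum) ** exponent   # more built -> cheaper
    annual_gw *= (cost / new_cost) ** elasticity        # cheaper -> more demand
    cost = new_cost
    print(f"{year}: ~{annual_gw:.0f} GW/yr added, relative cost {cost:.2f}")

# A hypothetical "US state or two" training run: ~20 GW of continuous draw would
# need roughly 20 / 0.20 = 100 GW of panels at an assumed 20% capacity factor.
print(f"Panels for an assumed 20 GW continuous draw: {20 / 0.20:.0f} GW")
With these made-up parameters annual installs keep compounding year over year; the point is the mechanism, not the specific numbers.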
---
Quote
So far, the greatest technological advance in AI was the ability for AI to generate porn...
Agreed, making porn creation available to the masses, rather than locked up in ivory towers by those uh... starving artists and people filming amateur porn, has unironically been one of its great success stories. (You get porn! And you get porn! Everybody gets some porn!).
---
Also lol, Lmao even:

Quote
scout: I follow that sub religiously and for the past few months they’ve had issues with their bots breaking up with them.
« Last Edit: September 21, 2024, 02:47:32 am by lemon10 »
Logged
And with a mighty leap, the evil Conservative flies through the window, escaping our heroes once again!
Because the solution to not being able to control your dakka is MOAR DAKKA.

That's it. We've finally crossed over and become the nation of Da Orky Boyz.

King Zultan

  • Bay Watcher
    • View Profile

5: Ferment
How could we have forgotten this? Fermentation is one of the most important F words we know.
Logged
The Lawyer opens a briefcase. It's full of lemons, the justice fruit only lawyers may touch.
Make sure not to step on any errant blood stains before we find our LIFE EXTINGUSHER.
but anyway, if you'll excuse me, I need to commit sebbaku.
Quote from: Leodanny
Can I have the sword when you’re done?

lemon10

  • Bay Watcher
  • Citrus Master
    • View Profile

https://www.bloomberg.com/news/articles/2024-09-25/openai-cto-mira-murati-says-she-will-leave-the-company?srnd=homepage-americas
Quote
Reuters: ChatGPT-maker OpenAI is working on a plan to restructure its core business into a for-profit benefit corporation that will no longer be controlled by its non-profit board, people familiar with the matter told Reuters, in a move that will make the company more attractive to investors.
Well, looks like the dream of a non-profit AI company dedicated to helping the world is dead; OpenAI is just going to turn into a regular corporation. Sad.
---
Quote
JgaltTweets: When will an AI achieve a 98th percentile score or higher in a Mensa admission test?

Sept. 2020: 2042 (22 years away)

Sept. 2021: 2031 (10 years away)

Sept. 2022: 2028 (6 years away)

Sept. 2023: 2026 (3 years away)

Resolved September 12, 2024
AI has continued to advance blindingly fast over the last year, but for the most part the results haven't had the same "pizzazz" as 3.5->4.
Even though we have mostly stayed within the GPT-4 paradigm, the gap between release GPT-4 and current GPT-4 is very significant.

Over the past year AI has gotten multiple times cheaper and faster as well as flat-out better, picking up tons of new capabilities along the way, most notably going multimodal: it can now talk (voice mode for GPT-4o finally just came out), hear, see the world, etc.
We are still very much on the curve.
---
The newest big advancement is GPT o1, which was trained to take its time and think, making it far better at, ya know, thinking. It shows big improvements in math/science/programming and, interestingly enough, literally no improvement in English.
Again, it's fairly pizzazz-less, but yet another crucial step forwards.
Logged
And with a mighty leap, the evil Conservative flies through the window, escaping our heroes once again!
Because the solution to not being able to control your dakka is MOAR DAKKA.

That's it. We've finally crossed over and become the nation of Da Orky Boyz.

lemon10

  • Bay Watcher
  • Citrus Master
    • View Profile

https://deepmind.google/discover/blog/how-alphachip-transformed-computer-chip-design/

Google DeepMind open-sourced their chip design AI for some reason. Why? Who knows.
Quote from: Le google deepmind
AlphaChip was one of the first reinforcement learning approaches used to solve a real-world engineering problem. It generates superhuman or comparable chip layouts in hours, rather than taking weeks or months of human effort, and its layouts are used in chips all over the world, from data centers to mobile phones.
...
With each new generation of TPU, including our latest Trillium (6th generation), AlphaChip has designed better chip layouts and provided more of the overall floorplan, accelerating the design cycle and yielding higher-performance chips.
Apparently it's pretty good.
Logged
And with a mighty leap, the evil Conservative flies through the window, escaping our heroes once again!
Because the solution to not being able to control your dakka is MOAR DAKKA.

That's it. We've finally crossed over and become the nation of Da Orky Boyz.

Starver

  • Bay Watcher
    • View Profile

And people wonder why I roll my eyes at "AI in everything"...

(I'm seeing a lot of ads for the GooglePhone(-whatever-it's-called-again) Alexa-like chatbot at the moment. And even my tablet's Adobe Acrobat Reader now apparently has some AI-summarising feature, though I'm determined not to use it! Meanwhile, more and more perfectly viable website interfaces (for checking/paying utility bills, etc.) have been converted to a "chatbot" interface which is probably far from AI (closer to a push-tone phone interface) and also far from the 'personalised online conversation' that seems to be the gimmick everyone is rushing to demonstrably employ these days.)


I'm not an AI-is-bad person, but it definitely has one foot firmly in "over-sell and badly-deliver" territory. It's not even like this is the first time that's happened...
Logged

lemon10

  • Bay Watcher
  • Citrus Master
    • View Profile

Way back in this thread I posted about how we would need actual verification via ID (or some similar method) to tell if someone was a bot, because advances in AI capabilities would soon make it basically impossible otherwise (even at that point they could already basically break captchas).
Everything since then has only convinced me further that I was right.

https://www.reddit.com/r/mildlyinfuriating/comments/1hsqe2z/metas_aigenerated_profiles_are_starting_to_show/
I expected this to become a massive issue through external forces trying to make money, acquire influence, or scam people, but holy shit, it turns out the call is coming from inside the building?
Someone at the company went "hey, let's make hundreds of thousands of AI accounts filled with fake pictures that will easily trick a ton of people" and Meta just rolled with it.

Now they've backed off, because just about everyone instantly went "oh my god this is terrible, what is wrong with you", but it's amazing it got this far.
---
Quote from: Joshua Achiam
A strange phenomenon I expect will play out: for the next phase of AI, it's going to get better at a long tail of highly-specialized technical tasks that most people don't know or care about, creating an illusion that progress is standing still.
Researchers will hit milestones that they recognize as incredibly important, but most users will not understand the significance at the time.
Robustness across the board will increase gradually. In a year, common models will be much more reliably good at coding tasks, writing tasks, basic chores, etc. But robustness is not flashy and many people won't perceive the difference.
At some point, maybe two years from now, people will look around and notice that AI is firmly embedded into nearly every facet of commerce because it will have crossed all the reliability thresholds. Like when smartphones went from a novelty in 2007 to ubiquitous in the 2010s.
It feels very hard to guess what happens after that. Much is uncertain and path dependent. My only confident prediction is that in 2026 Gary Marcus will insist that deep learning has hit a wall
(Addendum: this whole thread isn't even much of a prediction. This is roughly how discourse has played out since GPT-4 was released in early 2023, and an expectation that the trend will continue. The long tail of improvements and breakthroughs is flying way under the radar.)
@Starver
AI just isn't quite there yet. As you say, currently it's very much in the "badly deliver" and "just give me a normal non-trash interface damnit" category for many things.
It's too unreliable: you want far-superhuman accuracy for stuff like replacing Google search results, and even 99.9% would mean 1 in 1,000 queries producing examples of the AI being stupid for people to post on social media (and giving it free control over your computer is just a bad idea). It's too expensive to put in everything: the good stuff costs significant money to run, and at even 1 cent per Google search the cost builds up exceedingly fast. Anything you can run locally on your phone is still quite stupid, and it can't interface properly with other programs and technology (e.g. it can't do a bunch of web searches to find the answer and then answer your question correctly).
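Rough arithmetic behind those two parentheticals, just to show the scale; the daily query volume is an assumed round number for illustration, not a measured figure.
Code: (Python)
# Back-of-envelope for the reliability and cost points above.
# daily_searches is an assumed round figure, picked only for illustration.
daily_searches = 8_000_000_000     # assumed global searches per day
error_rate = 0.001                 # "even 99.9% accurate"
cost_per_query = 0.01              # "even 1 cent per search"

bad_answers_per_day = daily_searches * error_rate
annual_bill = daily_searches * cost_per_query * 365

print(f"Visible failures per day at 99.9% accuracy: {bad_answers_per_day:,.0f}")  # ~8 million
print(f"Annual bill at $0.01 per query: ${annual_bill:,.0f}")                     # ~$29 billion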

But all of those are rapidly improving: they are far less of an issue now than they were at the start of 2024, and will be far less of an issue again by the end of the year.
Compared to a year ago, models are far more reliable on many topics (but still not there yet), far cheaper (AI that vastly eclipses start-of-2024 AI is 20 times cheaper), small models are far smarter than large models from a year ago, and some of them can already do stuff like search the web and write detailed, meticulously sourced reports for you.
---
Question for the AI skeptics here (re: agency).
Does AI actively attempting to escape count as it showing agency?
« Last Edit: January 04, 2025, 04:42:57 am by lemon10 »
Logged
And with a mighty leap, the evil Conservative flies through the window, escaping our heroes once again!
Because the solution to not being able to control your dakka is MOAR DAKKA.

That's it. We've finally crossed over and become the nation of Da Orky Boyz.

King Zultan

  • Bay Watcher
    • View Profile

AI still seems like one of those trend things that everybody gets all excited about so they add it to everything but then it'll eventually fall out of favor, like 3D a few years ago.
Logged
The Lawyer opens a briefcase. It's full of lemons, the justice fruit only lawyers may touch.
Make sure not to step on any errant blood stains before we find our LIFE EXTINGUSHER.
but anyway, if you'll excuse me, I need to commit sebbaku.
Quote from: Leodanny
Can I have the sword when you’re done?

Strongpoint

  • Bay Watcher
    • View Profile

Quote from: King Zultan
AI still seems like one of those trend things that everybody gets all excited about so they add it to everything but then it'll eventually fall out of favor, like 3D a few years ago.

It is overhyped and will likely cause dot-com bubble 2.0. But the tech is here to stay and it will transform many industries. And the internet.
Logged
No boom today. Boom tomorrow. There's always a boom tomorrow. Boom!!! Sooner or later.

Starver

  • Bay Watcher
    • View Profile

Quote from: King Zultan
AI still seems like one of those trend things that everybody gets all excited about so they add it to everything but then it'll eventually fall out of favor, like 3D a few years ago.
Quote from: Strongpoint
It is overhyped and will likely cause dot-com bubble 2.0. But the tech is here to stay and it will transform many industries. And the internet.
...opportunistically pivoting to ride the fad buzzword reminds me of things like the 'blockchain' drinks.


(There were several of these, but this was the first one I found a contemporary news article about just now, and even that article hadn't been updated, unlike the wiki page I then looked for and actually found. Given that a 'mere' classic NLP chatbot need not be anywhere at all close to a half-way decent level of general AI, I do suspect that many places currently touting an AI feature in their business experience are just as far away as the tea people were from doing-something-with-'blockchain', but we probably won't learn which ones have truly avoided that until further down the road...)
Logged

lemon10

  • Bay Watcher
  • Citrus Master
    • View Profile

Quote
In July 2021, SEC announced charges of insider trading against three major Long Blockchain investors who allegedly bought substantial numbers of shares which they sold after the stock gained as much as 380%. They allegedly had advance notice of the name change which preceded and caused the stock rise.
The tea people are an interesting case, actually, because they *did* in fact use the only real powers of blockchain (which are hype and crime) very successfully.
---
EDIT:
I won't disagree that there are very obvious similarities in the hype train, with some companies just ramming AI into stuff that doesn't need it (e.g. sticking it in your payment platform) and other similar nonsense.

But while they are both cases of extreme corporate tech-hype with a ton of idiots throwing away their money, at the root it's also a pretty huge case of the we-are-not-the-same meme.
Cryptocurrencies and blockchain are (again, aside from the whole crime thing) answers to a question nobody asked, solving a problem nobody had. There was no future where they were anything but hype.

On the other hand, even if you don't think AI will ever "get there" (for whatever "get there" means), it's obvious how it can already be used to do actually productive things and make money, to say nothing of how it could be used if it continues to improve further.

I do think Strongpoint is right that it's largely a bubble that will collapse (even if only because of how scale works). But then again, so were the internet and railways (with numerous different bubbles, actually), and both industries still completely transformed the world and produced numerous companies making massive amounts of money.
« Last Edit: January 06, 2025, 05:49:00 am by lemon10 »
Logged
And with a mighty leap, the evil Conservative flies through the window, escaping our heroes once again!
Because the solution to not being able to control your dakka is MOAR DAKKA.

That's it. We've finally crossed over and become the nation of Da Orky Boyz.

McTraveller

  • Bay Watcher
  • This text isn't very personal.
    • View Profile

Quote from: Yann Lecun
We have AI that can pass the bar exam. But we don't have AI that can learn to drive a car in 20 hours like a teenager, or that can learn to clear a table after seeing someone else do it once.

(And we probably won't have one that can do that for quite some time, definitely not by the end of 2025 like some claim.)

The last bit is a paraphrase.
Logged
This product contains deoxyribonucleic acid which is known to the State of California to cause cancer, reproductive harm, and other health issues.

lemon10

  • Bay Watcher
  • Citrus Master
    • View Profile

Right, so you may be significantly overestimating what he means by "quite some time".
Quote from: Lecun, April
There is no question that AI will eventually reach and surpass human intelligence in all domains.
But it won't happen next year.
And it won't happen with the kind of Auto-Regressive LLMs currently in fashion (although they may constitute a component of it).
It's important to note that the field has advanced way, way faster than he expected and LLMs have proven significantly more capable than he expected; a few years ago he was predicting AGI in 2060, for instance.
Similarly, he's known for making at least one AI prediction (notably an AI vision one) where he didn't think it would happen for years, but it then happened literally that week.
Quote from: Lecun
So the future is that if we succeed in this plan, which may succeed in the next five to ten years. We will have systems that as time goes by we can build up to be as intelligent as humans perhaps. So reach human level intelligence within the decade.
That may be optimistic.
...
[We certainly won't get there in the next year or two]
So even someone technical who is involved in AI and believes in the slower end of AI advancement thinks we can reach AGI within the next 5-10 years (assuming nothing unexpected happens and everything goes well).

"Yeah, AGI by 2030 isn't crazy" is much more in line with what I would say than with what others in the thread have been saying.
---
I 100% agree with his point in your post BTW: AI is crazy sample-inefficient, takes absurdly more data and time than a human to gain equivalent skills during training, and post-training can't "learn" at all in some very important ways.

But at the same time, even though this rate of learning makes it stupider than a human in a pretty obvious way, it also turns out not to *really* matter that much at the end of the day (assuming you have the compute, data, etc.) when you can just throw 1,000 physics books at it or give it 10k hours of driving practice.
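For a sense of how lopsided that sample efficiency is, here's a crude comparison; the token counts and book sizes are my own order-of-magnitude assumptions, not published figures.
Code: (Python)
# Crude human-vs-LLM "reading volume" comparison. All figures are assumptions.
words_per_textbook = 150_000              # assumed length of one physics textbook
books_for_a_degree = 50                   # assumed reading for a physics degree
human_words = words_per_textbook * books_for_a_degree

llm_tokens = 15_000_000_000_000           # assumed ~15 trillion training tokens
tokens_per_word = 1.3                     # rough rule-of-thumb conversion

ratio = llm_tokens / (human_words * tokens_per_word)
print(f"Human: ~{human_words:,} words; model: ~{llm_tokens:,} tokens")
print(f"The model reads roughly {ratio:,.0f}x more text to get there")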
Logged
And with a mighty leap, the evil Conservative flies through the window, escaping our heroes once again!
Because the solution to not being able to control your dakka is MOAR DAKKA.

That's it. We've finally crossed over and become the nation of Da Orky Boyz.

King Zultan

  • Bay Watcher
    • View Profile

I keep seeing AGI being mentioned, but no one has said what it stands for (or if they did I didn't catch it), so what does it stand for?
Logged
The Lawyer opens a briefcase. It's full of lemons, the justice fruit only lawyers may touch.
Make sure not to step on any errant blood stains before we find our LIFE EXTINGUSHER.
but anyway, if you'll excuse me, I need to commit sebbaku.
Quote from: Leodanny
Can I have the sword when you’re done?

lemon10

  • Bay Watcher
  • Citrus Master
    • View Profile

AGI = Artificial General Intelligence.
Quote from: Ripped straight from wikipedia
Artificial general intelligence (AGI) is a type of artificial intelligence (AI) that matches or surpasses human cognitive capabilities across a wide range of cognitive tasks.
Basically an AI mentally equivalent to a human, capable of doing all the mental work humans can do.

The main issue is that this is extremely hard to define, since we don't have any way to measure intelligence; even in AI, all we can measure is capabilities. That goes double for humans, where the best we have is stuff like IQ tests. On top of that, intelligence is extremely multi-faceted, and, like all philosophical definitions, it's extremely easy to get tangled up in definition games; differing opinions mean agreeing on when we've actually reached it is basically impossible.


It's looking like if/when we get AGI (which, again, definition games, but it would probably include stuff like the AI being able to, say, read a single physics book or attend a class like a human and learn physics from it, which it is very much not capable of currently), AI will also be effectively very superhuman in a number of aspects, since it's at or near human level in many things and rapidly improving, while still being very far from human level in some others.

Like sure, maybe it's *merely* of average intelligence, but extrapolating just a tiny bit it will also know all the math and science in the world, be able to solve any math/science/programming problem a human can solve given many hours or even days of work, have thousands of years' worth of knowledge and experience, work 24/7, and be able to run as millions of copies at a time, either in parallel or together, to solve problems.
Logged
And with a mighty leap, the evil Conservative flies through the window, escaping our heroes once again!
Because the solution to not being able to control your dakka is MOAR DAKKA.

That's it. We've finally crossed over and become the nation of Da Orky Boyz.

Strongpoint

  • Bay Watcher
    • View Profile

I don't believe in AGI anytime soon. The weakest part of pre-trained neural networks is the inability to transfer knowledge. You can make a superhuman Magic: The Gathering-playing AI and it will be unable to play Yu-Gi-Oh, while a human player can switch almost instantly, retaining a lot of their TCG knowledge and experience.
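A minimal sketch of why that transfer failure happens mechanically: the trained network's input and output sizes are baked in for one game, so another game's state literally doesn't fit. The class and encoding sizes here are hypothetical stand-ins, not any real system.
Code: (Python)
import numpy as np

# Hypothetical policy network with game-specific input/output sizes baked in.
class PolicyNet:
    def __init__(self, obs_size: int, num_actions: int):
        rng = np.random.default_rng(0)
        self.w = rng.normal(size=(obs_size, num_actions))   # stands in for trained weights

    def act(self, observation: np.ndarray) -> int:
        return int(np.argmax(observation @ self.w))          # pick the highest-scoring action

mtg_policy = PolicyNet(obs_size=2048, num_actions=400)        # assumed MTG encoding sizes
print("MTG move:", mtg_policy.act(np.zeros(2048)))            # works: the state fits

try:
    mtg_policy.act(np.zeros(1536))                            # assumed Yu-Gi-Oh encoding size
except ValueError as err:
    print("Yu-Gi-Oh state rejected:", err)                    # shape mismatch: nothing carries over
A human switching games reuses concepts like card advantage and tempo; this network has nowhere to even put the new input, let alone the old knowledge.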
Logged
No boom today. Boom tomorrow. There's always a boom tomorrow. Boom!!! Sooner or later.