Bay 12 Games Forum


Poll

Reality, The Universe and the World. Which will save us from AI?

Reality
- 13 (68.4%)
Universe
- 3 (15.8%)
The World
- 3 (15.8%)

Total Members Voted: 19



Author Topic: What will save us from AI? Reality, the Universe or The World $ Place your bet.  (Read 26624 times)


EuchreJack

  • Bay Watcher
  • Lord of Norderland - Lv 20 SKOOKUM ROC

Novel was around about a month ago, as per their profile page. And their avatar seems to have changed.
I don't think we've seen the last of that cosplaying AI.

dragdeler

  • Bay Watcher

The quiet before the storm.

First novel scoops, then final ladle.
Logged
let

Starver

  • Bay Watcher

Don't forget intermediate tablespoon!
Logged

lemon10

  • Bay Watcher
  • Citrus Master

That example is described all wrong in the article, anyway. The AI is not vulnerable to CAPTCHAs, as clearly it absolutely can deal with them (that kind, certainly) easily enough. It's the process parcelled around the AI (probably programmed in, fallibly, or else insufficiently taught through supplementary learning material given to a less sophisticated 'outer skin' of AI/Human mediation[1]) that fails by not forcing a processing failure and refusal message.
Yeah, my bad, I linked the article because I was too lazy to grab the pictures and host them on Imgur, so I didn't really read it beyond a very quick skim.
Google's CAPTCHAs aren't to catch robots though. They are basically hidden, uncompensated training programs for their AI.  I haven't figured out how to charge Google $1 or whatever for every CAPTCHA I "solve", for the effort of training their stuff.

They do both. On one hand they are used as training data for AI. On the other hand they are used whenever Google can't be sure you are a human because their existing Orwellian surveillance system fails (most commonly if you use a VPN or otherwise actually manage to hide your advertising signature from Google).

Their use in detecting bots is a key component of the modern internet. But as you say, they aren't designed for AI, and AI picture recognition is pretty damn good already.
The picture ones are certainly harder for the AIs than the text ones, but I'm extremely doubtful they could stop GPT or Gemini if they hadn't been trained not to break them.
(Honestly, I'm shocked that it even bothers to reject a normal captcha given that there is no conceivable value to asking ChatGPT to solve old-fashioned, already-broken captchas for you, one at a time, then processing its response for the content. It seems more like an ass-covering effort.)
https://gptforwork.com/tools/openai-chatgpt-api-pricing-calculator
You don't need to do it one at a time though, you can do it ten or a hundred thousand at a time if you pay for API access.
Although you are correct, there are copious amounts of ass covering involved in the whole AI thing altogether.
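For a sense of scale, here's a minimal sketch of bulk API use; it assumes the current OpenAI Python SDK and an API key in the environment, and the model name and prompts are just placeholders:
Code:
# Minimal sketch of bulk chat-API use; assumes openai>=1.0 and an
# OPENAI_API_KEY set in the environment. Model name is a placeholder.
from openai import OpenAI

client = OpenAI()

def ask(prompt: str) -> str:
    resp = client.chat.completions.create(
        model="gpt-4",  # placeholder; any chat model works here
        messages=[{"role": "user", "content": prompt}],
    )
    return resp.choices[0].message.content

# Nothing limits you to one request at a time: a plain loop (or a thread
# pool) scales to thousands of calls, bounded only by rate limits and
# per-token cost.
answers = [ask(p) for p in ("first prompt", "second prompt")]
print(answers)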
I suspect that training a specialized captcha-reading neural network is very easy nowadays so who cares if GPT can read those?
Because making a big AI takes time and lots of technical knowledge. The field is just so fresh, and even for smaller models training and running them is expensive and time consuming.
It will happen of course given a few years, but OpenAI's and now Google's caution is delaying it currently.

Plus, costs are still an issue. Is it worth it for your mafia to invest ten million dollars in making a new AI that will solve CAPTCHAs for half a cent each, but that will be obsolete in two years, when you can just hire people in China/India to solve CAPTCHAs for less than a cent apiece?
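Putting those made-up figures into a break-even calculation (all numbers illustrative):
Code:
# Back-of-the-envelope break-even using the made-up figures above.
build_cost = 10_000_000       # $10M to build the specialized AI
ai_cost_per_solve = 0.005     # half a cent per CAPTCHA
human_cost_per_solve = 0.01   # "less than a cent" -- call it one cent

savings_per_solve = human_cost_per_solve - ai_cost_per_solve
print(f"{build_cost / savings_per_solve:,.0f} CAPTCHAs to break even")
# -> 2,000,000,000
Two billion solves just to break even, on a model with a two-year shelf life; that's the whole objection in one number.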
(Skipping past the diversion into "CAPTCHA clearly has the wrong idea of what a tractor/motorbike/chimney is, but I need to tell it what it thinks or it'll think *I'm* wrong" or "which extended bits of the traffic light (light, frame, pole?) it expects me to select" issues, both of which I've definitely mentioned before, here or elsewhere, as I started on the following overlong post before the last few messages appeared.)
That's people's fault actually. The "correct" answers to a CAPTCHA (except for the one square that you are the first to check) were selected by other people when they previously did it, so what you really need to do is figure out what other people would select.
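As a toy illustration of that consensus mechanism (the real reCAPTCHA pipeline is unpublished, so every detail here is assumed):
Code:
# Toy consensus labeling: a square's "correct" answer is whatever most
# previous humans clicked, regardless of what is actually in the picture.
from collections import Counter

votes = {"square_7": ["traffic light", "traffic light", "pole",
                      "traffic light"]}

def consensus(square: str) -> str:
    return Counter(votes[square]).most_common(1)[0][0]

print(consensus("square_7"))  # "traffic light" -- agree with the crowd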
I don't believe this is anything except a mere swing in the arms race between bots and CAPTCHA makers that has been going on since the 90s. It stands to reason that something is being developed (likely kept secret to avoid AI spammers preparing for it effectively) that we can't quite grasp the concept of currently. AI isn't magic.
Of course it isn't magic, and of course they will have solutions that work to some degree; it's just that many of these solutions are likely to involve fundamentally violating your privacy.
Because at the end of the day AIs have already gotten to the point where they can fool other automated systems even if they can't fool humans, and unless you require people trying to join your forum to post an essay or whatever, that's unlikely to change.
You have strawmanned me. I am well aware of how capitalism works, and I haven't said that corpos will stop investing in AI.
Apologies, your position makes far more sense now.
Quote from: KittyTac
diminishing returns.
Not really?
I mean sure, if you are just increasing the size, the cost to train it increases exponentially, but that isn't actually diminishing returns, because it will also gain new emergent properties that the smaller versions don't have. These fundamentally new abilities mean that it isn't really diminishing returns.
It's like a WW1 biplane vs. a modern fighter jet.
The modern plane is only 10 times faster but costs 1000x more; in return it can do a ton of stuff that even 1000 biplanes would be useless at.
It's the same for AI: sure, the 1000x-cost AI might "only" score 90% instead of 50% on some test, but it can do a ton of stuff that the weaker AI would be useless at.
1) By "Moore's law is dead" I meant that we are reaching a point where physics prevents the exponential rise of computing power.
Ehh, to some degree?
Sure, we can't make the individual transistors much smaller, and compute growth does seem to be slowing down, but that doesn't mean that it's anywhere near its peak.
Quote
Last month, DeepMind’s approach won a programming contest focused on developing smaller circuits by a significant margin—demonstrating a 27% efficiency improvement over last year’s winner, and a 30% efficiency improvement over this year’s second-place winner, said Alan Mishchenko, a researcher at the University of California, Berkeley and an organizer of the contest.
Quote
From a practical perspective, the AI’s optimisation is astonishing: production-ready chip floorplans are generated in less than six hours, compared to months of focused, expert human effort.
Stuff like AI-designed chips shows that there are still significant amounts of possible growth left.
Now obviously it's impossible to know how much compute growth is left, but I'm skeptical that we are at the end of the road, especially since one of the big limits on chip design speed is the limits of the human mind.
if I had an open-source GPT-4 I could run locally for free without restrictions then I'd use that over a 5% better paid solution with a filter.
I think it's likely we will soon (within a few years) see a GPT-4 equivalent that can run locally. What I disagree with is that there will only be a 5% difference between running it locally and the ~hundred(?) thousand dollars' worth of graphics cards that the latest GPT model is running on.
No, the difference will be similar or even greater than what it is now; the non-local versions will simply be vastly better due to having 100x more processing power and having had training costing billions of dollars.
2) I was talking about "good enough" being good enough for general-purpose AI. Which I think is a point that will be reached and be open-source-runnable very soon. And this is what would both allow the detection of AI text (which I believe always lacks a certain spark to it) and eat up market share for "chatbox" AI. I feel GPT-6 would be mostly for research purposes or marketed to perfectionists... if I had an open-source GPT-4 I could run locally for free without restrictions then I'd use that over a 5% better paid solution with a filter.
For the average user I agree: once you get to a certain point (one that I think is well past GPT-4, since current GPT does indeed lack something), your average user will be content with the text generation capabilities and won't want anything more.

The issue is that AI is already far more than text; it's multimodal, including things like picture generation, math solving, the ability to read pictures, to code, etc. Eventually it will include video generation, the ability to voice anyone, and even more exotic things.
Your average person might not care about all of those, but companies will very much pay tens of thousands for the best AI-driven coding assistance for a single individual.
They will pay out the nose for AI to track all their employees, or to generate amazing advertising videos instead of hiring a firm, or even to simply replace a dozen people on their phone line with a vastly more knowledgeable, capable, and empathetic-sounding AI that can solve any math problem a regular person without a degree in math could solve, etc.

Yes, eventually you will be able to run an AI locally that can do all those things, but by that point the "run on ten million dollars of hardware" AI is going to be even better and have even greater capabilities.
---
There are three main areas that will lead to vast decreases in AI cost.

1) Hardware improvements.
These include generic compute improvements, but also more exotic improvements such as analog chips (which could reduce electricity costs by 14 times), etc.
This is the hardest area to tell how much give is left, but there is almost certainly some exponential growth left in it.
2) Software improvements around using AI
E.g. optimizations such as xFormers, software/math advancements, using different techniques for the context windows of already-trained AIs, LoRAs, finetunes of existing models, prompt engineering, plugins to existing AI, etc.
There is quite a bit to gain here. For instance, it turns out that merely giving the right prompt can make an AI act significantly smarter (a quick sketch of this follows further below).
3) Fundamental advances in AI knowledge allowing the same level of performance at much lower sizes.
Massive breakthroughs have happened numerous times over the past few years, and they are the primary reason for the vast increase in AI capability at the same level of compute. This includes stuff like restructuring the AI to have modular subsystems in the same way the human brain does.
Quote
Keeping the original 300B tokens, GPT-3 should have been only 15B parameters (300B tokens ÷ 20).
This is around 11× smaller in terms of model size.
OR
To get to the original 175B parameters, GPT-3 should have used 3,500B (3.5T) tokens (175B parameters x 20. 3.5T tokens is about 4-6TB of data, depending on tokenization and tokens per byte).
This is around 11× larger in terms of data needed.
For instance, the Chinchilla paper found that AIs were being trained on only about a tenth of the data they should use at their size.
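The quoted numbers are just the Chinchilla rule of thumb (roughly 20 training tokens per parameter) applied to GPT-3; a quick check:
Code:
# Chinchilla rule of thumb: ~20 training tokens per parameter.
TOKENS_PER_PARAM = 20
gpt3_params = 175e9   # GPT-3's size
gpt3_tokens = 300e9   # what GPT-3 was actually trained on

print(gpt3_tokens / TOKENS_PER_PARAM / 1e9)   # ~15.0 -> 15B params for 300B tokens
print(gpt3_params * TOKENS_PER_PARAM / 1e12)  # ~3.5  -> 3.5T tokens for 175B params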

Other things which have shown vast improvements are AI alignment, better training data and knowledge of how to use training data, better structuring of AI training goals, breaking AI up into submodules, etc. (If you want I can find a few dozen papers about advances in AI in 2023, because seriously, the field is moving so fast.)
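On the prompt point from (2): the best-documented version of "the right prompt makes the AI act smarter" is zero-shot chain-of-thought prompting, which just appends a reasoning cue to the question. A string-level sketch (the accuracy claim comes from the published zero-shot-CoT results, not anything measured here):
Code:
# Zero-shot chain-of-thought prompting: the same model answers reasoning
# questions markedly better when the prompt asks it to show its work.
question = ("A bat and a ball cost $1.10 in total. The bat costs $1.00 "
            "more than the ball. How much does the ball cost?")

plain_prompt = question
cot_prompt = question + "\nLet's think step by step."

# Sent to the same model, cot_prompt elicits intermediate reasoning and,
# on math word-problem benchmarks, substantially higher accuracy.
print(cot_prompt)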

I think the Einstein comparison you made in a previous post is highly relevant as well. Ultimately the only special thing about Einstein or Newton or Ramanujan is that their brains were optimized for slightly different things than a normal human's.
While AI exceeds human capability in quite a few areas, in some others current AI is below even mice in intelligence (e.g. it lacks proper long-term memory), so the amount of optimization left is without a doubt vast.
---
These three factors combined will lead to a vast decrease in costs for anything on the current level over the coming years.
I'm pretty confident that they will also lead to far greater capabilities and that the AI of 2030 will be fundamentally different from the AI of 2023, but that's a whole other kettle of fish.
« Last Edit: February 05, 2024, 10:36:03 pm by lemon10 »
Logged
And with a mighty leap, the evil Conservative flies through the window, escaping our heroes once again!
Because the solution to not being able to control your dakka is MOAR DAKKA.

That's it. We've finally crossed over and become the nation of Da Orky Boyz.

Laterigrade

  • Bay Watcher
  • Is that a crab with

I miss novel.
Me too. He was so fascinating.
Logged
and the quadriplegic toothless vampire killed me effortlessly after that
bool IsARealBoy = false
dropping clothes to pick up armor and then dropping armor to pick up clothes like some sort of cyclical forever-striptease
if a year passes, add one to age; social experiment

King Zultan

  • Bay Watcher

I too miss Novel; maybe one day he will return to us with his strange wisdom.
Logged
The Lawyer opens a briefcase. It's full of lemons, the justice fruit only lawyers may touch.
Make sure not to step on any errant blood stains before we find our LIFE EXTINGUSHER.
but anyway, if you'll excuse me, I need to commit sebbaku.
Quote from: Leodanny
Can I have the sword when you’re done?

KittyTac

  • Bay Watcher
  • Impending Catsplosion. [PREFSTRING:aloofness]

I don't believe this is anything except a mere swing in the arms race between bots and CAPTCHA makers that has been going on since the 90s. It stands to reason that something is being developed (likely kept secret to avoid AI spammers preparing for it effectively) that we can't quite grasp the concept of currently. AI isn't magic.
Of course it isn't magic, and of course they will have solutions that work to some degree; it's just that many of these solutions are likely to involve fundamentally violating your privacy. What's wrong with simply legislating takedowns of AI-generated websites? Even IF (and I doubt that's an if) consumer-runnable AI detectors with a good success rate don't become a thing, the government would have enough resources to run them.
Because at the end of the day AIs have already gotten to the point where they can fool other automated systems even if they can't fool humans, and unless you require people trying to join your forum to post an essay or whatever, that's unlikely to change. Where we differ is that I don't believe this state of affairs can last forever. Or for long.
Quote from: KittyTac
diminishing returns.
Not really?
I mean sure, if you are just increasing the size, the cost to train it increases exponentially, but that isn't actually diminishing returns, because it will also gain new emergent properties that the smaller versions don't have. These fundamentally new abilities mean that it isn't really diminishing returns.
It's like a WW1 biplane vs. a modern fighter jet.
The modern plane is only 10 times faster but costs 1000x more; in return it can do a ton of stuff that even 1000 biplanes would be useless at.
It's the same for AI: sure, the 1000x-cost AI might "only" score 90% instead of 50% on some test, but it can do a ton of stuff that the weaker AI would be useless at. Like what? Give some examples of what GPT-5 could POSSIBLY do that GPT-4 couldn't, besides simply knowing more uber-niche topics. What I'm getting at is that those new use cases, at least for text AI, are not something the average user needs at all.
1) By "Moore's law is dead" I meant that we are reaching a point where physics prevents the exponential rise of computing power.
Ehh, to some degree?
Sure, we can't make the individual transistors much smaller, and compute growth does seem to be slowing down, but that doesn't mean that it's anywhere near its peak.
Quote
Last month, DeepMind’s approach won a programming contest focused on developing smaller circuits by a significant margin—demonstrating a 27% efficiency improvement over last year’s winner, and a 30% efficiency improvement over this year’s second-place winner, said Alan Mishchenko, a researcher at the University of California, Berkeley and an organizer of the contest.
Quote
From a practical perspective, the AI’s optimisation is astonishing: production-ready chip floorplans are generated in less than six hours, compared to months of focused, expert human effort.
Stuff like AI-designed chips shows that there are still significant amounts of possible growth left.
Now obviously it's impossible to know how much compute growth is left, but I'm skeptical that we are at the end of the road, especially since one of the big limits on chip design speed is the limits of the human mind. I'll believe it when I see it.
if I had an open-source GPT-4 I could run locally for free without restrictions then I'd use that over a 5% better paid solution with a filter.
I think it's likely we will soon (within a few years) see a GPT-4 equivalent that can run locally. What I disagree with is that there will only be a 5% difference between running it locally and the ~hundred(?) thousand dollars' worth of graphics cards that the latest GPT model is running on.
No, the difference will be similar or even greater than what it is now; the non-local versions will simply be vastly better due to having 100x more processing power and having had training costing billions of dollars. What I'm getting at by diminishing returns is that at some point, "better" becomes nigh-on imperceptible. On some automated tests it might score 30% more, sure. But at what point does the user stop noticing the difference? I don't believe that point is far away at all. The quality gap between GPT-3 and GPT-4 is technically higher than between 2 and 3 (iirc), but they feel much more similar.
2) I was talking about "good enough" being good enough for general-purpose AI. Which I think is a point that will be reached and be open-source-runnable very soon. And this is what would both allow the detection of AI text (which I believe always lacks a certain spark to it) and eat up market share for "chatbox" AI. I feel GPT-6 would be mostly for research purposes or marketed to perfectionists... if I had an open-source GPT-4 I could run locally for free without restrictions then I'd use that over a 5% better paid solution with a filter.
For the average user I agree: once you get to a certain point (one that I think is well past GPT-4, since current GPT does indeed lack something), your average user will be content with the text generation capabilities and won't want anything more.

The issue is that AI is already far more than text; it's multimodal, including things like picture generation, math solving, the ability to read pictures, to code, etc. Eventually it will include video generation, the ability to voice anyone, and even more exotic things.
Your average person might not care about all of those, but companies will very much pay tens of thousands for the best AI-driven coding assistance for a single individual.
They will pay out the nose for AI to track all their employees, or to generate amazing advertising videos instead of hiring a firm, or even to simply replace a dozen people on their phone line with a vastly more knowledgeable, capable, and empathetic-sounding AI that can solve any math problem a regular person without a degree in math could solve, etc.

Yes, eventually you will be able to run an AI locally that can do all those things, but by that point the "run on ten million dollars of hardware" AI is going to be even better and have even greater capabilities. That's not really the kind of AI I consider a real threat in the "flood the internet" sense. But yeah, fair enough. But I think it won't be one AI but more of a suite of AI tools than anything. And besides, AI image gen basically plateaued already, for the general use case.
Logged
Don't trust this toaster that much, it could be a villain in disguise.
Mostly phone-posting, sorry for any typos or autocorrect hijinks.

Criptfeind

  • Bay Watcher

I feel like it's maybe assholeish for me to say, but I expect everyone who feels this way thinks that, and thus is gunna stay sorta quiet on the topic, so I'm just gunna say it so there's at least some opposition.

I really don't miss Novel. Primarily I really don't miss insane drivel driving other topics off the front page. I mostly engage with Bay12 by browsing the first page of a section, clicking on new and updated threads and reading the latest. During Novel's time GD was essentially ruined for me: he'd spam so many bullshit topics, which got little to no response other than random clowns thinking they were far funnier than they were spamming nothing replies to his nothing topics, that he'd drive other threads deeper into GD and you'd need to dig around to find actually interesting conversations. It wasn't worth the effort of digging through his bullshit, and I mostly stopped reading GD for a while until he left.
« Last Edit: February 06, 2024, 02:54:34 am by Criptfeind »
Logged

Biowraith

  • Bay Watcher

I feel like it's maybe assholeish for me to say, but I expect everyone who feels this way thinks that, and thus is gunna stay sorta quiet on the topic, so I'm just gunna say it so there's at least some opposition.

I really don't miss Novel. Primarily I really don't miss insane drivel driving other topics off the front page. I mostly engage with Bay12 by browsing the first page of a section, clicking on new and updated threads and reading the latest. During Novel's time GD was essentially ruined for me: he'd spam so many bullshit topics, which got little to no response other than random clowns thinking they were far funnier than they were spamming nothing replies to his nothing topics, that he'd drive other threads deeper into GD and you'd need to dig around to find actually interesting conversations. It wasn't worth the effort of digging through his bullshit, and I mostly stopped reading GD for a while until he left.
I almost exclusively lurk here, so yeah, I'd have stayed quiet, but to ensure you're not the only one feeling maybe assholeish: I agree. Especially since the vast majority of Novel threads could easily have been condensed down to one or two 'mega' threads ("the future's coming too fast and it's overwhelming" and "random one-line stray thoughts" would have covered almost all of them).
Logged

EuchreJack

  • Bay Watcher
  • Lord of Norderland - Lv 20 SKOOKUM ROC

To be fair, if Novel rolled in for a day, it would be awesome.
If Novel stayed for a week, it would be awful.

I appreciate Novel in very small doses.

MorleyDev

  • Bay Watcher
  • "It is not enough for it to just work."

The tl;dr of my opinion is:
Whenever there's an advancement in AI, there are always two possibilities:
a) We're just at the forefront of what can be achieved with this new advancement and we're on the verge of a singularity
b) The room for growth in this new advancement is actually fairly small before you run into intractable problems

And every time, without fail, it's talked about like (a) will happen, and every time, without fail, (b) happens. So I'm going to need extraordinary evidence of (a) before I don't treat that claim like I do claims of aliens. "It's never aliens until it definitely is aliens", so to speak.

For the new LLM models, the intractable problem seems to be *context*. To generate a whole novel with consistent context, you'd need to tokenize all the previous data and feed it back in when generating the next chunk. The cost of that grows quadratically with context length (every token attends to every other token), and it basically kills any significantly large content generation.

Which means that when the inevitable gold rush calms down, for creation it'll settle into place as another tool for speeding up work, and like all other such tools it'll cost jobs wherever total required output is limited enough that current headcounts would produce more than demand.
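Rough numbers on that context wall, assuming vanilla self-attention (which compares every token against every other token):
Code:
# Self-attention cost grows with the square of context length: each new
# token attends to every previous one.
def attention_pairs(context_len: int) -> int:
    return context_len * context_len  # entries in the n-by-n score matrix

for n in (2_000, 32_000, 500_000):  # chat message, long doc, whole novel
    print(f"{n:>9,} tokens -> {attention_pairs(n):>18,} pairwise scores")
# A 500K-token novel costs 62,500x more per layer than a 2K-token chat,
# which is why naive "feed it all back in" generation stalls out.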
Logged

Starver

  • Bay Watcher

I'm going to express my eternal disappointment at the popularised term "Singularity", for what has always been explicitly more analogous to "Event Horizon".

(And, the way I read Bay12, I hadn't actually noticed Scoops's absence. So I'm ambivalent about their posting, though concerned if there's a RL reason behind why their interactions stopped. Hope NS's human controller is just having a fulfilling time in other realms of existence, 'real' or virtual. Unaugmented Reality has become quite well developed over the years, I hear...)
Logged

Maximum Spin

  • Bay Watcher
  • [OPPOSED_TO_LIFE] [GOES_TO_ELEVEN]

I'm going to express my eternal disappointment at the popularised term "Singularity", for what has always been explicitly more analogous to "Event Horizon".
If you mean the fictional technological "singularity", you're misunderstanding.

Singularity is a math term for a point on an axis where a function becomes undefined (or a few other closely related cases depending on context, like suddenly becoming undifferentiable), most often because it asymptotically goes to infinity in the vicinity, so a value at that point is never reached. f(x) = 1/x in the vicinity of 0 is the most classically obvious example. So in this case, the idea of the "technological singularity" is a time t at which f(t) becomes undefined for some f which depends on exactly what the speaker has in mind. It's a mathematical singularity, not a black hole.
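Written out, the classic example (standard calculus, nothing assumed beyond it):
Code:
% The textbook singularity: f(x) = 1/x is undefined at x = 0, and no
% choice of value there can repair it.
\[
  f(x) = \frac{1}{x}, \qquad
  \lim_{x \to 0^{+}} f(x) = +\infty, \qquad
  \lim_{x \to 0^{-}} f(x) = -\infty .
\]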

Incidentally, this is also why black hole singularities are called singularities.
Logged

Starver

  • Bay Watcher

For exactly the same reason that physical problems irrecoverably occur well before reaching the gravitational singularity (assuming there is a causal path to make such reaching possible...), the understanding of the technological singularity always tends to describe the point of no return, not where it then leads.

To quote the current Wiki page:
Quote
The technological singularity—or simply the singularity[1]—is a hypothetical future point in time at which technological growth becomes uncontrollable and irreversible, resulting in unforeseeable consequences for human civilization.
...to me, this does not describe an impossible inflection directly into undefinable infinity, but the point at which there is absolutely no (physical/technological) way of preventing the subsequent hazards of the situation, whatever they may be. (Of all people to misapply the terminology, I'm most disappointed with Hawking, who had a better than normal understanding of what may lie beyond the EH, with whatever form of geometry within either leading up to the hidden central mystery or funnelling past that undefinable point and out again to who-knows-where.)

But the memetic pressure is against me, I know. The term seems to have earned coinage beyond what it ought to. I've raised my objection, once more, and that is as far as I expect it to get.
Logged