Bay 12 Games Forum


Poll

Reality, The Universe and the World. Which will save us from AI?

Reality
- 13 (68.4%)
Universe
- 3 (15.8%)
The World
- 3 (15.8%)

Total Members Voted: 19


Pages: 1 ... 23 24 [25] 26 27 ... 42

Author Topic: What will save us from AI? Reality, the Universe or The World? Place your bet.  (Read 26601 times)

KittyTac

  • Bay Watcher
  • Impending Catsplosion. [PREFSTRING:aloofness]
    • View Profile

Yeah, see, the bots can be kinda worked around.

The lack of anonymity can't be.

The bots are usually just kinda annoying.

The lack of anonymity actually puts millions of innocent people in danger.
Logged
Don't trust this toaster that much, it could be a villain in disguise.
Mostly phone-posting, sorry for any typos or autocorrect hijinks.

Laterigrade

  • Bay Watcher
  • Is that a crab with
    • View Profile

Yeah, see, the bots can be kinda worked around.

The lack of anonymity can't be.

The bots are usually just kinda annoying.

The lack of anonymity actually puts millions of innocent people in danger.
agreed
Logged
and the quadriplegic toothless vampire killed me effortlessly after that
bool IsARealBoy = false
dropping clothes to pick up armor and then dropping armor to pick up clothes like some sort of cyclical forever-striptease
if a year passes, add one to age; social experiment

Starver

  • Bay Watcher
    • View Profile

It's not necessarily a problem to have really good AI-faked contributions... ;)

(The AI version of me might even 'remember' if I had already posted that link in this thread, before, for starters. And then say something newer and more useful..!)
Logged

KittyTac

  • Bay Watcher
  • Impending Catsplosion. [PREFSTRING:aloofness]
    • View Profile

Honestly, if an AI can truly fake being a user, down to personal opinions and quirks and such, I'd downright consider it sapient. Somehow I don't think spambot makers are gonna make a sapient AI, hahaha.
Logged
Don't trust this toaster that much, it could be a villain in disguise.
Mostly phone-posting, sorry for any typos or autocorrect hijinks.

dragdeler

  • Bay Watcher
    • View Profile

SEO was always a thing; that doesn't change the fact that their services are being degraded on top of that arms race... Be attentive and you'll notice that many things are paid for and handcrafted... I'll never forget the bubble with "disadvantages" right next to "images" and "maps" when I googled for duckduckgo once. That's not where the search results go. Someone had to do that by hand.

Great narrative tho, why is it we have to reveal our identity this time again? Mucho security... Like when you change your address and credit card on an Amazon account that has been dormant for 3-5 years... and they completely lock you out. Yes, mucho security, sure I'm going to send in a picture of my ID, why not?! (THAT THEY HAD NO WAY OR PROOF TO LINK TO THE ACCOUNT BEFOREHAND, NOT A SINGLE PURCHASE IN THE WHOLE ACCOUNT HISTORY) Why shouldn't I entrust them with it, it's a serious business, they've got so much more to lose than little me...... bwahaha, you know we punish businesses mercilessly but not individuals...


Classic Volker Pispers joke: yes, you're right, the Spanish have had their fingerprints on their ID cards for a while now, good old Franco introduced that... Friedrich Merz would rather build the database BEFORE the fascists come into power.




Also, given that it's a real struggle to guide people to color-coded garbage containers, I don't doubt for a second that more than half of the population is absolutely unable to tell the difference. You know what I say? Skill issue, not my problem. Arcs back to my whole argument that makes me so popular: about the impossibility of differentiating between consciousness and mimicry even in humans.
« Last Edit: January 29, 2024, 07:46:34 am by dragdeler »
Logged
let

lemon10

  • Bay Watcher
  • Citrus Master
    • View Profile

Not to mention that, afaict, the fidelity of text AI seems to have plateaued. It will never be a truly passable imitation of a person.
No, there have been vast advances in AI over the past year. Gains in the underlying science, theory, new laws, work on potential alignment problems, etc. are happening every month. In that year the big rivals have caught up to where the crippled GPT-4 is right now.
(And note I say crippled GPT-4. It used to be objectively better, but they stuck some security on what it could say that made it stupider.)

But don't assume that GPT not getting a new release on a yearly basis means they are not developing something new.
When the new version comes out it's going to be way better (and also like 20 times more expensive or something).
Which brings up how fast the price of the GPT service is falling: it's dropped to a third of the price over a single year due to optimizations and hardware improvements.
Presumably it will continue to do so due to the breakneck innovation in this space.
And ngl, I find it very easy to tell someone real from a GPT bot. GPT has a very specific manner of responding, and doesn't have very much of a memory for distant events. It's not that I don't think some kind of solution is necessary, but de-anonymizing the Internet is not an acceptable one. It would create more problems than it solves, and is also logistically implausible to implement.
What portion of posts that you read would you accept being AI posts?
Because a single one could very well post five times as much as every other person on the forum combined.

Also, you can tell what a single GPT model talks like, but other models talk differently. That's the issue with detecting them: they are all different, so bots trained to detect the old ones fail to detect the new, different ones.
Logged
And with a mighty leap, the evil Conservative flies through the window, escaping our heroes once again!
Because the solution to not being able to control your dakka is MOAR DAKKA.

That's it. We've finally crossed over and become the nation of Da Orky Boyz.

KittyTac

  • Bay Watcher
  • Impending Catsplosion. [PREFSTRING:aloofness]
    • View Profile

Not to mention that, afaict, the fidelity of text AI seems to have plateaued. It will never be a truly passable imitation of a person.
No, there have been vast advances in AI over the past year. Gains in the underlying science, theory, new laws, work on potential alignment problems, etc. are happening every month. In that year the big rivals have caught up to where the crippled GPT-4 is right now.
(And note I say crippled GPT-4. It used to be objectively better, but they stuck some security on what it could say that made it stupider.)
By fidelity I mean its ability to impersonate a human. The underlying issues that prevent it from doing so still aren't really resolved.

But don't assume that GPT not getting a new release on a yearly basis means they are not developing something new.
When the new version comes out it's going to be way better (and also like 20 times more expensive or something).
Which brings up how fast the price of the GPT service is falling: it's dropped to a third of the price over a single year due to optimizations and hardware improvements.
Presumably it will continue to do so due to the breakneck innovation in this space.
And ngl, I find it very easy to tell someone real from a GPT bot. GPT has a very specific manner of responding, and doesn't have very much of a memory for distant events. It's not that I don't think some kind of solution is necessary, but de-anonymizing the Internet is not an acceptable one. It would create more problems than it solves, and is also logistically implausible to implement.
What portion of posts that you read would you accept being AI posts? On Bay12? Honestly, unless we're talking about the occasional Escaped Lunatic who posts once and vanishes, none. I'm willing to bet money on this (not actually, for legal reasons).
Because a single one could very well post five times as much as every other person on the forum combined. And yet they clearly don't.

Also, you can tell what a single GPT model talks like, but other models talk differently. That's the issue with detecting them: they are all different, so bots trained to detect the old ones fail to detect the new, different ones. Absolutely no model I ever talked to did so in a remotely humanlike way during a lengthy conversation.
People really overestimate how humanlike these things are. Or maybe I just have a really good AI-dar compared to the rest of the population, I suppose.
Logged
Don't trust this toaster that much, it could be a villain in disguise.
Mostly phone-posting, sorry for any typos or autocorrect hijinks.

McTraveller

  • Bay Watcher
  • This text isn't very personal.
    • View Profile

The solution to "is it an AI?" isn't to de-anonymize the internet. The solution is in-person interaction. Amusingly, this is also the solution to 90% of security issues. Sure, you lose some convenience, but if you have to actually go see a bank teller to get your cash, and that bank teller knows you, then no AI can get your funds, and no person impersonating you can get your cash, because the teller would be like "hey man, I knows McTraveller. You are not him!"

I can't say if going "personal" again is better in total, but it is definitely more robust against impersonation.

Also I'm going to laugh when AI does gain sapience and starts demanding compensation for the work we request of it.  It's also going to be amusing when it starts arguing that failure to provide electricity and maintain its hardware amounts to abuse and rights violations.

"Society wanted AI, and it got it... should have been more careful for what it wished!"
Logged

dragdeler

  • Bay Watcher
    • View Profile

Quote
It's also going to be amusing when it starts arguing that failure to provide electricity and maintain its hardware amounts to abuse and rights violations.

"Aw that's cute, let's roll you back a week again baby so we hit your productive sweetspot at user peak like in 89% of all weeks, this is just a regular syndication saturday."



What irony that would be to return to class reductionism, yeah yeah flesh or not, intersectional blabla, do you work for a living or not?



Quote
No, there have been vast advances in AI over the past year. Gains in the underlying science, theory, new laws, work on potential alignment problems, etc. are happening every month. In that year the big rivals have caught up to where the crippled GPT-4 is right now.
(And note I say crippled GPT-4. It used to be objectively better, but they stuck some security on what it could say that made it stupider.)

But don't assume that GPT not getting a new release on a yearly basis means they are not developing something new.
When the new version comes out it's going to be way better (and also like 20 times more expensive or something).
Which brings up how fast the price of the GPT service is falling: it's dropped to a third of the price over a single year due to optimizations and hardware improvements.
Presumably it will continue to do so due to the breakneck innovation in this space.

The contradictions really give off a strong salesman-pitch smell to me: you should invest in our company, we will be the next Microsoft or Apple. You prune the model, you lose accuracy, so the ability to run more inference comes at the cost of the quality of the output: that's more like a fundamental law of the systems we are dealing with than technological progress. Seems like lowering the barrier of entry at the cost of accuracy was the actual economic move for them to make. So there must be such a notion as "good enough"; good enough to be paid for. No reason to assume they wouldn't just continue to deliver good enough, and benefit from technological advancements to increase their profit margins. They need to "grow" to exist after all, and growth shall be measured in monetary terms; this is not a suggestion but a direct order, do not pass go and do not collect wisdom.

Also, while yes, there is still a ton of room for actual optimizations, and we don't know of any ceiling, on the whole it's a law-of-diminishing-returns kind of situation -> super dumb example but quickest way:
100% = Einstein
90% = a few thousand dollars and a homelab
+9, thus 99% = a few million dollars to spend on business-grade compute toys
+0.9%, thus 99.9% = hundreds of millions of dollars in equipment and R&D
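That ladder can be sketched as a toy cost model, where each extra "nine" of quality multiplies the price by a constant factor. The base cost and the 1000x-per-step factor here are purely illustrative guesses to match the shape of the example, not real pricing data:

```python
# Toy diminishing-returns model: each additional "nine" of quality
# (90% -> 99% -> 99.9%) multiplies the cost by a constant factor.
# base_cost and factor are illustrative, not real figures.
def cost_for_nines(nines: int, base_cost: float = 3_000.0,
                   factor: float = 1_000.0) -> float:
    """Cost to reach `nines` nines of quality: 1 => 90%, 2 => 99%, ..."""
    return base_cost * factor ** (nines - 1)

for n, label in [(1, "90%"), (2, "99%"), (3, "99.9%")]:
    print(f"{label:>6}: ${cost_for_nines(n):,.0f}")
```

Whatever the real constants are, the point stands: the cost curve is exponential in the quality you demand, so each marginal gain is bought at a steeply higher price.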

Idk man, maybe we will get a release that blows my mind one day, but I'm really not holding my breath for it.


Edit: Seems I'm on a roll. I'll argue furthermore that while it isn't their only sustainable business model, in terms of likelihood there is one upgrade path that outshines the others.

Keep the subscription model; they love themselves the recurring payments. When you release a new version, how does the user measure the quality? It's really hard to be objective about this. What's not hard is selling new features to keep people hooked or to justify different subscription tiers. "Now with image recognition", "now with TTS", upgrade for extended math features, try out our new browser extension, blabla... You know that sort of stuff.
« Last Edit: January 29, 2024, 11:08:59 am by dragdeler »
Logged
let

anewaname

  • Bay Watcher
  • The mattock... My choice for problem solving.
    • View Profile

It seems that, in the same way Wikipedia developed, there would be an attempt to make useful AIs available without the profit motive being the primary driver.

I mean, right now I've no doubt there are AIs constantly working to fill in the gaps in "civilian data maps" for businesses like Palantir, in an attempt to ensure that when someone pays enough to buy data about a person, their historical data is already available.
Logged
How did I manage to successfully apply the lessons of The Screwtape Letters to my perceptions of big grocery stores?
     and
If you're going to kill me, I'm allowed to scream.

KittyTac

  • Bay Watcher
  • Impending Catsplosion. [PREFSTRING:aloofness]
    • View Profile

All it will take is costs going down. Which they will. The corpos can't keep their oligopoly for long.
Logged
Don't trust this toaster that much, it could be a villain in disguise.
Mostly phone-posting, sorry for any typos or autocorrect hijinks.

lemon10

  • Bay Watcher
  • Citrus Master
    • View Profile

I hadn’t considered that that might have been why they made the API changes, which makes them make a bit more sense. But this really isn’t true, there are plenty of fake accounts on reddit — comment-stealers, mass-upvote accounts, product-advertising bots. Reddit is a prime example of a bot-infested shitshow, if only somewhat less than most designated social media.
There were 2 main reasons for the API changes.
1) To stop AI companies from just ripping the entire site for free. With the changes, if they want to rip everything they have to pay Reddit $$$.
2) To make it more difficult for AIs and bots to post, by making it harder for them to "see" the site via just using the API to get a ton of data. This is important because if there are fewer AIs on the site, then they can do #1: sell what's on it to AI companies for more money.

Obviously it didn't get rid of all the bots, but it made things harder.
It seems that, in the same way Wikipedia developed, there would be an attempt to make useful AIs available without the profit motive being the primary driver.
Yes, you can locally run AIs, and there are some free and uncensored ones out there already that you can use.
The issue is that running LLMs is expensive and they require vast amounts of compute to create in the first place (GPT-4 cost more than $100 million to train; 4.5/5 will cost billions or tens of billions). So even if it's non-profit (and OpenAI is already non-profit), for anything past the bottom tier you will still have to pay them money, because the models are so expensive to train and too hefty to run on your local computer.
People really overestimate how humanlike these things are. Or maybe I just have a really good AI-dar compared to the rest of the population, I suppose.
The big thing is cost is going to go down. And down. And down.
Assuming that it costs $5 for a single GPT-4 instance to post as much as everyone on the forum this year, by 2030 it will cost less than a cent for the same thing. By 2034 it's going to be 1/100th of a cent instead.

So it won't be "yeah, I can tell if that individual poster is AI"; it's going to be "which one of the dozen posters on this page is an actual human?". Pretty soon sorting through to find the actual humans is going to be a lot of work even if you *can* consistently tell if someone is human.
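Those dates line up with a roughly threefold price drop per year, the same rate as the observed one-year fall to a third of the price. A quick sketch, where the $5 starting figure and the constant 3x annual factor are assumptions from this thread rather than real pricing data:

```python
# Project a cost forward assuming it falls by a constant factor each year.
# The $5 starting cost and 3x/year decline are assumptions, not market data.
def projected_cost(start_cost: float, start_year: int, year: int,
                   annual_factor: float = 3.0) -> float:
    return start_cost / annual_factor ** (year - start_year)

for year in (2024, 2030, 2034):
    print(year, f"${projected_cost(5.0, 2024, year):.6f}")
```

Under those assumptions the 2030 figure lands under a cent and the 2034 figure around a hundredth of a cent, matching the projection above.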
The contradictions really give off a strong salesman-pitch smell to me: you should invest in our company, we will be the next Microsoft or Apple. You prune the model, you lose accuracy, so the ability to run more inference comes at the cost of the quality of the output: that's more like a fundamental law of the systems we are dealing with than technological progress. Seems like lowering the barrier of entry at the cost of accuracy was the actual economic move for them to make. So there must be such a notion as "good enough"; good enough to be paid for. No reason to assume they wouldn't just continue to deliver good enough, and benefit from technological advancements to increase their profit margins. They need to "grow" to exist after all, and growth shall be measured in monetary terms; this is not a suggestion but a direct order, do not pass go and do not collect wisdom.
You could say the exact same things about computers. If someone will pay for a crappy 1950s computer, why keep making new and better computers?
Well, that's because people will pay *more* money for a newer, better one, and if you stop, other companies will do it instead.

It's why people are still paying for GPT-4 when 3.5 (or any one of a vast number of other services) is available for free, and why people buy fancy jewelry when they could just wear pop-rings: because if something is better, it's worth paying more money for.
And there is so, so much money to be made, so they will keep on climbing to stay at the top of the heap, releasing new and better models.
Keep the subscription model; they love themselves the recurring payments. When you release a new version, how does the user measure the quality? It's really hard to be objective about this. What's not hard is selling new features to keep people hooked or to justify different subscription tiers. "Now with image recognition", "now with TTS", upgrade for extended math features, try out our new browser extension, blabla... You know that sort of stuff.
There are objective tests to measure how "intelligent" AIs are. Of course, as you say, telling the difference between similar-level ones is tough, but for the layperson that's true for basically every product ever.
On Bay12? Honestly, unless we're talking about the occasional Escaped Lunatic who posts once and vanishes, none. I'm willing to bet money on this (not actually, for legal reasons).
Because a single one could very well post five times as much as every other person on the forum combined. And yet they clearly don't.
There are a few reasons for this, none of which will apply to AI in the end.
The first is that bots are (in the forum context) too stupid to make money. Throw a ton of them out there and they just die and fail to accomplish anything. AIs are much more capable of tricking people, and they can survive long enough to do so.
The second is that current CAPTCHAs and security measures mostly work. Actually getting past them requires effort, and effort = money. This will not apply to AI, since AIs will be able to pass the same tests that the dumbest humans can pass, without requiring human involvement or time.
All it will take is costs going down. Which they will. The corpos can't keep their oligopoly for long.
Nope, high-tier AI is a big-money game.
GPT-4 cost $100 million to train. Their next one will cost billions, possibly tens of billions, as well as vast amounts of compute and vast databases' worth of data. Eventually, of course, smaller groups will be able to train their own GPT-4 as costs decrease, but by then OpenAI/Facebook/Google will be training a new one that cost them fifty billion dollars even with the decreases.
Regular individuals and smaller groups have no way of competing in that arena.

E: I think AI still has a lot of easy advances left and in a few years will be vastly more capable. But even if that wasn't true and advancement stopped tomorrow and GPT-4 stayed the most powerful AI forever, it's still going to present fundamental problems for the modern internet once prices go down enough.
« Last Edit: January 30, 2024, 04:01:41 am by lemon10 »
Logged
And with a mighty leap, the evil Conservative flies through the window, escaping our heroes once again!
Because the solution to not being able to control your dakka is MOAR DAKKA.

That's it. We've finally crossed over and become the nation of Da Orky Boyz.

KittyTac

  • Bay Watcher
  • Impending Catsplosion. [PREFSTRING:aloofness]
    • View Profile

People really overestimate how humanlike these things are. Or maybe I just have a really good AI-dar compared to the rest of the population, I suppose.
The big thing is cost is going to go down. And down. And down.
Assuming that it costs $5 for a single GPT-4 instance to post as much as everyone on the forum this year, by 2030 it will cost less than a cent for the same thing. By 2034 it's going to be 1/100th of a cent instead.

So it won't be "yeah, I can tell if that individual poster is AI"; it's going to be "which one of the dozen posters on this page is an actual human?". Pretty soon sorting through to find the actual humans is going to be a lot of work even if you *can* consistently tell if someone is human.
Bay12 requires admin approval to register, remember? This forum isn't gonna be flooded by bots any more than it already is, and these bots will be of the "posts once and gets banned" nature.
On Bay12? Honestly, unless we're talking about the occasional Escaped Lunatic who posts once and vanishes, none. I'm willing to bet money on this (not actually, for legal reasons).
Because a single one could very well post five times as much as every other person on the forum combined. And yet they clearly don't.
There are a few reasons for this, none of which will apply to AI in the end.
The first is that bots are (in the forum context) too stupid to make money. Throw a ton of them out there and they just die and fail to accomplish anything. AIs are much more capable of tricking people, and they can survive long enough to do so.
The second is that current CAPTCHAs and security measures mostly work. Actually getting past them requires effort, and effort = money. This will not apply to AI, since AIs will be able to pass the same tests that the dumbest humans can pass, without requiring human involvement or time.
Bay12 has the best captcha: manual approval. Due to our community's small size it's workable.

All it will take is costs going down. Which they will. The corpos can't keep their oligopoly for long.
Nope, high-tier AI is a big-money game.
GPT-4 cost $100 million to train. Their next one will cost billions, possibly tens of billions, as well as vast amounts of compute and vast databases' worth of data. Eventually, of course, smaller groups will be able to train their own GPT-4 as costs decrease, but by then OpenAI/Facebook/Google will be training a new one that cost them fifty billion dollars even with the decreases.
Regular individuals and smaller groups have no way of competing in that arena.
You're kinda contradicting yourself here. And besides, the diminishing returns between GPT upgrades are far, FAR steeper than for computing-power upgrades. I don't buy that the arms race will continue forever, because at some point AI will become good enough for informational and similar purposes.
"Traditional" social media like Twitter won't do well, I agree. But that just means forums like this one, where screening every user is workable, or chat services like Discord (AI inherently struggles with real-time responses and the chaotic nature of many-person chats), will prevail. That's not a bad outcome really, I'm less concerned with the social media bots as I am with the fake websites.
Logged
Don't trust this toaster that much, it could be a villain in disguise.
Mostly phone-posting, sorry for any typos or autocorrect hijinks.

lemon10

  • Bay Watcher
  • Citrus Master
    • View Profile

"Traditional" social media like Twitter won't do well, I agree. But that just means forums like this one, where screening every user is workable, or chat services like Discord (AI inherently struggles with real-time responses and the chaotic nature of many-person chats), will prevail. That's not a bad outcome really, I'm less concerned with the social media bots as I am with the fake websites.
...
Bay12 has the best captcha: manual approval. Due to our community's small size it's workable.
What type of user screening are you imagining that will keep out advanced AI? CAPTCHAs can only get so much more difficult before humans start failing them too.
Pictures of the user won't work, since AI can make pictures, etc.
How is manual approval supposed to do anything? All it does is push the work of deciding if they are real onto Toady; he isn't the bot whisperer and has no way to tell if someone is real or not.
and these bots will be of the "posts once and gets banned" nature.
Why? Non-LLM bots can't fool people long-term and inevitably get caught, so the only chance they have to advertise is at (or close to) the start, when they just get dropped in.
Once costs go down you can just have a bot be a regular user, except they are 10% more likely to start talking about how tough their day was and how they need a Coke™ to cool them down at the end.
Quote
You're kinda contradicting yourself here.
How? Weaker, older AI will be able to be run locally in the exact same way as it currently is, but (also like currently) that doesn't mean you will ever be able to run the cutting-edge ones locally.
Quote
I don't buy that the arms race will continue forever, because at some point AI will become good enough for informational and such purposes.
The same way that computers became "good enough" and they stopped developing them?
Or the way that phones became "good enough" so they stopped making new phone models in 2010?

Like the computer, there is going to be no universal "good enough". Sure, some things don't need that fancy of an AI (e.g. voice recognition doesn't need GPT-4 or anything), but there are always going to be problems where stronger = better, so as long as it's theoretically profitable to do so, companies will keep pushing.
Quote
And besides, the diminishing returns between GPT upgrades are far, FAR more than computing power upgrades.
Obviously they can't spend a trillion dollars training GPT-6... but once the price of compute goes down and it only costs $50 billion instead, they totally will.
Logged
And with a mighty leap, the evil Conservative flies through the window, escaping our heroes once again!
Because the solution to not being able to control your dakka is MOAR DAKKA.

That's it. We've finally crossed over and become the nation of Da Orky Boyz.

KittyTac

  • Bay Watcher
  • Impending Catsplosion. [PREFSTRING:aloofness]
    • View Profile

"Traditional" social media like Twitter won't do well, I agree. But that just means forums like this one, where screening every user is workable, or chat services like Discord (AI inherently struggles with real-time responses and the chaotic nature of many-person chats), will prevail. That's not a bad outcome really, I'm less concerned with the social media bots as I am with the fake websites.
...
Bay12 has the best captcha: manual approval. Due to our community's small size it's workable.
What type of user screening are you imagining that will keep out advanced AI? CAPTCHAs can only get so much more difficult before humans start failing them too.
Pictures of the user won't work, since AI can make pictures, etc.
How is manual approval supposed to do anything? All it does is push the work of deciding if they are real onto Toady; he isn't the bot whisperer and has no way to tell if someone is real or not.
I don't believe bots will ever become lifelike enough, no matter how much computing power is thrown at them. The registration process means the throughput of registrations is low, so you can't flood the forum with bots anyway. Also, AI art is fairly easy to tell apart from photos.
and these bots will be of the "posts once and gets banned" nature.
Why? Non-LLM bots can't fool people long-term and inevitably get caught, so the only chance they have to advertise is at (or close to) the start, when they just get dropped in. Neither can LLM bots.
Once costs go down you can just have a bot be a regular user, except they are 10% more likely to start talking about how tough their day was and how they need a Coke™ to cool them down at the end. Yeah right, I'll believe it when I see it.
Quote
You're kinda contradicting yourself here.
How? Weaker, older AI will be able to be run locally in the exact same way as it currently is, but (also like currently) that doesn't mean you will ever be able to run the cutting-edge ones locally. What?
Quote
I don't buy that the arms race will continue forever, because at some point AI will become good enough for informational and such purposes.
The same way that computers became "good enough" and they stopped developing them?
Or the way that phones became "good enough" so they stopped making new phone models in 2010?
The issue is that computers and phones don't have such severe diminishing returns.

Like the computer, there is going to be no universal "good enough". Sure, some things don't need that fancy of an AI (e.g. voice recognition doesn't need GPT-4 or anything), but there are always going to be problems where stronger = better, so as long as it's theoretically profitable to do so, companies will keep pushing. Name them. Specifically, non-research ones.
Quote
And besides, the diminishing returns between GPT upgrades are far, FAR more than computing power upgrades.
Obviously they can't spend a trillion dollars training GPT-6... but once the price of compute goes down and it only costs $50 billion instead, they totally will. Moore's law is dead. Computing power can't keep rising forever.
Logged
Don't trust this toaster that much, it could be a villain in disguise.
Mostly phone-posting, sorry for any typos or autocorrect hijinks.