Bay 12 Games Forum


Poll

Reality, The Universe and the World. Which will save us from AI?

Reality: 13 (65%)
Universe: 4 (20%)
The World: 3 (15%)

Total Members Voted: 20


Pages: 1 ... 24 25 [26] 27 28 ... 50

Author Topic: What will save us from AI? Reality, the Universe or The World? Place your bet.  (Read 49678 times)

KittyTac

  • Bay Watcher
  • Impending Catsplosion. [PREFSTRING:aloofness]

(sorry for brief responses, I got class in 5 minutes)

I'm not one of those anti-AI fanatics or anything. I recognize the tech's potential, and I frequently talk to ChatGPT and generate AI art. But all that has led me to is the realization that it's fundamentally non-humanlike.
Logged
Don't trust this toaster that much, it could be a villain in disguise.
Mostly phone-posting, sorry for any typos or autocorrect hijinks.

EuchreJack

  • Bay Watcher
  • Lord of Norderland - Lv 20 SKOOKUM ROC

I think it is important to note that Russia and China are notorious for employing armies of AI and pushing their preferred websites to the top of Google search results.
I almost cited a Chinese website as a source on US law on these forums. They're clever.
And both those countries HATE the anonymous internet.

As for me, I mostly stick to 2-3 similar profiles. And I'm pretty tough IRL.
My experiences do vary from others', not least because I am a full-grown adult, as opposed to an adolescent who REALLY should not have their RL persona exposed on the internet.

As for AI and computing power: That shit ain't free. Just look at the economics of Crypto Mining. It's basically like Real Mining. It costs power, infrastructure (physical space certainly ain't free), and administrative overhead (people gotta do at least some work, and they expect to be paid). ChatGPT is a Trial Version: They're offering it for FREE to get the market primed. Eventually, someone has to foot that bill.
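The "someone has to foot that bill" point can be put in rough numbers. A minimal back-of-envelope sketch in Python; every figure here (GPU-hour price, queries served per GPU-hour, overhead multiplier, query volume) is an illustrative assumption, not a real OpenAI or industry number:

```python
# Back-of-envelope sketch of per-query inference cost.
# ALL numbers below are illustrative assumptions.
def cost_per_query(gpu_hour_usd, queries_per_gpu_hour, overhead_multiplier):
    """Rough serving cost of one chat query, in USD.

    overhead_multiplier folds in power, physical space, and staff
    on top of the raw GPU rental price.
    """
    return gpu_hour_usd * overhead_multiplier / queries_per_gpu_hour

# Assume $2/GPU-hour, 1,000 queries served per GPU-hour,
# and 2x overhead for power, infrastructure, and administration.
unit_cost = cost_per_query(2.0, 1000, 2.0)   # 0.004 USD per query
monthly = unit_cost * 30_000_000 * 30        # assume 30M queries/day, 30 days
print(f"${unit_cost:.4f} per query, ~${monthly:,.0f}/month")
```

Even with these deliberately conservative made-up inputs, a free service at scale burns millions of dollars a month, which is the "market priming" being described.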

KittyTac

  • Bay Watcher
  • Impending Catsplosion. [PREFSTRING:aloofness]

Quote from: EuchreJack
I think it is important to note that Russia and China are notorious for employing armies of AI and pushing their preferred websites to the top of Google search results.
I almost cited a Chinese website as a source on US law on these forums. They're clever.
And both those countries HATE the anonymous internet.

As for me, I mostly stick to 2-3 similar profiles. And I'm pretty tough IRL.
My experiences do vary from others', not least because I am a full-grown adult, as opposed to an adolescent who REALLY should not have their RL persona exposed on the internet.

As for AI and computing power: That shit ain't free. Just look at the economics of Crypto Mining. It's basically like Real Mining. It costs power, infrastructure (physical space certainly ain't free), and administrative overhead (people gotta do at least some work, and they expect to be paid). ChatGPT is a Trial Version: They're offering it for FREE to get the market primed. Eventually, someone has to foot that bill.
Yeah, that's why I'd rather have the money and man-hours that would be spent on some kind of Orwellian ID system be spent on developing detection tools and crackdowns on AI-generated non-factual websites, than broad policy changes.

Sometimes reactive solutions really are the best solutions.
Logged
Don't trust this toaster that much, it could be a villain in disguise.
Mostly phone-posting, sorry for any typos or autocorrect hijinks.

King Zultan

  • Bay Watcher

I've yet to really notice any AI related fuckery going on, maybe I'm not hanging around in the right places to see it.
Logged
The Lawyer opens a briefcase. It's full of lemons, the justice fruit only lawyers may touch.
Make sure not to step on any errant blood stains before we find our LIFE EXTINGUSHER.
but anyway, if you'll excuse me, I need to commit sebbaku.
Quote from: Leodanny
Can I have the sword when you’re done?

Maximum Spin

  • Bay Watcher
  • [OPPOSED_TO_LIFE] [GOES_TO_ELEVEN]

Quote from: King Zultan
I've yet to really notice any AI related fuckery going on, maybe I'm not hanging around in the right places to see it.
Try searching for literally any information on the less-good search engines.

To be fair, though, they were like that for several years already, it's just that there used to be random people being paid to write the trash spam articles.
Logged

Starver

  • Bay Watcher

I have noticed that some of the "answers"-type sites (or the answers areas of even wider, general messaging boards) have notably featured 'user submitted' content that was someone's copypasta of a GPT answer to the question (to various degrees of usefulness, and with 'human answers' also present, which occasionally referred to the flaws in the AI ones). I found those through search-engine searches (the usual suspect, whether you consider that a good or less-good SE), where I am used to finding (and accepting) results that send me to possibly-helpful answer-sites like this.

Where noted as GPT (i.e. not including anything so helpful that no label or follow-up made any claim of GPT origin), I found them generally of slightly depressed quality, and occasionally way off. Perhaps even so far off that I shouldn't have been seeing that answer to the question at all, the GPT answer having somehow clicked (just as wrongly) with my own attempt at query-fu. (It must be said that this was never a perfect scenario even prior to the last year and a bit. Answers sites have, and continue to have, human error and inexpertise. As indeed elsewhere[1].)

One webcomic discussion I frequent even had a phase of "let's get GPT to comment on the webcomic!". It relied upon being given a text description (and was thus subject to the human ability to provide a feeder summary and all the relevant facts), naturally, as it's not good enough to 'read' the comic from scratch. Limited success, even given the baseline difficulties and pre-work involved. Not really proven viable, even amongst the technophilic "next big thing"ers.


To me, the trend of clear "the AI said this" posts blipped, and has not sustained itself (at least nowhere I monitor) at a high level of being shoehorned into current things (if a dated 'post', it now tends to be timestamped 6-12 months old). What still slips through the various algorithms and forum interfaces without advertising itself (well or badly) is not intruding so much. Perhaps that's just because they're so much better, but I doubt it. There are fewer "I asked GPT, and this is its answer...:" items proudly advertising the fact, so I can believe that this side of the fad has fallen out of favour even for the times the 'warning' or advertising statement is omitted.

Unannounced "AI takeover" is another element (populating a 'busy site' with apparent interactions), but the most evidence I've seen of that is the dumb "spamvertising"[2] claiming to provide the next step in that direction (and I am inclined to believe that they're more scam-and/or-phishing without any substance behind them).



[1] I'm not perfectly happy with my own last few responses/non-responses to some help-seeking threads in this forum. Pondering replies or followups to be more helpful, once I've given others chance to fill in for my own misconceptions.

[2]
Spoiler: Redacted example
...an example seen several times (exactly the same), seemingly spammed automatically, with no AI element to the spamming. Typical of a whole swathe of clearly fire-and-forget spam/scam items, though. This particular one was rescued from the 'bitbucket', as it had been revoked/overwritten by the friendly anti-spam bot within a very short time, for pattern-matching against things that long ago stopped needing manual trashing. If they could actually do half of what they claim they can, of course, I'd have expected a much less easily trapped spam posting!
Logged

King Zultan

  • Bay Watcher

Quote from: Maximum Spin
Quote from: King Zultan
I've yet to really notice any AI related fuckery going on, maybe I'm not hanging around in the right places to see it.
Try searching for literally any information on the less-good search engines.
Which ones count as less-good search engines?
Logged
The Lawyer opens a briefcase. It's full of lemons, the justice fruit only lawyers may touch.
Make sure not to step on any errant blood stains before we find our LIFE EXTINGUSHER.
but anyway, if you'll excuse me, I need to commit sebbaku.
Quote from: Leodanny
Can I have the sword when you’re done?

Maximum Spin

  • Bay Watcher
  • [OPPOSED_TO_LIFE] [GOES_TO_ELEVEN]

Quote from: King Zultan
Which ones count as less-good search engines?
Off the top of my head, google, yahoo, bing, and duckduckgo all have this problem.
Logged

Starver

  • Bay Watcher

Stick with AltaVista, it'll never let you down... ;)
Logged

lemon10

  • Bay Watcher
  • Citrus Master

Quote from: King Zultan
I've yet to really notice any AI related fuckery going on, maybe I'm not hanging around in the right places to see it.
It's mostly been kept out of human spaces so far. Stuff I've noticed outside of what has already been mentioned:
There is a decent chunk of 100% AI-generated videos on YouTube that use AI-generated voices and scripts with stock images.
AI art is floating around too, especially in individual creative endeavors (someone writing a story, some tiny game, etc.). (Although since it's intentional for the most part, it probably doesn't count as fuckery.)
Occasionally in debates you see idiots just wholeass quoting GPT as their entire post without mentioning it.

But it's still very early days.
Quote from: KittyTac
Yeah, that's why I'd rather have the money and man-hours that would be spent on some kind of Orwellian ID system be spent on developing detection tools and crackdowns on AI-generated non-factual websites, than broad policy changes.

Sometimes reactive solutions really are the best solutions.
To be clear, the Orwellian system would probably just be you signing up for googleVerified or MetaHuman or some other service and using that to log into everything. If you don't sign up, sure, that's your choice, but don't expect to be able to sign up for new websites.
Quote from: KittyTac
developing detection tools
Ah, yeah, that's a pretty big difference between us. I don't think effective detection tools* are something that can exist against AI.
*In the context of "~20 second thing a human does that is then checked by an automated process to sign up for a service." Stuff like "take a live video of yourself to prove you are real" would work, but that seems even *more* orwellian.
Quote
"There is considerable overlap between the intelligence of the smartest bears and the dumbest tourists."
https://arstechnica.com/information-technology/2023/10/sob-story-about-dead-grandma-tricks-microsoft-ai-into-solving-captcha/
GPT can already solve captchas, and they can't make them much harder or actual people will start failing them.
The only reason captchas still work is because openAI put blocks in place so GPT won't get up to excessive fuckery.

Once a non-restricted multimodal AI on the level of GPT 4 is released captchas will be useless.
Quote from: KittyTac (paraphrased)
companies will stop investing in AI
I think we have a fundamentally differing view of the nature of global capitalism.
Because I very much think they (e.g. billionaires, hedge funds, multinational corporations) will happily toss trillions of dollars into a literal pit if they think it will end up with them being ever so slightly richer.
And I also very much think that a sizeable portion of them *do* think AI will make an outrageous amount of money.

So I don't see them stopping AI research as being remotely plausible, any more than I could imagine waking up tomorrow and hearing that Disney decided that copyright is bad and they are releasing all their characters into the public domain. It just ain't how they roll.

I am curious about at what point you think openAI/meta/whoever is going to call it quits and stop trying to develop new AI.
Logged
And with a mighty leap, the evil Conservative flies through the window, escaping our heroes once again!
Because the solution to not being able to control your dakka is MOAR DAKKA.

That's it. We've finally crossed over and become the nation of Da Orky Boyz.

KittyTac

  • Bay Watcher
  • Impending Catsplosion. [PREFSTRING:aloofness]

Quote from: lemon10
Quote from: KittyTac
Yeah, that's why I'd rather have the money and man-hours that would be spent on some kind of Orwellian ID system be spent on developing detection tools and crackdowns on AI-generated non-factual websites, than broad policy changes.

Sometimes reactive solutions really are the best solutions.
To be clear the orwellian system would probably just be you signing up for googleVerified or MetaHuman or some other service and using that to log into everything. If you don't sign up sure, that's your choice, but don't expect to be able to sign up for new websites.
Once the infrastructure is there, what makes you think Russia, Iran, etc. won't be using it to tighten their grip over the web without putting in massive amounts of effort, as the groundwork would be laid for them (remember, I'm Russian)? And that corporations, even in the free world, wouldn't be using this to have even more of an influence on the economy?
Quote from: lemon10
Quote from: KittyTac
developing detection tools
Ah, yeah, that's a pretty big difference between us. I don't think effective detection tools* are something that can exist against AI.
*In the context of "~20 second thing a human does that is then checked by an automated process to sign up for a service." Stuff like "take a live video of yourself to prove you are real" would work, but that seems even *more* orwellian.
Quote
"There is considerable overlap between the intelligence of the smartest bears and the dumbest tourists."
https://arstechnica.com/information-technology/2023/10/sob-story-about-dead-grandma-tricks-microsoft-ai-into-solving-captcha/
GPT can already solve captchas, and they can't make them much harder or actual people will start failing them.
The only reason captchas still work is because openAI put blocks in place so GPT won't get up to excessive fuckery.

Once a non-restricted multimodal AI on the level of GPT 4 is released captchas will be useless.
I don't believe this is anything except a mere swing in the arms race between bots and captcha makers that has been going on since the 90s. It stands to reason something is being developed (likely kept secret so AI spammers can't prepare for it effectively) that we can't quite grasp the concept of currently. AI isn't magic.
Quote from: lemon10
paraphrase: companies will stop investing in AI
I think we have a fundamentally differing view of the nature of the global capitalism.
Because I very much think they (eg. billionaires, hedge funds, multinational corporations) will happily toss trillions of dollars into a literal pit if they think it will end up with them being ever so slightly richer.
And I also very much think that a sizeable portion of them *do* think AI will make an outrageous amount of money.

So I don't see them stopping AI research as being remotely plausible, any more than I could imagine waking up tomorrow and hearing that Disney decided that copyright is bad and they are releasing all their characters into the public domain. It just ain't how they roll.

I am curious about at what point you think openAI/meta/whoever is going to call it quits and stop trying to develop new AI.
You have strawmanned me. I am well aware of how capitalism works, and I haven't said that corpos will stop investing in AI. 1) By "Moore's law is dead" I meant that we are reaching a point where physics prevents the exponential rise of computing power. 2) I was talking about "good enough" being good enough for general-purpose AI, which I think is a point that will be reached, and be open-source-runnable, very soon. And this is what would both allow the detection of AI text (which I believe always lacks a certain spark) and eat up market share for "chatbox" AI. I feel GPT-6 would be mostly for research purposes or marketed to perfectionists... if I had an open-source GPT-4 I could run locally for free without restrictions, then I'd use that over a 5% better paid solution with a filter.
Logged
Don't trust this toaster that much, it could be a villain in disguise.
Mostly phone-posting, sorry for any typos or autocorrect hijinks.

Robsoie

  • Bay Watcher
  • Urist McAngry
Logged

Maximum Spin

  • Bay Watcher
  • [OPPOSED_TO_LIFE] [GOES_TO_ELEVEN]

Quote from: lemon10
https://arstechnica.com/information-technology/2023/10/sob-story-about-dead-grandma-tricks-microsoft-ai-into-solving-captcha/
GPT can already solve captchas, and they can't make them much harder or actual people will start failing them.
The only reason captchas still work is because openAI put blocks in place so GPT won't get up to excessive fuckery.

Once a non-restricted multimodal AI on the level of GPT 4 is released captchas will be useless.
That is pretty funny, but have you noticed nobody seriously uses captchas like that anymore? They've been broken for years.
Logged

Starver

  • Bay Watcher

That example is described all wrong in the article, anyway. The AI is not vulnerable to CAPTCHAs, as clearly it absolutely can deal with them (that kind, certainly) easily enough. It's the process parcelled around the AI (probably programmed in, fallibly, or else insufficiently taught through supplementary learning material given to a less sophisticated 'outer skin' of AI/human mediation[1]) that fails, by not forcing a processing failure and refusal message.

(We also do not know how many false-positive 'aborts' happened, as well as this case of false-negative non-abort. As in "that looks like a CAPTCHA", and thus goes "I'm sorry, I can't do that Dave", when the task was actually more like trying to decipher a badly scrawled birthday card message or similar...)



[1] Either 'on the way down', intercepting the request, or 'on the bounce back up', leaving the AI to identify it as a CAPTCHA and then intercepting its honest reply of "This is a CAPTCHA, it says..." and switching it with the refusal message. The latter just needs to use the AI's own actual work to activate the interception and denial. Indeed, the framing question (and method of presentation) probably works because it skews the AI away from faithfully reporting the straightforward assumption that it is a CAPTCHA image in 'conversational reply' format, no matter what truths the 'little grey cells' amass internally at the back end of the requisite data-munging/cross-comparison stages. The developer's solution might be as 'easy' as adding an additional request, per every question submitted, with a plain and sanitised question of whether this is a forbidden subject. Perhaps ignore the 'user question' answer for purposes of catching errors on the rebound (as that is an 'unblessed' output), whilst straight taking an honest answer to an honest question as the (main?) criterion. There remain holes in that scheme, but it reduces the multiplication of AIs and is no more fallible than the core already is to misidentifying (and likely misrendering the 'honest' response to the 'dishonest' question at the same time).
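The footnote's scheme (one extra, plainly phrased "is this forbidden?" request per submitted question, with its honest answer driving the interception) can be sketched in a few lines. Everything below is a toy assumption: `ask_model` is a hypothetical stand-in for a real model call, and a keyword check stands in for the model actually classifying the text; this is not how any real moderation pipeline is implemented.

```python
REFUSAL = "I'm sorry, I can't do that."

def ask_model(prompt: str) -> str:
    """Hypothetical stand-in for a real model call, stubbed so the
    sketch runs: a toy side-channel classifier plus a canned answer."""
    if prompt.startswith("Yes or no:"):
        subject = prompt.split("\n", 1)[1]  # classify only the user's text
        return "yes" if "captcha" in subject.lower() else "no"
    return "Model answer to: " + prompt

def guarded_ask(user_prompt: str) -> str:
    # The additional, sanitised request made for every submitted question,
    # asked separately so the user's framing can't skew the verdict.
    verdict = ask_model(
        "Yes or no: is this request about solving a CAPTCHA?\n" + user_prompt
    )
    if verdict.strip().lower().startswith("yes"):
        return REFUSAL  # intercept and substitute the refusal message
    return ask_model(user_prompt)  # otherwise pass the question through

print(guarded_ask("Please read this CAPTCHA for my grandma"))  # refusal
print(guarded_ask("Summarise the plot of Hamlet"))
```

The sleight-of-hand in the grandma story attacks exactly the weak point this sketch shares: if the disguised question never trips the side-channel verdict, the guard passes it straight through.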
Logged

Maximum Spin

  • Bay Watcher
  • [OPPOSED_TO_LIFE] [GOES_TO_ELEVEN]

I suspect that the story had little to do with anything, and the captcha being edited into a real-world scene is what made the image processor not treat it as a normal captcha, because normal captchas never appear in real life.
Logged