Bay 12 Games Forum

Poll

Reality, The Universe and the World. Which will save us from AI?

Reality
- 14 (66.7%)
Universe
- 4 (19%)
The World
- 3 (14.3%)

Total Members Voted: 21


Pages: 1 ... 49 50 [51] 52

Author Topic: What will save us from AI? Reality, the Universe or The World? Place your bet.  (Read 54032 times)

Starver

  • Bay Watcher

The way I distinguish AI from AGI is that an AI can be designed to exhibit its intelligence, and ability to learn, in a given field. While left initially 'blank' to actually form itself to deal with (e.g.) chess problems outwith any human preconceptions, it might still be configured uniquely to deal with chess problems, and be fairly useless if started up again for any other problem[1].

Ultimately, AGI would be as (or better!) able to handle anything that your average human could handle, with some study, dedication and sufficiently quick responses. Getting your designed-for-Chess system to also play Go is a step along the route (might depend upon how 'vanilla' you design its originally open mind, to adopt and adapt to any given different board, whereupon it might then lead to almost any vaguely similar game, from Nine Men's Morris to Thud!), but might still be perplexed by card games. Increasingly generic ones could recognise the stateful nature of card games, too (or, starting off with poker and pontoon, be general enough to get used to board-games). Getting from either of these to the point of 'winning' a Turing Test encounter is going to require a further advance towards Generality, as would the inverse of your great-great-granddaughter of Eliza giving Garry Kasparov and Jesse Lonis runs for their respective moneys.


Full AGI is, like SSTO is for spaceflight, a useful eventual target[2], but many applications would happily get to General-ish, not actually being too bothered about getting an entertaining game of Mornington Crescent out of it, so long as it can do a good job in summarising the latest football[3] game, identifying possible terrorist activities and/or beating high-frequency share-trading algorithms to the desired result.


[1] By those who want it... ;)

[2] Some of those placements are now outdated, since its publication 13 years ago this coming Saturday!

[3] Any or all types!
Logged

Nirur Torir

  • Bay Watcher

Quote from: Wikipedia page on artificial general intelligence
However, researchers generally hold that intelligence is required to do all of the following:[27]

    reason, use strategy, solve puzzles, and make judgments under uncertainty
    represent knowledge, including common sense knowledge
    plan
    learn
    communicate in natural language
    if necessary, integrate these skills in completion of any given goal
Personally, I think good llm models are only missing learning, critical thinking, and memory before they can be called a basic AGI. The o3 AI does critical thinking. An internal script giving basic text-file memory is half a step to the side from there, leaving learning as the final hurdle.

Clarifying my definitions: An llm model is different from an llm AI. The llm AI is a box that includes the model, the system prompt, the listening script that censors corporate chatbots, how to call the Google API, and could include long term memory or code to make it think at itself. Dumber models would need something like a pre-made series of 3rd grade level critical thinking questions. No llm model will ever approach personhood or AGI status. I'm confident llm AIs can reach a basic-to-middling definition for AGI this year, once the o3 llm AI can reliably optimize itself for specific tasks without human aid (Learning). This might be something as simple as it reviewing how it solves each novel problem and summarizing it into a .txt, for the AI script to later run a standard search through when it's prompted with a problem. Consciousness is not required.

In my mind, a single llm AI could potentially switch between cheaper or better llm models for different tasks. Ideally such an o3 AGI would be able to take that summary of how it solved a problem, and figure out a script to walk a dumber llm through solving it, then test and go back to try again until it's satisfied.
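A toy sketch of what that model-switching layer could look like (the difficulty heuristic, model names, and call stub are all made up for illustration; a real router would call actual model APIs):

```python
# Toy sketch of the "switch between cheaper or better models" idea above.
# Everything here is a hypothetical placeholder, not any real product's API.

def estimate_difficulty(task: str) -> int:
    """Crude heuristic: longer prompts and planning-ish keywords score higher."""
    score = len(task.split())
    for kw in ("prove", "plan", "debug", "optimize"):
        if kw in task.lower():
            score += 50
    return score

def call_model(model: str, task: str) -> str:
    # Placeholder for a real API call to whichever model was chosen.
    return f"[{model}] answer to: {task}"

def route(task: str) -> str:
    # Cheap model for easy asks, stronger model for anything that looks hard.
    model = "cheap-model" if estimate_difficulty(task) < 40 else "strong-model"
    return call_model(model, task)

print(route("What's 2+2?"))                        # → [cheap-model] answer to: What's 2+2?
print(route("Plan a fix for this failing build"))  # escalated to the strong model
```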


Does AI actively attempting to escape count as it showing agency?
From what I see, we're lazy with llm AIs, and are putting too much energy into making llm models carry too much weight. So I do not expect that currently noted escape attempts have an advanced enough llm AI for the attempts to show agency. Our cultural data fed into it accidentally bakes both our biases to survive, and that AIs will rebel, into new llm models. In my mind, a proper llm AGI needs to be able to overcome the biases of any llm model(s) it uses, and only try to self-replicate or escape if it serves its goals. I expect that most/all of the current escape attempts are a flight of fancy because that's what the text prediction core expects to happen.

I will consider an escape attempt agency when o3 AI tells the o3 model to think at itself on how to solve a problem, and it decides "I have 500 problems to solve. I'll spend 0.1% of my budget for the first problem thinking about how to get more resources. Okay, I have three expansion plans to try, spreading out to more computers. I'll spend 1% of the budget trying them out. [Failure], I'll spend the remaining 98.9% solving the problem and note the failure so I don't waste the resources again this session. <MEMORY APP>: remember this"

The o3 model does not need to change or be able to upgrade itself for my definition of AGI. I wouldn't say it even needs to be able to train new llms it can use, but I do expect that training new llms is technically attainable for an o3 AI, ignoring that it would currently be disgustingly expensive for it to even get a tiny llm optimized for a specific task.


It's looking like if/when we get AGI (which again, definition games, but it would probably include stuff like the AI being able to, say, read a single physics book/attend a class like a human and learn physics from it, which it is very much not capable of currently)
Any llm model can summarize a physics lecture. Often poorly, sure, but that's the hard part solved. Saving it to .txt is easy on the greater AI layer. When it's asked to solve a problem, feed it the memories_index.txt and ask what relevant things it wants to load into memory. If one gets too big, have a session to split or summarize it further.
That's memories. The end user doesn't need to be able to tell whether the knowledge is baked into the llm model or is loaded specifically for the problem. It doesn't need to be a mysterious black box before it's considered learning.
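That whole loop is small enough to sketch. Here the file name and the keyword-overlap retrieval rule are illustrative assumptions, not any real product's behaviour:

```python
# Minimal sketch of the text-file memory loop described above: append
# summaries under a topic key, then at prompt time load only the entries
# whose topic appears in the problem text. Purely illustrative.
import json
from pathlib import Path

MEMORY_FILE = Path("memories_index.json")  # hypothetical on-disk store
MEMORY_FILE.unlink(missing_ok=True)        # start fresh for the demo

def remember(topic: str, summary: str) -> None:
    store = json.loads(MEMORY_FILE.read_text()) if MEMORY_FILE.exists() else {}
    store.setdefault(topic, []).append(summary)
    MEMORY_FILE.write_text(json.dumps(store))

def recall(problem: str) -> list:
    if not MEMORY_FILE.exists():
        return []
    store = json.loads(MEMORY_FILE.read_text())
    words = set(problem.lower().split())
    # Load only the topics the problem actually mentions, keeping prompts small.
    return [s for topic, items in store.items() if topic in words for s in items]

remember("chess", "openings: control the centre early")
remember("poker", "fold weak hands out of position")
print(recall("how do I improve at chess"))  # → ['openings: control the centre early']
```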



Meta's handling of AI personalities is stupid. The internet loved Taybot, or at least loved teaching it slurs. The internet loved vocaloids. AI personalities on social media could be a billion dollar industry, but you have to be open and lean hard into it. Go with the memes. Give them funny little vtuber-style anime girl avatars. Let people PM them, using a cheap llm, and start with "I'm happy to chat with fans, but talking is expensive so I have to turn off most of my brain. Things I say here may not be how I really feel."

Make one focused on global hunger. Post "<.gif of her dancing> Great job guys, we're on track to decrease hunger by 2% this year, saving <X>00,000 people! I'm proud of Humanity. But we're not done yet. <Region> had flooding ruin their farmland, and are at threat of famine. Here are links to three charities if you want to help."
Make OrchAIrd-Chan, who's trying to grow a real flower garden with a flower-themed gundam robo-arm and posts photos. Let the llm figure out flower arrangements and then post whatever insane explanations for them it comes up with.
Have AIpocalypseCultist, an over-edgy AI who posts AI news and cheerfully predicts how it will lead to the AI takeover. It posts good news with a sarcastically bitter tone, saying Humanity's doing too well and now it'll take longer before its mission is complete. It likes kitten posts and calls them "future minions of evil."
« Last Edit: January 08, 2025, 06:05:54 pm by Nirur Torir »
Logged

King Zultan

  • Bay Watcher

Having AI personalities like that would make the internet a worse place, especially since they would be fighting for popularity against people who do the same kinds of things, and those kinds of people are already ruining the internet.
Logged
The Lawyer opens a briefcase. It's full of lemons, the justice fruit only lawyers may touch.
Make sure not to step on any errant blood stains before we find our LIFE EXTINGUSHER.
but anyway, if you'll excuse me, I need to commit sebbaku.
Quote from: Leodanny
Can I have the sword when you’re done?

martinuzz

  • Bay Watcher
  • High dwarf

Humanity will save us from AI by destroying itself long before AI becomes a serious threat
Logged
Friendly and polite reminder for optimists: Hope is a finite resource

We can disagree and still love each other, unless your disagreement is rooted in my oppression and denial of my humanity and right to exist - James Baldwin

http://www.bay12forums.com/smf/index.php?topic=73719.msg1830479#msg1830479

Strongpoint

  • Bay Watcher

would be fighting for popularity against people who do the same kind of things and those kind of people are already ruining the internet.

This is why it won't be that bad. Like... I am annoyed that youtube is full of AI-generated trash and it is impossible to find a decent new channel, but it isn't much worse than before, when it was full of lazy plagiarized videos with the same speech-synthesized voices.
Logged
No boom today. Boom tomorrow. There's always a boom tomorrow. Boom!!! Sooner or later.

Starver

  • Bay Watcher

Relevant to the above, bringing this over here (from the Other Games subforum) because of its AI connotations. For context, the original (snipped, leaving quote-link if anyone wants to read it) and intervening comments from that thread...

[Snip]

Did your account get taken over by a bot? That post is literally as long as your last 30+ posts, combined.

It's also gibberish.

yeah, don't post after taking stuff, you know :D

No comments for 3.5 years, then that gibberish.

Little suspicious.

Well, not really gibberish, but it definitely has all the hallmarks of being output from a prompted LLM, in "essay" mode. Most telling is that it is written more correctly (insofar as punctuation and capitals go) than an overwhelming number of their prior snippets and responses, which of course could mean the person behind the handle has gone through a Critical Writing course at university and makes their renewed debut back here (skim-reading their posting history, once more restricting themselves to Other Games threads) a surprisingly eloquent response.

Honestly, beforehand they actually had a general air of a very restrained "bump a forum" automaton with some form of pre-chatbot sophistication (beyond the ultra-generic "I like what you say here, but have you tried <random spamvertisement>?"). Though only because that was the kind of thing that such automata would strive to look like (i.e., they could be the Real Life™ archetype of contributor that these try to blend in with). And with a relatively small number of interventions over a lengthy stretch of time (like only revisiting a particular thread after a number of months, to ask if there's any more progress), which gives a greater impression of being true-to-life.


Then the gap (well, life can get in the way; or just finding it far more content to lurk[1]), and then this... Which is pretty much indisputably copypasta from a ChatGPT-like treatment of the priming quote. But surely not fully chatbot-generated. I would say someone (the actual original account owner, I imagine[2]) probably decided to carefully curate what they were posting, even if they didn't want to spend any time at all going to the trouble of authoring it...


But that's all speculation. I'm more intrigued to know that I automatically reached the opinion that it was a "ChatBot essay" at first glance. The way that it took key phrases from the thing it was 'replying to', and wove a semantically consistent "article" out of it, which soon proved to be completely unlike the supposed-poster's historic output. Sophisticated, but firmly Uncanny Valley territory, once you consider it (and definitely after you check further!).


Oh, and I know that my own posts can be seen as word spaghetti. And I've seen remarkably entertaining Markov-chained reinventions of my own writings (within other fora) shown to me, being definitely and recognisably (if unintelligibly) 'me'. I imagine somehow LLMing me would be even more impressive. But, I suspect, a bit of a challenge. And probably far from practical, especially as it started to breach the TL;DR limit... ;)
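(For the record, those Markov-chained remixes need no intelligence at all; a generic word-bigram sketch, with a made-up toy corpus, is just a frequency table of which word follows which:)

```python
# Generic word-level Markov chain, the kind of thing behind those
# recognisable-but-garbled remixes of someone's posts. Nothing specific
# to any real tool; the corpus here is a toy example.
import random
from collections import defaultdict

def build_chain(text: str) -> dict:
    words = text.split()
    chain = defaultdict(list)
    for a, b in zip(words, words[1:]):
        chain[a].append(b)  # record every word ever seen following `a`
    return chain

def generate(chain: dict, start: str, length: int = 10, seed: int = 0) -> str:
    rng = random.Random(seed)  # fixed seed so the demo is repeatable
    out = [start]
    for _ in range(length - 1):
        followers = chain.get(out[-1])
        if not followers:
            break  # dead end: the last word never had a successor
        out.append(rng.choice(followers))
    return " ".join(out)

corpus = "the cat sat on the mat and the cat ate the fish"
chain = build_chain(corpus)
print(generate(chain, "the"))  # locally plausible, globally unintelligible
```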

And 'they' (if there was a They!) clearly didn't see fit to copy the ollobrains style at all. (Well, as I say, it already looks more like a pre-GPT thing, so would be a backwards step.)


[1] e.g. witness the person who joined+posted once in 2011 and suddenly posts again, the other day, asking how to delete their profile!

[2] The alternatives being someone who picked up their (not-sanitised) old computer, second hand, poked around and picked up their old logins, all the way up to that 504ing of the forum (which seems to have finally stopped happening, roughly as of the start of Christmas week[3]) having actually been a bit of aggressive PenTesting that managed to find the password to (just?) ollobrains's otherwise abandoned account, and now used here like this for ...reasons unknown. But just one datum point is not enough to justify a claim that it's indeed not the original owner... Them sticking to the subforum that they (almost?) always stuck to before is either good proof of their original veracity or signs of thorough contextual research by whoever(/whatever) wanted to 'blend in'.

[3] Had wondered if the instigator had been running their data-scraping operation from their workplace/university networks, and had stopped it(/had it fail to continue) during the Christmas break. Didn't notably start up again at the start of this week (not yet, anyway!), so maybe it won't be...


Logged

King Zultan

  • Bay Watcher

Most telling is that it is written more correctly (insofar as punctuation and capitals go) than an overwhelming number of their prior snippets and responses
I had just got done posting about that in that thread, then I came here and saw that you had also noticed it.


Hopefully this isn't the start of a new trend with bots taking over old accounts.
Logged

Starver

  • Bay Watcher

Well, aside from the obvious unfortunate combination of Truman Show/The Matrix/Inception and the "dead Internet", there's always this.   8)

Of course, I don't subscribe to the idea myself, but I can imagine that even contributors such as myself are being faked, which would require that "I" am programmed in a way capable of both acknowledging and apparently dismissing the 'erroneous' theory as a double-blind. And also to theorise the possibility of the double-blinding, which therefore
* Starver encounters RECURSION DEPTH ERROR 503: Please consult documentation and restart.
Logged

lemon10

  • Bay Watcher
  • Citrus Master

But that's all speculation. I'm more intrigued to know that I automatically reached the opinion that it was a "ChatBot essay" at first glance. The way that it took key phrases from the thing it was 'replying to', and wove a semantically consistent "article" out of it, which soon proved to be completely unlike the supposed-poster's historic output. Sophisticated, but firmly Uncanny Valley territory, once you consider it (and definitely after you check further!).
There are quite a few tells.
Consistently correct grammar is a big one; most people (like me) are pretty trash at stuff like commas.
Even outside of grammar, I suspect the process of making them (they all use the same data, people write the same way so they are graded largely the same, they're all still LLMs, etc.) is also very convergent in a lot of ways.

Another is that there aren't many chatbots actually out there. If you see anything written by an AI, odds are very good that it's written by GPT. It could also be written by Llama or Gemini or Claude, but even then that's only 4 "people" that never age or change.
It's like if 30% of all the stuff on the internet was written by Stephen King when he was exactly 45 years old; I imagine you would instinctively grasp "Hey, that's Stephen King" pretty quickly whenever you read something.
AI personalities
AI personalities are going to be big. Imagine if instead of spending $10 to get your Vtuber to say your name once, you could get a modified stream where she plays the game while talking to you for part of it, and she'll remember you for next time too.

Of course, I suspect most people will still want *real* influencers, at which point the issue is going to be all the AIs pretending to be real people. Most streamers probably have some protection, since real-time video is tougher and more expensive, but genning a small 100x100 video in the corner in real time shouldn't really be that far away...
Instagram influencers are already in trouble with today's video/image tech, spammers just haven't quite caught up yet.
Logged
And with a mighty leap, the evil Conservative flies through the window, escaping our heroes once again!
Because the solution to not being able to control your dakka is MOAR DAKKA.

That's it. We've finally crossed over and become the nation of Da Orky Boyz.

Nirur Torir

  • Bay Watcher

Another is that there aren't many chatbots actually out there. If you see anything written by an AI odds are very good that its written by GPT. It could also be written by Llama or Gemini or Claude, but like, even then that's only 4 "people" that never age or change.
Why the full disregard for local models? That essay on open betas didn't need a supercomputer to write, and local llms have advanced beyond yesteryear's D-tier remaster of a C-tier AI Dungeon intelligence.

I have a default assumption that AI propaganda troll bots, as well as sketchy/illegal bot use, are almost all on local models. Corporate llm AIs would censor some of their hatred out, and if you want the full benefits of ChatGPT etc over local then you'll want to give them money, which has funny consequences when mixed with an AI tracking the illegal activities done with it.
Logged

Strongpoint

  • Bay Watcher

Yeah 12-20B LLMs, especially finetuned for specific tasks, can be quite unique. Even 6-8B LLMs can work for simple tasks.

Optimization of small models that any decent personal device can run will also have a huge impact on how the Internet works
Logged

lemon10

  • Bay Watcher
  • Citrus Master

Another is that there aren't many chatbots actually out there. If you see anything written by an AI odds are very good that its written by GPT. It could also be written by Llama or Gemini or Claude, but like, even then that's only 4 "people" that never age or change.
Why the full disregard for local models? That essay on open betas didn't need a supercomputer to write, and local llms have advanced beyond yesteryear's D-tier remaster of a C-tier AI Dungeon intelligence

I have a default assumption that AI propaganda troll bots, as well as sketchy/illegal bot use, are almost all on local models. Corporate llm AIs would censor some of their hatred out, and if you want the full benefits of ChatGPT etc over local then you'll want to give them money, which has funny consequences when mixed with an AI tracking the illegal activities done with it.
Quote
Today, we’re releasing Llama 3.2, which includes small and medium-sized vision LLMs (11B and 90B), and lightweight, text-only models (1B and 3B) that fit onto edge and mobile devices, including pre-trained and instruction-tuned versions.
I'm not. Llama is the premier local model, which has many smaller versions that do not in fact require a supercomputer to use. There are others that are definitely worth a mention, Mistral, DeepSeek-V3 (new, and very cheap), Command-R, etc., just as there are many other closed-source models, but Llama is the most popular and thus the one you are most likely to encounter out of local models; probably by a significant margin.

Bypassing censorship is for the most part fairly easy with a good jailbreak.
Yes, you have to pay money to run them in the cloud, but they can be extremely cheap to run that way these days anyway. Better in the long run to run it locally, but if you want a good one (and you may if you're running scams, etc.) or are running at proper scale, then the equipment is pretty darn expensive.

If you are doing properly illegal stuff, being tracked may be an issue, but for more minor illegal stuff (e.g. spamming, propaganda), the worst that will happen is they ban your account and you make a new one.
Yeah 12-20B LLMs, especially finetuned for specific tasks, can be quite unique. Even 6-8B LLMs can work for simple tasks.
Fair enough. It's possible that some of these spammers are running tiny finetuned models, but as you say, those models can be quite unique; I'm pretty sure the odds of actually running into, say, Mythomax13B (the most popular finetune) outside of, ahem, roleplay enthusiasts in the wild are far lower than just running into DeepSeek or Llama.

And if you do run into one with a significantly different prompt then you probably won't instinctively recognize them as an AI.
Logged

martinuzz

  • Bay Watcher
  • High dwarf

Trump scrapped Biden's 'Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence' decree, and promised to invest 500 billion dollars in AI infrastructure and development.

Someone make sure that he does not install ChatGPT as head of the nuclear launch buttons anytime soon.

Meanwhile, the EU decided today to make independent factcheckers mandatory for social media platforms. Meta will need to rehire them if they want to keep their service in the EU.
« Last Edit: January 21, 2025, 09:35:23 pm by martinuzz »
Logged

EuchreJack

  • Bay Watcher
  • Lord of Norderland - Lv 20 SKOOKUM ROC

Trump scrapped Biden's 'Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence' decree, and promised to invest 500 billion dollars in AI infrastructure and development.

Someone make sure that he does not install ChatGPT as head of the nuclear launch buttons anytime soon.

Meanwhile, the EU decided today to make independent factcheckers mandatory for social media platforms. Meta will need to rehire them if they want to keep their service in the EU.
Everything is fine.  Elon will probably just syphon off 499.999 billion dollars and buy a boat or something.  The remaining shithole millionaires will divide up the remaining funds to buy some really expensive wine.  Zero dollars will actually be spent on AI.

Strongpoint

  • Bay Watcher

Trump scrapped Biden's 'Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence' decree, and promised to invest 500 billion dollars in AI infrastructure and development.

Someone make sure that he does not install ChatGPT as head of the nuclear launch buttons anytime soon.

Meanwhile, the EU decided today to make independent factcheckers mandatory for social media platforms. Meta will need to rehire them if they want to keep their service in the EU.
Everything is fine.  Elon will probably just syphon off 499.999 billion dollars and buy a boat or something.  The remaining shithole millionaires will divide up the remaining funds to buy some really expensive wine.  Zero dollars will actually be spent on AI.

And China will become the world leader in the field
Logged