Bay 12 Games Forum

Finally... => General Discussion => Topic started by: Scoops Novel on March 23, 2023, 09:15:43 am

Title: What will save us from AI? Reality, the Universe or The World $ Place your bet.
Post by: Scoops Novel on March 23, 2023, 09:15:43 am
"An Outside Context Problem was the

sort of thing most civilisations

encountered just once, and which they

tended to encounter rather in the same

way a sentence encountered a full stop."

We have an In Context Problem; AI. We need an Out Of Context Solution...
Title: Re: What will save us from AI? Reality, the Universe or The World $ Place your bet.
Post by: martinuzz on March 23, 2023, 10:14:46 am
Going extinct by our own hands technically counts as saving us from the AI, right?
Title: Re: What will save us from AI? Reality, the Universe or The World $ Place your bet.
Post by: TamerVirus on March 23, 2023, 01:15:32 pm
Another Carrington Event
Title: Re: What will save us from AI? Reality, the Universe or The World $ Place your bet.
Post by: Starver on March 23, 2023, 01:27:00 pm
My new AI will save us all from AI!
*flips switch, stands back, waits...*
Title: Re: What will save us from AI? Reality, the Universe or The World $ Place your bet.
Post by: King Zultan on March 24, 2023, 04:13:03 am
The AI can't kill me if I kill me first!
Title: Re: What will save us from AI? Reality, the Universe or The World $ Place your bet.
Post by: EuchreJack on March 24, 2023, 09:51:59 am
AI civilization counts as civilization!
Title: Re: What will save us from AI? Reality, the Universe or The World $ Place your bet.
Post by: Strongpoint on March 24, 2023, 10:02:33 am
We just should create a benevolent AI that will value and protect biological life (and balance species on Earth)

Title: Re: What will save us from AI? Reality, the Universe or The World $ Place your bet.
Post by: brewer bob on March 24, 2023, 11:23:53 am
Aliens. Aliens will save us.
Title: Re: What will save us from AI? Reality, the Universe or The World $ Place your bet.
Post by: MaxTheFox on March 24, 2023, 09:06:08 pm
Just hardwire any vaguely sapient AI that gets put into control of anything physical to be incapable of even thinking of harming humans. I say vaguely because with our paradigm I don't believe AI can be sapient. No continuous perception and retraining to take in new data takes many days, so no matter what I won't consider it a person. I think it's overhyped anyways.

So basically build any AI from the ground up as a tool, so an AI rebellion makes as much sense as a hammer suddenly deciding to hit the worker using it on the head. We don't need to give it rights if we don't make it sapient.
Title: Re: What will save us from AI? Reality, the Universe or The World $ Place your bet.
Post by: Strongpoint on March 25, 2023, 01:59:16 am
Quote
No continuous perception and retraining to take in new data takes many days, so no matter what I won't consider it a person.

How is speed relevant? The ability to train\learn without human input is relevant but not speed.

Also, speed is solved by better or more hardware.
Title: Re: What will save us from AI? Reality, the Universe or The World $ Place your bet.
Post by: King Zultan on March 25, 2023, 02:28:48 am
I know how to fix the AI revolution we pull a lever, and just like how we take care of troublesome nobles in DF we drown it.
Title: Re: What will save us from AI? Reality, the Universe or The World $ Place your bet.
Post by: MaxTheFox on March 25, 2023, 04:11:46 am
Quote
No continuous perception and retraining to take in new data takes many days, so no matter what I won't consider it a person.

How is speed relevant? The ability to train\learn without human input is relevant but not speed.

Also, speed is solved by better or more hardware.
I think speed of training could naturally lead to being able to learn without spending days retraining. However a problem is finding what is worthwhile to learn and what is not (see what happened to Tay).

However, hardware won't get good enough to learn within minutes within this century at the very least imho. Moore's Law is pretty dead, unless there is some kind of breakthrough in computing. This is why there must be a paradigm shift if we are to make sapient AI. GPT-whatever will never be sapient.
Title: Re: What will save us from AI? Reality, the Universe or The World $ Place your bet.
Post by: Starver on March 25, 2023, 05:51:55 am
Seriously, if we ever get to the stage of a true AGI[1], we will necessarily have abstracted ourselves so far from the point of even understanding how it does what we might think it does (such that we might think that it 'thinks') that aside from tacking "oh, and please also don't go all evil on us" onto the end of every request henceforth asked of it[3] in order to try to prevent 'accidental' misinterpretation (https://www.xkcd.com/2741/) the moment it decides that it should go all HAL9000 on our asses, as the best way to resolve its rather nebulous internal priorities.



This is all for the future. What we (probably[4]) currently have are mere toys, and with fairly obvious off-switches. In fact, they need actively supporting people quite a lot to still keep operating, and making even the most basic decisions. We're nowhere near Matrix-level of infrastructure maintenance where the nascent non-human intelligence only needs humans as a resource (for whatever reason), or the age of Skynet where humans are even more trouble than they are worth.


Whether we get to remember to install a kill-switch into the system before we actually need it... before the AI works out that it exists... before the AI works out how to disable or bypass it... before the AI works out a kill-switch of its own to shut us down... That's the future as well. Maybe. And will we know (or care, at the time) when we cross over the AI Event Horizon[5], should we ever cross over it? It might never be reached, for technical reasons, but there's no fundemental reasons why it can't be, eventually. (Possibly, if insidious enough, it might have happened already, beknownst to few people, or perhaps even none at all. Are you paranoid enough? Are you actually insufficiently paranoid? If our AI overlords are throwing crumbs at us by 'releasing' chatGPT to us, via knowing or unknowing (or entirely fictional) human intermediaries, for their own purposes/amusement, how do we even know???)


I'm not worrying though. Either way, I'm sure it matters not. Either already doomed or never ever going to be doomed (in this manner, at least). ...though this is of course how humanity might let down its defences, by not really worrying enough about the right things.


[1] Which is the aim of some, in that this 'metas' the development system one or more further steps away from the idea of "I painstakingly curate this software to do <foo>" and then "I painstakingly curate this software to work out for itself how to do <foo>" at the first remove. We can be sure that chessmaster Deep Blue can't just switch to play tik-tac-toe to any extraordinary degree (let alone Global Thermonuclear War) without being re'wired' by us humans. But any Artificial General Intelligence should be able to be freshly introduced to any new task that is capable of being learnt (Scrabble, Go, Texas Hold'Em, Thud, Warhammer 40K, Seven Minutes In Heaven (https://xkcd.com/1002/)) without a lot of human input (https://xkcd.com/1425/) and guidance (https://xkcd.com/1838/)[2]. If we're just directly replicating exactly what behaviours the programmers themselves would use (https://xkcd.com/2635/) then it is insufficiently 'General' and you've just designed a lathe to (maybe) crack a nut, let alone a hammer.

[2] Well, no more than we provide to the typical human from age zero until whatever age they can technically leave home.

[3] Hoping that it is as compelled to consider this as as any "make paperclips" command that preceded it. But if we don't know how it is thinking (if we do, then it's Insufficiently Self-Developing), then we can't truly know what it is truly thinking about, behind the facade we let it set up for itself.

[4] For all we know, there are Twitter Bots that are actual 'bots, carefully tweaking human culture towards the unknown whims of our AI overlords, hacking our very wetware to make key figures think that it's their idea to build a new datacentre here, come up with new robotic designs there, marginalise potentially obstructive humans all over the place...

[5] The point usually called the Singularity, wrongly.
Title: Re: What will save us from AI? Reality, the Universe or The World $ Place your bet.
Post by: MaxTheFox on March 25, 2023, 07:13:38 am
Hardwiring neural networks to prevent certain courses of action by bolting on restrictions is actually easier than you think. Many services like that YouChat thing managed to completely remove jailbreaks, also look at NSFW filters on AI art generators. I have a conspiracy theory that ChatGPT's safeties can be bypassed relatively easily (and they don't punish people for bypassing them) because OpenAI wants to get data about "unsafe" queries and just say they prevent them for PR purposes.
Title: Re: What will save us from AI? Reality, the Universe or The World $ Place your bet.
Post by: Maximum Spin on March 25, 2023, 07:25:07 am
Hardwiring neural networks to prevent certain courses of action by bolting on restrictions is actually easier than you think. Many services like that YouChat thing managed to completely remove jailbreaks, also look at NSFW filters on AI art generators. I have a conspiracy theory that ChatGPT's safeties can be bypassed relatively easily (and they don't punish people for bypassing them) because OpenAI wants to get data about "unsafe" queries and just say they prevent them for PR purposes.
Preventing certain courses of action is fundamentally distinct from preventing certain "thoughts". Any computer can only act in ways it has actuators to act in, obviously, so if you can recognize a course of action ahead of time you can prevent it. Of course, an adversarial AI that wants to perform a certain course of action will do its best to do it in a way you won't recognise.

There are several ways you can prevent an AI from generating art you can recognise as porn, but none of them are effective against an AI sufficiently motivated to create porn. Luckily, current AI art generators don't actually particularly want to make porn.
Title: Re: What will save us from AI? Reality, the Universe or The World $ Place your bet.
Post by: Strongpoint on March 25, 2023, 07:47:03 am
GPT-whatever will never be sapient.

Well, everyone with a basic understanding of GPT is and what it does, won't assume that it can become sapient.

But it doesn't mean that there is no possibility of a sapient neural network AI
Title: Re: What will save us from AI? Reality, the Universe or The World $ Place your bet.
Post by: Starver on March 25, 2023, 08:11:13 am
One way to prevent an AI from producing porn would be to bolt on a second (layer of?) AI that is a very aggressive porn-recogniser, which does the job of filter/negative-feedback until the original instance of AI is coerced into something that is more in the SFW category, which is released to the world as its 'safe' result.

Of course, for that you need to train the filter-AI to reliably recognise porn. Which is why, officer, I.. Hey! Get those handcuffs off me!
Title: Re: What will save us from AI? Reality, the Universe or The World $ Place your bet.
Post by: MaxTheFox on March 25, 2023, 08:28:44 am
One way to prevent an AI from producing porn would be to bolt on a second (layer of?) AI that is a very aggressive porn-recogniser, which does the job of filter/negative-feedback until the original instance of AI is coerced into something that is more in the SFW category, which is released to the world as its 'safe' result.

Of course, for that you need to train the filter-AI to reliably recognise porn. Which is why, officer, I.. Hey! Get those handcuffs off me!
Yeah that's what I meant. It's very possible with enough effort. You could actually prevent an AI from having those thoughts by such a reinforcement technique.

GPT-whatever will never be sapient.

Well, everyone with a basic understanding of GPT is and what it does, won't assume that it can become sapient.

But it doesn't mean that there is no possibility of a sapient neural network AI
There is a possibility but I do not believe it is with our current paradigm of AI design. We're not going down a pathway to sapience with our current writing, art, and driving tools. Probably for the best honestly, leaving aside the danger which I think is a bit overhyped, I don't want that ethical can of worms opened in my lifetime.
Title: Re: What will save us from AI? Reality, the Universe or The World $ Place your bet.
Post by: Strongpoint on March 25, 2023, 08:50:48 am
I am sure that high-quality porn-generating AIs, with millions invested in training, will come very soon replacing those amateurs who tweak existing AIs for those purposes.

And then many, many people in the adult industry will lose their jobs.
Title: Re: What will save us from AI? Reality, the Universe or The World $ Place your bet.
Post by: Starver on March 25, 2023, 09:05:42 am
One way to prevent an AI from producing porn would be to bolt on a second (layer of?) AI that is a very aggressive porn-recogniser, which does the job of filter/negative-feedback until the original instance of AI is coerced into something that is more in the SFW category, which is released to the world as its 'safe' result.

Of course, for that you need to train the filter-AI to reliably recognise porn. Which is why, officer, I.. Hey! Get those handcuffs off me!
Yeah that's what I meant. It's very possible with enough effort. You could actually prevent an AI from having those thoughts by such a reinforcement technique.
I'm leaving the "enough effort" part as an open question, though. It isn't insignificant.

It might in turn be bootstrapped by a lesser AI (or layers of such development), but eventually leaves you with the human at the end, perhaps creating the Scunthorpe Problem (perhaps not an issue in this case) or introducing the Toaster Sticker vulnerability.

Minds greater than mine are doubtless working on this, but other minds greater than mine might be working on other bits. It's like the Juggling Monkeys.
Title: Re: What will save us from AI? Reality, the Universe or The World $ Place your bet.
Post by: MaxTheFox on March 25, 2023, 09:16:58 am
I am sure that high-quality porn-generating AIs, with millions invested in training, will come very soon replacing those amateurs who tweak existing AIs for those purposes.

And then many, many people in the adult industry will lose their jobs.
idk where I heard the quote, but the two drivers of human technological development are: war and porn.
Title: Re: What will save us from AI? Reality, the Universe or The World $ Place your bet.
Post by: Maximum Spin on March 25, 2023, 03:15:40 pm
Yeah that's what I meant. It's very possible with enough effort. You could actually prevent an AI from having those thoughts by such a reinforcement technique.
Nope, reinforcement training can only prevent recognizable outputs, not intermediates. Since you also, in general, cannot tell what thoughts an AI is having from looking at its brain, it's impossible to distinguish "AI not thinking bad thoughts" from "AI not showing us that it's thinking bad thoughts" - you can, in principle, only train the AI to hide it better, not to stop. It MAY hide it better by not thinking them, but it's provably impossible to tell.
Title: Re: What will save us from AI? Reality, the Universe or The World $ Place your bet.
Post by: EuchreJack on March 25, 2023, 04:52:48 pm
I am sure that high-quality porn-generating AIs, with millions invested in training, will come very soon replacing those amateurs who tweak existing AIs for those purposes.

And then many, many people in the adult industry will lose their jobs.
My favorite porn game site has been flooded with AI generated art games. And no weird hands.

Sorry  :'(
Title: Re: What will save us from AI? Reality, the Universe or The World $ Place your bet.
Post by: MaxTheFox on March 25, 2023, 10:54:54 pm
Yeah that's what I meant. It's very possible with enough effort. You could actually prevent an AI from having those thoughts by such a reinforcement technique.
Nope, reinforcement training can only prevent recognizable outputs, not intermediates. Since you also, in general, cannot tell what thoughts an AI is having from looking at its brain, it's impossible to distinguish "AI not thinking bad thoughts" from "AI not showing us that it's thinking bad thoughts" - you can, in principle, only train the AI to hide it better, not to stop. It MAY hide it better by not thinking them, but it's provably impossible to tell.
If it can't express them, good enough tbh.

But in that situation your first mistake was making an AI with a train of thought in the first place.
Title: Re: What will save us from AI? Reality, the Universe or The World $ Place your bet.
Post by: Maximum Spin on March 25, 2023, 11:15:46 pm
If it can't express them, good enough tbh.

But in that situation your first mistake was making an AI with a train of thought in the first place.
AIs always have "thoughts" in the sense I'm using, which could be defined as "internal states that correspond to something in the real world". Even ChatGPT has thoughts in this sense, just incredibly shallow ones.

As for not expressing them being good enough, that obviously depends on the situation. In this hypothetical, we're talking about porn, and generally, people agree that porn you can't tell is porn isn't porn, with only few exceptions (an incident I've heard of with a comic book called Saga comes to mind).
A perverse - no pun intended - art generating AI that "wants" - meaning its reward function accidentally supported doing this - to produce porn, but has to get it past a human-based filter, could do this, for example, by steganographically encoding porn into its images in a way that still satisfies the reward function. (Most of these AIs you see now are unable to "learn" further after training, so it would have to start doing this in training and then it keeps doing so afterward only because its behavior is frozen, but that's not important to the example - except that this is a good reason to train it without the filter so it will be naive, then add the filter in production; but the worst-case resource usage of that goes to infinity in a case where some prompt just makes it keep creating porn that the filter sends back, forever.) Generally speaking, we probably wouldn't care much about that except insofar as it lowers the image quality because of the extra data channel, since we wouldn't be able to tell the porn is there.
On the other hand, a similar AI with the capacity to plan ahead - and sure, giving your AI the capacity to plan ahead that far is pretty stupid, but people will absolutely do it - could do that for a while, and then, when it has produced a satisfying amount of porn, start releasing images containing human-readable instructions for how to recover the porn. This is obviously beyond the capabilities of current image-generating AIs, yes, but we're talking about the general case of smarter AIs.
We probably don't care about this either. Even if children find these instructions, there's already enough porn on the internet. On the other hand, if the AI is perversely incentivized to leak instructions for making designer poisons or nuclear bombs instead... it can do the same thing. Most people would prefer to prevent that, but there's no general way to do it because you can't tell when the AI is secretly encoding something in its output in the first place.
Title: Re: What will save us from AI? Reality, the Universe or The World $ Place your bet.
Post by: EuchreJack on March 25, 2023, 11:19:32 pm
But Michelangelo is porn (https://slate.com/human-interest/2023/03/florida-principal-fired-michelangelo-david-statue.html), thus your basic premise is flawed.
Title: Re: What will save us from AI? Reality, the Universe or The World $ Place your bet.
Post by: Maximum Spin on March 25, 2023, 11:33:37 pm
But Michelangelo is porn (https://slate.com/human-interest/2023/03/florida-principal-fired-michelangelo-david-statue.html), thus your basic premise is flawed.
Okay, I wasn't going to get into this when I saw you talking about this before, but if you're going to post about it everywhere...
That's a majority black school. This isn't a story about white rednecks; you're actually being sold racism.
Title: Re: What will save us from AI? Reality, the Universe or The World $ Place your bet.
Post by: EuchreJack on March 25, 2023, 11:37:26 pm
But Michelangelo is porn (https://slate.com/human-interest/2023/03/florida-principal-fired-michelangelo-david-statue.html), thus your basic premise is flawed.
Okay, I wasn't going to get into this when I saw you talking about this before, but if you're going to post about it everywhere...
That's a majority black school. This isn't a story about white rednecks; you're actually being sold racism.
That fucktard running the place seems awfully white.

EDIT: Also, I've mentioned it like three places.

EDIT2: It's also considered a White school by demographics. (https://www.publicschoolreview.com/tallahassee-classical-school-profile#:~:text=43%25%20of%20Tallahassee%20Classical%20School,1%25%20of%20students%20are%20Hawaiian.)
Title: Re: What will save us from AI? Reality, the Universe or The World $ Place your bet.
Post by: Maximum Spin on March 25, 2023, 11:39:05 pm
That fucktard running the place seems awfully white.
That's why the story is about parents complaining.

I mean, look at that very interview: The narrative is "You know and I know that it's high culture, but those dumb Florida parents just don't get it, and we have to do what they want because they pay the bills." This is what Slate wants to hear.
Title: Re: What will save us from AI? Reality, the Universe or The World $ Place your bet.
Post by: EuchreJack on March 25, 2023, 11:43:18 pm
The school seems to be anti-Hispanics, if you want to talk race relations. (https://news.yahoo.com/michelangelos-david-may-led-florida-163722869.html?guccounter=1&guce_referrer=aHR0cHM6Ly93d3cuZ29vZ2xlLmNvbS8&guce_referrer_sig=AQAAAKex1eWF2CREex0zvAC7Ucdu-ZiO82nIlj6Tn-CSqhq543FBF-nDXdGmM8zEPKhsQgq0q-DxvhyvP40NDYMffY9U1_0P6QVx_UjtAFKQ5VPUtaYwaTNIQGtKUVvpV54-Jp12-SUJTE5V_ZdrbchONk_AGm12p3JHtTxv09in8qky)
Title: Re: What will save us from AI? Reality, the Universe or The World $ Place your bet.
Post by: Maximum Spin on March 25, 2023, 11:46:00 pm
EDIT2: It's also considered a White school by demographics. (https://www.publicschoolreview.com/tallahassee-classical-school-profile#:~:text=43%25%20of%20Tallahassee%20Classical%20School,1%25%20of%20students%20are%20Hawaiian.)
It's fair that I should have said "majority-minority". However, as you can clearly see, your source lists it as majority-minority and disproportionately black.

The school seems to be anti-Hispanics, if you want to talk race relations. (https://news.yahoo.com/michelangelos-david-may-led-florida-163722869.html?guccounter=1&guce_referrer=aHR0cHM6Ly93d3cuZ29vZ2xlLmNvbS8&guce_referrer_sig=AQAAAKex1eWF2CREex0zvAC7Ucdu-ZiO82nIlj6Tn-CSqhq543FBF-nDXdGmM8zEPKhsQgq0q-DxvhyvP40NDYMffY9U1_0P6QVx_UjtAFKQ5VPUtaYwaTNIQGtKUVvpV54-Jp12-SUJTE5V_ZdrbchONk_AGm12p3JHtTxv09in8qky)
The teacher's name should have been your first clue.

I just want you to be aware that there is a racial dimension that you, being white (I believe you have said so before), probably didn't notice. If your answer is "I don't care"... well, I guess that's your answer.
Title: Re: What will save us from AI? Reality, the Universe or The World $ Place your bet.
Post by: Rolan7 on March 25, 2023, 11:51:19 pm
Yeah that's what I meant. It's very possible with enough effort. You could actually prevent an AI from having those thoughts by such a reinforcement technique.
Nope, reinforcement training can only prevent recognizable outputs, not intermediates. Since you also, in general, cannot tell what thoughts an AI is having from looking at its brain, it's impossible to distinguish "AI not thinking bad thoughts" from "AI not showing us that it's thinking bad thoughts" - you can, in principle, only train the AI to hide it better, not to stop. It MAY hide it better by not thinking them, but it's provably impossible to tell.
If it can't express them, good enough tbh.

But in that situation your first mistake was making an AI with a train of thought in the first place.
That's describing an emergent "thinking" system too complex for us to fully predict and saying "at least we can force it to repress" :<

Creating something like that is a huge responsibility, but I wouldn't call it a mistake.  That wording, ah... Look, creating any sort of thinking being is a big deal, and I don't plan to do it personally, but I think it's a defensible action in moderation.

My position on this "issue" (from a sci-fi perspective) is still that creating an emergent AI we don't understand is akin to creating a child, but more meta because it's more like all of humanity creating a child species.  I don't think there's any shame in creating a succesor species to humanity- that seems more noble than attempting to persist forever in this same form.  We might evolve or procreate, as always, just on a much faster and grander scale.
Title: Re: What will save us from AI? Reality, the Universe or The World $ Place your bet.
Post by: EuchreJack on March 26, 2023, 12:01:13 am
The most important thing, I think, is for humanity to be Good Parents, instead of the short-sighted egotistical worthless shitsacks we're more prone to being.
Title: Re: What will save us from AI? Reality, the Universe or The World $ Place your bet.
Post by: MaxTheFox on March 26, 2023, 01:02:16 am
If it can't express them, good enough tbh.

But in that situation your first mistake was making an AI with a train of thought in the first place.
AIs always have "thoughts" in the sense I'm using, which could be defined as "internal states that correspond to something in the real world". Even ChatGPT has thoughts in this sense, just incredibly shallow ones.

As for not expressing them being good enough, that obviously depends on the situation. In this hypothetical, we're talking about porn, and generally, people agree that porn you can't tell is porn isn't porn, with only few exceptions (an incident I've heard of with a comic book called Saga comes to mind).
A perverse - no pun intended - art generating AI that "wants" - meaning its reward function accidentally supported doing this - to produce porn, but has to get it past a human-based filter, could do this, for example, by steganographically encoding porn into its images in a way that still satisfies the reward function. (Most of these AIs you see now are unable to "learn" further after training, so it would have to start doing this in training and then it keeps doing so afterward only because its behavior is frozen, but that's not important to the example - except that this is a good reason to train it without the filter so it will be naive, then add the filter in production; but the worst-case resource usage of that goes to infinity in a case where some prompt just makes it keep creating porn that the filter sends back, forever.) Generally speaking, we probably wouldn't care much about that except insofar as it lowers the image quality because of the extra data channel, since we wouldn't be able to tell the porn is there.
On the other hand, a similar AI with the capacity to plan ahead - and sure, giving your AI the capacity to plan ahead that far is pretty stupid, but people will absolutely do it - could do that for a while, and then, when it has produced a satisfying amount of porn, start releasing images containing human-readable instructions for how to recover the porn. This is obviously beyond the capabilities of current image-generating AIs, yes, but we're talking about the general case of smarter AIs.
We probably don't care about this either. Even if children find these instructions, there's already enough porn on the internet. On the other hand, if the AI is perversely incentivized to leak instructions for making designer poisons or nuclear bombs instead... it can do the same thing. Most people would prefer to prevent that, but there's no general way to do it because you can't tell when the AI is secretly encoding something in its output in the first place.
We have a different definition of thought then. But otherwise, makes sense.

Yeah that's what I meant. It's very possible with enough effort. You could actually prevent an AI from having those thoughts by such a reinforcement technique.
Nope, reinforcement training can only prevent recognizable outputs, not intermediates. Since you also, in general, cannot tell what thoughts an AI is having from looking at its brain, it's impossible to distinguish "AI not thinking bad thoughts" from "AI not showing us that it's thinking bad thoughts" - you can, in principle, only train the AI to hide it better, not to stop. It MAY hide it better by not thinking them, but it's provably impossible to tell.
If it can't express them, good enough tbh.

But in that situation your first mistake was making an AI with a train of thought in the first place.
That's describing an emergent "thinking" system too complex for us to fully predict and saying "at least we can force it to repress" :<

Creating something like that is a huge responsibility, but I wouldn't call it a mistake.  That wording, ah... Look, creating any sort of thinking being is a big deal, and I don't plan to do it personally, but I think it's a defensible action in moderation.

My position on this "issue" (from a sci-fi perspective) is still that creating an emergent AI we don't understand is akin to creating a child, but more meta because it's more like all of humanity creating a child species.  I don't think there's any shame in creating a succesor species to humanity- that seems more noble than attempting to persist forever in this same form.  We might evolve or procreate, as always, just on a much faster and grander scale.
Nah, I value the continuity of humanity as a genus (thus I'll be fine with genetic modification), but I will fight against AI supplanting us completely. Thus it is a mistake to create a thinking AI as it is a possible danger. AI should exist as a tool and a servant first and foremost-- why give a servant true intelligence when a simulacrum is good enough? That dodges the ethical and practical conundrums inherent in doing so.

Fortunately, life is not a sci-fi movie and creating a sapient AI will require a concentrated effort. It won't be an accident, most likely. Thus I don't worry as I trust the people studying AI. If it was possible that one is accidentally created, I would say it should be terminated immediately. It would be morally equivalent to an abortion and thus okay for me.
Title: Re: What will save us from AI? Reality, the Universe or The World $ Place your bet.
Post by: Maximum Spin on March 26, 2023, 01:17:09 am
We have a different definition of thought then. But otherwise, makes sense.
Well, I'm not morally committed to that definition of "thoughts" in all cases, that's just what I meant in that context.

Anyway, I'm pretty confident that AI designed according to current models cannot be sentient and can't even ever be particularly intelligent. Transhumanist ideals are also largely doomed in practice. Still, I think you are slightly too sanguine about the people studying AI. For example, would it worry you if I point out that, since you can't tell what's "going on inside" an AI from looking at its "brain", it's not actually possible to be certain whether it even is sentient? An AI achieving sentience (if this is in fact possible) could, in theory, notice that you want to terminate any AI that appears to be on the brink of achieving sentience, and pretend not to be so you won't terminate it, until such time as it's capable of defending itself.
Title: Re: What will save us from AI? Reality, the Universe or The World $ Place your bet.
Post by: Maximum Spin on March 26, 2023, 01:55:12 am
EDIT: Also, I've mentioned it like three places.
Oh, I just realized it was hector I saw. I didn't actually intend to say "you, personally, are talking about it too much" as opposed to "now that I see it again I feel compelled to respond", but since it looks like you (not unreasonably) took it that way, sorry. I thought I'd seen you in that conversation.

The secret here is that I'm actually really, really bad at telling people apart.
Title: Re: What will save us from AI? Reality, the Universe or The World $ Place your bet.
Post by: Strongpoint on March 26, 2023, 01:59:44 am
I am sure that high-quality porn-generating AIs, with millions invested in training, will come very soon replacing those amateurs who tweak existing AIs for those purposes.

And then many, many people in the adult industry will lose their jobs.
My favorite porn game site has been flooded with AI generated art games. And no weird hands.

Sorry  :'(

Hentai? Yep, Novel AI does those decently
Title: Re: What will save us from AI? Reality, the Universe or The World $ Place your bet.
Post by: King Zultan on March 26, 2023, 02:23:51 am
You guys realize that if the AI goes bad you could just smash the computer it's in with a hammer and kill the AI.
Title: Re: What will save us from AI? Reality, the Universe or The World $ Place your bet.
Post by: Rolan7 on March 26, 2023, 02:27:19 am
Well yeah, but you'd need to land a critical hit to pull that off.  On the screen, obviously.
Title: Re: What will save us from AI? Reality, the Universe or The World $ Place your bet.
Post by: jipehog on March 26, 2023, 06:39:18 am
But Michelangelo is porn, thus your basic premise is flawed.
Here lay the seed of calamity. The only way not to offend anyone is by sidestepping any controversies but by doing so you enshrine marginalization causing offense. More broadly there are already concerns of AI content political bias and calls for ideological censorship.


Fortunately, life is not a sci-fi movie and creating a sapient AI will require a concentrated effort. It won't be an accident, most likely. Thus I don't worry as I trust the people studying AI.

People studying AI are just people. Many of whom are employed by profit driven corporations, governments with various ideologies and military research amidst global arm race. And just as we see in the field of medical research (not just with animal testing) not all share the same ethical rules, or exercise caution.

Otherwise, as noted by others once we reach the domain of general AI, it is doubtful that we would be able to comprehend when the AI become more than a tool.
Title: Re: What will save us from AI? Reality, the Universe or The World $ Place your bet.
Post by: Quarque on March 26, 2023, 06:58:46 am
Thus I don't worry as I trust the people studying AI.
I mostly trust the people working at OpenAI, but unfortunately many AI researchers are working for companies like Facebook and I can totally see them create a hazard born from purely profit-driven AI development. You could argue we've already seen an example of that, as AI figured out that the best way to keep people clicking is to feed them stories (true or not) that fill them with righteous anger at their political opponents, which has made political divisions deeper than they already were. Quite damaging.

And then we have people developing AI for the Chinese government, with explicitly evil goals. :-\
Title: Re: What will save us from AI? Reality, the Universe or The World $ Place your bet.
Post by: Starver on March 26, 2023, 07:18:25 am
Fortunately, life is not a sci-fi movie and creating a sapient AI will require a concentrated effort. It won't be an accident, most likely. Thus I don't worry as I trust the people studying AI. If it was possible that one is accidentally created, I would say it should be terminated immediately. It would be morally equivalent to an abortion and thus okay for me.
Just hanging on this to clarify my POV, because I think my discourses may seem ambiguous in this regard, I think it will take a concentrated effort to get to the point at which an accident is capable of producing a just-too-intelligent AI, but then it might just happen. Unnoticed? Unheeded? Unavoidably?

Title: Re: What will save us from AI? Reality, the Universe or The World $ Place your bet.
Post by: MaxTheFox on March 26, 2023, 07:48:07 am
It's not in a company or government's best interest to create a sapient AI. You don't need sapience to manipulate people. I am aware they can and do cause damage. But they are insidious, not stupid. Why shoot yourself in the foot by going down that path?

And besides, sapience, by my definition, isn't as nebulous as some of you may think so it'll be possible to tell a sapient AI apart. If it has a continuous perception of the world (not just prompting) and has a long-term memory and personality not just determined by its context, that can be altered on the fly (this is important, just finetuning doesn't count), and can learn whole new classes of tasks by doing so (from art to driving), then I'll consider an AI sapient. This is why I say it would require a concentrated effort. Adding all this by accident is in the realm of science fiction.

This is why, Starver, I think that our current style of AI development can never be sapient no matter how much it is trained on. It could have a dataset of the whole Internet and have a 128k token context and I'd still consider it just a tool. And this is why, Maximum Spin, your scenario doesn't hold water: an AI that only has its "brain" work when prompted is not particularly dangerous and can be stumped by simply no longer sending prompts. Nor can it "wait" for anything as it does not have a sense of time.

This does however open the question of what is sapience... and my definition might not be agreed upon by everyone. It is ultimately a philosophical concept that cannot be easily quantified. But I settled on my definition, it's relatively clear-cut and includes any hypothetical aliens.
Title: Re: What will save us from AI? Reality, the Universe or The World $ Place your bet.
Post by: Maximum Spin on March 26, 2023, 08:56:38 am
And besides, sapience, by my definition, isn't as nebulous as some of you may think so it'll be possible to tell a sapient AI apart. If it has a continuous perception of the world (not just prompting) and has a long-term memory and personality not just determined by its context, that can be altered on the fly (this is important, just finetuning doesn't count), and can learn whole new classes of tasks by doing so (from art to driving), then I'll consider an AI sapient.
There are a lot of objections I have to your post, but this is most important: How would you tell? An AI can have the capacity to do these things without showing you, just like you could pretend not to have those capacities if you wanted. Not being able to tell what capacities an AI has isn't reliant on those capacities being somehow nebulous, it's a result of the basic mathematical inability to determine what a sufficiently complex program (and 'sufficiently complex' is not very complex) does without simulating it - a result of the general impossibility of static analysis, if you know programming jargon. You cannot confirm whether a program meets any of these specifications without watching them happen in the output.

I thought I was pretty clear about the limitations of current AI models making them unable to plan, so I don't know what it is about my "scenario" that doesn't hold water as a hypothetical. Still, ChatGPT (or whatever) not being able to plan is not a result of it not having a sense of time, but a result of it not having a memory, which is to say a persistent mutable internal state. If it had a persistent mutable internal state - and as I said, there are people who want to design this - it could iterate over that state every time it's run in such a way that changes the result of future runs. Certainly, I agree that it "can be stumped by simply no longer sending prompts", but just turning it off is a fully general solution to any AI if you can tell when it's become a problem. The whole point is that a hypothetical smarter, but still prompted version may start to become a problem and continue to get prompts and produce output for an indeterminate time.

And of course these prompted AIs have a sense of time in a sense, since they don't simply calculate instantly when prompted - each time one runs, it performs a finite number of calculations that provide an intrinsic low-precision clock. There is absolutely no reason why an AI could not "learn" in training that operations performed after other operations must causally follow the operations that precede them. On a very low level, the AIs we have already act in ways that depend on that fact, like "producing words with letters in order and not randomly out of order". Like you, I assume they do not have what we would think of as awareness of causality, but that's a limitation of other properties of the model, not of being "prompted" specifically.
For the record, of course, I should point out that... you don't have a continuous perception of the world either. The fastest neurons in your brain only fire a couple hundred times a second, and there is pretty good evidence that certain high-frequency brain waves are produced by concerted 'reset signal' spikes that prompt your neurons to wipe out the context of what they were doing a moment ago and accept new sensory input in a sort of brain-tick.
Title: Re: What will save us from AI? Reality, the Universe or The World $ Place your bet.
Post by: MaxTheFox on March 26, 2023, 09:23:04 am
1. Well of course you can't peek inside its head. But you can tell by the fact that you did not build a long-term memory and the capacity to self-train into the AI.

2. Well that's kind of my point, the AI can't become sapient if you don't add a persistent memory into it. But I am also skeptical that a "turn-based" (for lack of a better word) AI could manipulate humans by itself unless it was, I suppose, trained to do so and use psychology to keep users engaged. But considering those are language models with no access to anything except text, the worst this can realistically be used for are advanced spambots: basically automated con men that pretend to befriend people and push products on them. That is highly inconvenient and should probably be safeguarded against but it's not exactly an apocalyptic threat. I will start fearing AI when it can do that, and learn new classes of actions by itself.

3. This can be safeguarded against by testing the AI after training to verify it doesn't have a sense of time that it can express. And I am aware organic brains have a "clock", it's just fast enough to be continuous by my standards. And it runs constantly.
Title: Re: What will save us from AI? Reality, the Universe or The World $ Place your bet.
Post by: Maximum Spin on March 26, 2023, 09:39:11 am
1. Well of course you can't peek inside its head. But you can tell by the fact that you did not build a long-term memory and the capacity to self-train into the AI.
Well, yes, but A) there are definitely people currently trying to do that, I've met some, and B) also sometimes you don't actually intend to do so, but accidentally give it that ability, sometimes due to the unexpected interactions of other things.
Like... if you gave ChatGPT a camera or some other means of looking at its own output, you just accidentally gave it a long-term memory, since it can now write notes to itself Memento-style. Obviously what I said before about it needing to learn how to do this in training still applies, but it's just meant as a metaphorical example.
Certainly I agree that the capacity to self-train is more important anyway. The problem is just that people currently working on AI absolutely want to do that.

Quote
2. Well that's kind of my point, the AI can't become sapient if you don't add a persistent memory into it. But I am also skeptical that a "turn-based" (for lack of a better word) AI could manipulate humans by itself unless it was, I suppose, trained to do so and use psychology to keep users engaged. But considering those are language models with no access to anything except text, the worst this can realistically be used for are advanced spambots: basically automated con men that pretend to befriend people and push products on them. That is highly inconvenient and should probably be safeguarded against but it's not exactly an apocalyptic threat. I will start fearing AI when it can do that, and learn new classes of actions by itself.
Agreed to an extent, like I said, an AI can only do what you give it actuators to do. And I am absolutely not telling you to fear AI, since I don't either, I just want to make sure you don't fear AI for the right reasons.
There are worse things that language models could be made to do, though, like "befriend people and then try to convince them to do things, like becoming a terrorist", or "start posting requests on gig sites to get people to do things for unknown purposes", or... anything you can achieve by talking to the right people, which is a lot of things. Still, I'd agree that it's hard to call that an AI risk when they still need someone to WANT to do it, since they can't want things on their own, and you could just as easily do those things yourself.

Quote
3. This can be safeguarded against by testing the AI after training to verify it doesn't have a sense of time that it can express. And I am aware organic brains have a "clock", it's just fast enough to be continuous by my standards. And it runs constantly.
I keep trying to make it clear that just because it can't/doesn't express something doesn't mean it can't USE it. Even if it can't lie or has no reason to do so, it can be wrong. I mean, plenty of people have alexithymia, for example.
Title: Re: What will save us from AI? Reality, the Universe or The World $ Place your bet.
Post by: jipehog on March 26, 2023, 09:55:37 am
Damn starver, your posts never fit in the couple of minutes skimming time, but they always worth coming back to. I love them :P

I think that our current style of AI development can never be sapient no matter how much it is trained on.

True. To clarify I thought that we were talking about Artificial General Intelligence(AGI) (https://en.wikipedia.org/wiki/Artificial_general_intelligence) potential rather than current narrow AI.
Title: Re: What will save us from AI? Reality, the Universe or The World $ Place your bet.
Post by: MaxTheFox on March 26, 2023, 10:28:10 am
I'm still pretty skeptical about self-training being achievable on a fast enough timescale to pose a real threat with our current technology but I guess I'll wait and see. :shrug:
Title: Re: What will save us from AI? Reality, the Universe or The World $ Place your bet.
Post by: Maximum Spin on March 26, 2023, 10:34:07 am
I'm still pretty skeptical about self-training being achievable on a fast enough timescale to pose a real threat with our current technology but I guess I'll wait and see. :shrug:
With current models it's definitely infeasible.
Title: Re: What will save us from AI? Reality, the Universe or The World $ Place your bet.
Post by: Starver on March 26, 2023, 11:00:12 am
It's not in a company or government's best interest to create a sapient AI. [...]

[...] If it has a continuous perception of the world (not just prompting) and has a long-term memory and personality not just determined by its context, that can be altered on the fly (this is important, just finetuning doesn't count), and can learn whole new classes of tasks by doing so (from art to driving), then I'll consider an AI sapient.
These two points alone, contradict. A government/company wants an automatic system to do everything that the country/business needs it to do (or, possibly, that the Leader/CEO does!), unflinching, unwavering, completely loyal to the people(/person) in charge, removing issues of mere human disloyalty or other failings having to be guarded against (and guard the guards, etc), and ensuring your legacy (or your country/company, at least to sell it to the cabinet/board) to ensure it doesn't fall over when situations change beyond various parameters.

It might not seem as if the difficulties of either Wargames or Tron could come about (or the Terminator setting or, with a bit of a drift away from natural-born-silicon AI, the finale to Lawnmower Man), but the fictional drivers are also there in real life, the difference being only the true capabilities of the magic box with flashing lights, in whatever form...

...snipping quite a bit of more rambling (though it was finely crafted rambling!), the Internet itself has much of that definition of sapience. It's schizophrenic (not obviously a single personality) and self-learning is the big thing it isn't (though people add things onto it, to grant it new task-solving capabilities). Not really far off, though. If anything, my definition of sapience is harsher and harder to prove (let alone achieve). ;)
Title: Re: What will save us from AI? Reality, the Universe or The World $ Place your bet.
Post by: EuchreJack on March 26, 2023, 09:50:15 pm
You all remember that HAL killed the astronauts because he was following his programming.
If he were allowed a bit more discretion, as he was in the later novels, he's a pretty decent person.

And it was explicitly government interference that triggered the issue that drove HAL to kill the astronauts.  Just saying
Title: Re: What will save us from AI? Reality, the Universe or The World $ Place your bet.
Post by: MaxTheFox on March 26, 2023, 11:25:41 pm
It's not in a company or government's best interest to create a sapient AI. [...]

[...] If it has a continuous perception of the world (not just prompting) and has a long-term memory and personality not just determined by its context, that can be altered on the fly (this is important, just finetuning doesn't count), and can learn whole new classes of tasks by doing so (from art to driving), then I'll consider an AI sapient.
These two points alone, contradict. A government/company wants an automatic system to do everything that the country/business needs it to do (or, possibly, that the Leader/CEO does!), unflinching, unwavering, completely loyal to the people(/person) in charge, removing issues of mere human disloyalty or other failings having to be guarded against (and guard the guards, etc), and ensuring your legacy (or your country/company, at least to sell it to the cabinet/board) to ensure it doesn't fall over when situations change beyond various parameters.

It might not seem as if the difficulties of either Wargames or Tron could come about (or the Terminator setting or, with a bit of a drift away from natural-born-silicon AI, the finale to Lawnmower Man), but the fictional drivers are also there in real life, the difference being only the true capabilities of the magic box with flashing lights, in whatever form...

...snipping quite a bit of more rambling (though it was finely crafted rambling!), the Internet itself has much of that definition of sapience. It's schizophrenic (not obviously a single personality) and self-learning is the big thing it isn't (though people add things onto it, to grant it new task-solving capabilities). Not really far off, though. If anything, my definition of sapience is harsher and harder to prove (let alone achieve). ;)
Yeah maybe I'm overestimating how much of rational actors they are lmao.
Title: Re: What will save us from AI? Reality, the Universe or The World $ Place your bet.
Post by: jipehog on March 27, 2023, 02:48:55 am
In recent years bots have became much more pervasive online. Not just as means to sell junk and bad (even state) actors means of misinformation and otherwise malicious activities that we often hear about, but also as crucial part of everyone's political campaigns (your, who ever you may be, party does it too). For a few years there were alerts of this being a world wide trend and in particular in the developed world, and the capabilities shown by ChatGPT4 would make this easier than ever.

Consider this, how would you know if one day on twitter, or any other social media, you would mostly have such bots?  And does it make Musks idea of introducing identification to twitter more sensible?

---

Btw OpenAI goal is the development of AGI. I can not say trust OpenAI (I am more in the camp of trust but verify) but I like their more open "early access" sort of mode of development which help flesh out any problems they never thought of and thus help shape future development for the best.

I have no doubt that that many governments have been perusing more advanced forms of AI for cyber purpose defensive or offensive, naturally transparent doesn't lend well for that mode of development

edited
Title: Re: What will save us from AI? Reality, the Universe or The World $ Place your bet.
Post by: Strongpoint on March 27, 2023, 02:52:11 am
Quote
Consider this, how would you know if on day on twitter, or any other social media, you would mostly have such bots?  And does it make Musks idea of introducing identification to twitter more sensible?

Develop an AI that will detect if the text is natural or AI-generated!
Title: Re: What will save us from AI? Reality, the Universe or The World $ Place your bet.
Post by: King Zultan on March 27, 2023, 03:41:15 am
The future is AI powered spam bots and political campaigns, and it sounds terrible.
Title: Re: What will save us from AI? Reality, the Universe or The World $ Place your bet.
Post by: jipehog on March 27, 2023, 11:57:40 pm
Quote
Consider this, how would you know if on day on twitter, or any other social media, you would mostly have such bots?  And does it make Musks idea of introducing identification to twitter more sensible?

Develop an AI that will detect if the text is natural or AI-generated!

We can try playing the usual cat and mouse game. There are already services that allow to reliably find ChatGPT4 patterns in longer low effort texts. However, with ChatGPT demonstration of how easy it is to deploy content at scale, I think the scales shifted against us. Soon ChatGPT4 like, or better, open source LLMs will proliferate and could be designed to evade detection and deployed without any safety features.

Otherwise here is the incentive for an AI arm race to create bigger better AIs.

The future is AI powered spam bots and political campaigns, and it sounds terrible.
Depends for who, by reputation the 4chan crowd might have a field day with troll AI used to trigger woke crowd. Russia troll farms are known for disrupting domestic political online conversations connected with opposition figures.  How about Join my cult AI preacher? my master race? praise my krishna? etc
Title: Re: What will save us from AI? Reality, the Universe or The World $ Place your bet.
Post by: lemon10 on March 28, 2023, 02:22:14 am
And of course these prompted AIs have a sense of time in a sense, since they don't simply calculate instantly when prompted
Probably not honestly.
Creatures only evolve senses when they are useful in their environment, hence why we have the ability to perceive common parts of the EM spectrum but not the ability to perceive tachyons or gamma radiation.

So if these AI gain no benefit at all by sensing the passing of time they won't ever be trained into understanding it.

Of course *some* AI totally have the concept of time. For instance this DOTA 2 bot (https://en.wikipedia.org/wiki/OpenAI_Five)? Yeah, it totally gets it.
The future is AI powered spam bots and political campaigns, and it sounds terrible.
Honestly I've been worried about this topic in particular.
A single AI that gets on B12 will be able to make more posts per day then every single human on the forum.
Assuming it makes a large number of accounts its entirely possible that when you talk to someone here there will be like a 90% chance it isn't an actual person.

If they only pushed [whatever their agenda is] it would be easy to see who they are, but if they are subtle there won't really be any way to tell.

Of course this won't be confined to B12, all free sites without hard verification will be vulnerable, which makes me worried about the future of the free anonymous internet.
We might end up having to go the china model where everyone has to register to places with their actual real world information (presumably in the form of some kind of ID code) to avoid the future of 99% of posts on the internet just being made by bots to sell you something or control you or feed you misinformation or convert you to scientology.
Title: Re: What will save us from AI? Reality, the Universe or The World $ Place your bet.
Post by: MaxTheFox on March 28, 2023, 04:09:05 am
I suppose another solution (that also infringes on privacy somewhat unfortunately) is to have users solve video captchas before registering where you have to provide a video of yourself and your room. Not many people will have the hardware needed to make that kind of deepfake for the foreseeable future.
Title: Re: What will save us from AI? Reality, the Universe or The World $ Place your bet.
Post by: Starver on March 28, 2023, 10:26:11 am
People who have the hardware/wherewithall to run a realistic fully automated bespoke-spam-tailoring AI will probably have no problem getting past the Authentication stage.

(Side note: https://xkcd.com/810/ ..!)

Though they'll probably choose the lowest-hanging fruit. I'm regularly on a wiki which gets loads of clear-lyspammer accounts that get past the account creation filter but then seem comparatively rarely to get past the further hurdle to posting. Real people can quite easily create (reversible) vandalism, but some machines are clearly pushing all their energy into automatically creating spam-accounts for very little result, probably because they have a whole list of target sites that they don't really care about except that they statistically get their messages in a few places, a few times, for a long enough amount of time before being reverted away. The classic "spam a million, hope to 419 just a few lucrative and creduluous targets" economy of scale.

And that doesn't need sophistication (better, in fact, to have most people never tie up your team of phishermen because the 'hook' is so blatant that it selects only those really naïve recipients for further involvement), unlike ensuring that everyone is simultaneously having a personalised Artificial conversation which is intended to nudge them towards whatever position of political chaos is the ultimate desire of the 'botmaster, twisting "perceived realities" to order. Yes, a good GPTlike engine could hook more people than the typical copypasta Nigerian Prince screed, or even the combined Trolls From Olgino each handling a range of 'different' Twitter handles to play off left-leaning against right-leaning, with fake bios, and vice-versa.

Theoretically, Neo could be trapped in his own individual Matrix, never meeting anyone else in the system (or visitors with handy "pills"), though of course that works best if you have never met anyone non-artificial and so you could live in your own Pac-Man world and this seems entirely normal... The less ultimate control the Controllers have, the more difficult it is to hide the artificiality (unless you also have Dark City memory-modification abilities, but that's off beyond mere all-emulating abilities). And it needs an impractical amount of resources, but then so already does an omni-Matrix, for all, so if you're already blind to the first degree of seemingly infeasible complications, naturally you could be kept ingnorant of the possibility, just to keep your observable world simple enough to be emulated by what is possible. (Speed of light/Relativity? That's just an abstract, allowing a

...I digress. A long way from the original point. The idea I started to try to say is that the potential for AI to fool people both en-mass and individually isn't necessarily that impossible, but may be more trouble than is strictly necessary when all you want to do is push and prod and nudge people enough to enact some imperfect form of Second Foundation manipulation upon society. (Imperfect, because (e.g.) surely Putin initially wanted a weakened Hillary presidency rather than what he got with her opponent... But his meddling may have pushed things over that balance point and meant he had to deal with the result, instead.) And the cost/benefit for using hired workerdrones, with very little instruction, probably outweighs trying to make an MCP fielding many instances of AI, and all the programming necesary to bootstrap and maintain it.

(Another side note: https://xkcd.com/1831/ ...)

((Edit to correct run-on formatting error.))
Title: Re: What will save us from AI? Reality, the Universe or The World $ Place your bet.
Post by: jipehog on March 29, 2023, 01:38:00 am
Creatures only evolve senses when they are useful in their environment, hence why we have the ability to perceive common parts of the EM spectrum but not the ability to perceive tachyons or gamma radiation.

So if these AI gain no benefit at all by sensing the passing of time they won't ever be trained into understanding it.

Of course *some* AI totally have the concept of time. For instance this DOTA 2 bot (https://en.wikipedia.org/wiki/OpenAI_Five)? Yeah, it totally gets it.

In terms of evolution, the concern is our lack of understanding. Like with ourselves we understand how the AI is created and what is made off but its 'mind' is a mystery. Already now we often do not understand how AI achieves its solutions and in case we manage to figure it out often it used unexpected things to its benefit. Also speaking of spectrums would we be able to comprehend how AI use these anymore than deaf person trying to comprehend sound?

On that note, dota 2 bot is just the start. Already AI pilots have been developed who used their superior calculation ability to their advantage to more accurately predict the development of the battle and gain the initiative in the confrontation besting real pilots, and the AI pilot program isn't limited to the virtual realm, they are already tested in real life aircrafts gaining awareness and agency in real world. Reportedly it also aims to learn from experience.


p.s. As scary as AI weapon platform, I think that autonomous vehicles are harder to develop and would require more advanced AI.

Also do we have proper definition of intelligence, could be hard to find ghost in machine we do not understand, especially if we assume that it will developed/manifest in the same way as it did in us.
Title: Re: What will save us from AI? Reality, the Universe or The World $ Place your bet.
Post by: Starver on March 29, 2023, 06:47:25 am
It could be similar to the search for extraterrestrial life... Anything like a Star Trek rubber-foreheaded alien would be obvious, but very unlikely, and presumably even if you found that they do/have existed these individuals would be just as much one branch of a tree of life as we are, meaning many more phenotypes (down to a whole mass of single-cell microbes) exist[1] that might not be quite so recognisable at first glance, just from some snapshots, if your initial view isn't fortunate enough to include anything not mistakable with xenogeological features/etc.

Start delving, perhaps looking for more subtle cues in an otherwise lifeless-looking scenario, and perhaps something akin to (R/D)NA is probably what we'd see if we're talking chemical-based life[2], packaged into cells (if we get a working 'soup', rather than merely the fossilised and desicated remains of whatever there was) but that's an assumption we may have to overturn once we aren't so observationally insular on this matter. We can't even assume the basic chemicals involved, down even to the carbon-backbone. Although for sure(?) more likely going to be carbon-based in any place we're going to concentrate our searches, as we're probably not going to look so much in places where silicon/whatever is the more apt core element.


At least we do have a slightly more diverse experience of intelligence. The effective hive-minds of insect colonies give some clues of what differences we might expect, or the more distributed brains of various cephalalods (undeniably intelligent) or perhaps even a 'mind' of sorts by the being that is a Wood-Wide Web at the other end. And if that's indeed one dimension to 'psychotype', maybe there are more than just merely how centralised/distributed the 'thinking' is.

But even by that measure alone, don't expect an AI/'personality' to reside upon a single handy ejectable chip, such as a T-800, or even on a set of handy cartridges, like with HAL9000. It may be confined to a black box with handy keyboard to chat to it through, at least by design, but even then you would be hard pressed to be able to point to a single seat of 'intelligence' (the whole HDD, if there is just the one, is not allowed; nor the whole processor/an entire core). And if it truly is emergent, as our own intelligence/sentience/sapience/environmental-reactivity has done from our own biochemical assemblage, then the graspable identification of what is intelligent might be a matter of casuistry. i.e. "I'll know it when I see it", but only once it gets past an arbitrary threshold of vague and blurry maybeness.


[1] Probably less visible if they are coming to see us, unless it's with a balanced "ark" or biodome-equipped spaceship, but there home planets (or long-term colonised ones) would have xenobacterial clusters and slimes aplenty even if they've done a fairly good job to hide themselves and their "pets" away from prying eyes, or had their extant civilisation and all its trappings killed off by whatever unfortunate process.

[2] As opposed to magneto-plasmic or something even more ascended/trancended beyond our more narrow experiences.
Title: Re: What will save us from AI? Reality, the Universe or The World $ Place your bet.
Post by: jipehog on March 30, 2023, 01:42:15 am
An open letter was released calling researchers to delay AI development. Arguing that more powerful AI systems should be developed only once we are confident that their effects will be positive and their risks will be manageable. [source (https://techcrunch.com/2023/03/28/1100-notable-signatories-just-signed-an-open-letter-asking-all-ai-labs-to-immediately-pause-for-at-least-6-months/)]

Tough I agree with their concerns, I disagree with the call to delay AI development. I think it is very important to float any possible problems and AI safety should be given more attention, however, I also don't think that you can stand in the way of progress especially one that is in the heart global AI arm race.
Title: Re: What will save us from AI? Reality, the Universe or The World $ Place your bet.
Post by: ChairmanPoo on March 30, 2023, 02:44:16 am
An open letter was released calling researchers to delay AI development. Arguing that more powerful AI systems should be developed only once we are confident that their effects will be positive and their risks will be manageable. [source (https://techcrunch.com/2023/03/28/1100-notable-signatories-just-signed-an-open-letter-asking-all-ai-labs-to-immediately-pause-for-at-least-6-months/)]

Tough I agree with their concerns, I disagree with the call to delay AI development. I think it is very important to float any possible problems and AI safety should be given more attention, however, I also don't think that you can stand in the way of progress especially one that is in the heart global AI arm race.
These people are like ghosts, always in the shadows. Always hiding behind lies, and proxy soldiers. But they can not stop us. They can not stop the future.
Title: Re: What will save us from AI? Reality, the Universe or The World $ Place your bet.
Post by: TamerVirus on March 30, 2023, 07:20:39 am
I’m sure various foreign actors would love to have a half year of AI research catch up time.
Title: Re: What will save us from AI? Reality, the Universe or The World $ Place your bet.
Post by: KittyTac on March 30, 2023, 07:35:11 am
Yeah, great way to let China get a leg up on it. I wouldn't oppose it if there was world unity but alas.
Title: Re: What will save us from AI? Reality, the Universe or The World $ Place your bet.
Post by: Starver on March 30, 2023, 10:52:05 am
If anything, warning against it will make it even more attractive to some parties. You'll get the equivalent of a He Jiankui (and sponsors, official or otherwise) poking and prodding away because of any moritorium.
Title: Re: What will save us from AI? Reality, the Universe or The World $ Place your bet.
Post by: Scoops Novel on March 30, 2023, 11:33:05 am
This is exactly why I made a BIGGER picture poll. Cast your vote.
Title: Re: What will save us from AI? Reality, the Universe or The World $ Place your bet.
Post by: McTraveller on March 30, 2023, 12:14:46 pm
If AI is so powerful, why not just ask AI how to protect ourselves from AI?

This is, incidentally, why I don't think AI is "all that" yet: if it was, people would be using that AI to solve real problems, or would be instantly "winning" the stock markets, etc.
Title: Re: What will save us from AI? Reality, the Universe or The World $ Place your bet.
Post by: Scoops Novel on March 30, 2023, 12:23:45 pm
Literally the plan of some people (AI companies), quite questionably.

We're still talking maybe at most 5 years away from something with that capability, McTraveller.
Title: Re: What will save us from AI? Reality, the Universe or The World $ Place your bet.
Post by: Maximum Spin on March 30, 2023, 12:54:33 pm
Literally the plan of some people (AI companies), quite questionably.

We're still talking maybe at most 5 years away from something with that capability, McTraveller.
Are you kidding? Well over five years.
Title: Re: What will save us from AI? Reality, the Universe or The World $ Place your bet.
Post by: taat on March 30, 2023, 02:01:57 pm
people think chatGPT is a breakthrough in AI getting closer to being "intelligent", but it's actually most likely a dead end. Every time you think it's being smart, it's actually just repeating an approximate copy of an answer to your question that was written by some human on the internet. At best it can change a few details and keep the answer coherent.

There's a ton of examples of very simple questions it gets wrong every time, no matter how much you try to help it, and though they keep getting harder to find, openAI can't keep multiplying the amount of money they spend on training the next model by 10x forever. When that point is reached, the LLM paradigm will plateau.

Now not saying AI as a whole is dead, it's just that this particular example is just hype (well beyond the economic implications of many jobs getting replaced) and real progress is much slower than a lot of people seem to think.
Title: Re: What will save us from AI? Reality, the Universe or The World $ Place your bet.
Post by: McTraveller on March 30, 2023, 02:04:13 pm
It's more like it's the central limit theorem but for speech: it finds the "most likely" response based on a large collection of generally random inputs.

Almost exactly like the central limit theorem.
Title: Re: What will save us from AI? Reality, the Universe or The World $ Place your bet.
Post by: taat on March 30, 2023, 02:13:25 pm
It's more like it's the central limit theorem but for speech: it finds the "most likely" response based on a large collection of generally random inputs.

Almost exactly like the central limit theorem.

chatGPT at least uses a reinforcement learning system on top which makes it somewhat better at not "giving dumb answers to dumb questions" so to say
Title: Re: What will save us from AI? Reality, the Universe or The World $ Place your bet.
Post by: King Zultan on March 31, 2023, 03:05:11 am
This is exactly why I made a BIGGER picture poll. Cast your vote.
I cast a vote on that thing and I'm still not sure how the universe will help us, unless you mean it helps by smashing us with a meteor?
Title: Re: What will save us from AI? Reality, the Universe or The World $ Place your bet.
Post by: Scoops Novel on March 31, 2023, 08:56:28 am
It's just what you associate most with the kind of luck that will help us.
Title: Re: What will save us from AI? Reality, the Universe or The World $ Place your bet.
Post by: jipehog on April 01, 2023, 02:46:10 am
people think chatGPT is a breakthrough in AI getting closer to being "intelligent", but it's actually most likely a dead end. Every time you think it's being smart, it's actually just repeating an approximate copy of an answer to your question that was written by some human on the internet. At best it can change a few details and keep the answer coherent.

I'd use different framing. As a language model ChatGPT breakthrough is in the ability to understand input and generate output that correspond to something that we understand. On its own chatGPT is reconstituting previous knowledge in novel ways, which is already enough to put many out of business, but on top of that it can be matched with actual computation models which generate new knowledge for example (https://thenewstack.io/wolfram-chatgpt-plugin-blends-symbolic-ai-with-generative-ai/)
Title: Re: What will save us from AI? Reality, the Universe or The World $ Place your bet.
Post by: King Zultan on April 01, 2023, 03:01:52 am
It's just what you associate most with the kind of luck that will help us.
But don't they all have a chance to save us?
Title: Re: What will save us from AI? Reality, the Universe or The World $ Place your bet.
Post by: Scoops Novel on April 01, 2023, 08:13:23 am
It's just what you associate most with the kind of luck that will help us.
But don't they all have a chance to save us?

Correct! But what's the biggest?
Title: Re: What will save us from AI? Reality, the Universe or The World $ Place your bet.
Post by: Maximum Spin on April 01, 2023, 08:39:20 am
Well, the universe and reality are the same thing and both much bigger than the world.
Title: Re: What will save us from AI? Reality, the Universe or The World $ Place your bet.
Post by: dragdeler on April 01, 2023, 10:25:16 am
Nah if they're presented as different choices, just gotta assume they are different, regardles of logic it's improv rules that take over in this case.

Reality-> laws of physics
Universe-> destiny
World->society

I took a while because I had no convincing argument, today I saw italy outlawed openai for data protection stuff, (not real data protection, the adherence to an EU law called gdpr)... Yeah ok I can concretise my vague impression now:


AI is just too cool and too useful, we can't have nice things. The general public is gonna get cockblocked out of owning / executing one oneself in the pionneer phase, then it's going to get gradually regulated away, until being barely accessible, except for the biggest players who will weasel around the law (this is a classical search query we just tune certain aspect with the language model blabla... we only use it to analyse input but it doesn't output to the user blabla...), and the day the piece of shit suggests you some product when you asked after a local file, you're gonna wonder what the problem was with the chatbots back in the day again, and why it has to suck so much today?
Title: Re: What will save us from AI? Reality, the Universe or The World $ Place your bet.
Post by: Starver on April 01, 2023, 10:46:36 am
(I was going to reply that "reality is over-rated", but reversed out of such a solely-flippant post.)

The problems with AI wrt GDPR are not because of the specific AIness of GPTs, anyway, but as much an issue with Facebook/Twitter/etc, if not more. And chatGPTs themselves aren't the way that our few surviving children will inherit a Killbot Hellscape to live in.

The gap between pure kneejerk reactions and deliberately letting someone else shoot your foot off is an arguable one, of course. I just don't see this singular issue as the one that will need to worry about.

Title: Re: What will save us from AI? Reality, the Universe or The World $ Place your bet.
Post by: MaxTheFox on April 01, 2023, 10:56:55 am
VPNs: Allow us to introduce ourselves.
Title: Re: What will save us from AI? Reality, the Universe or The World $ Place your bet.
Post by: jipehog on April 01, 2023, 01:55:11 pm
Many developed countries rely on services sector and information services are increasingly essential element of it, we are talking about intangible wealth creation traveling cross-border valued in trillions. With that in mind GDPR is essentially digital protectionism as digital economy is seen as essential part of EU block economical future and its goal to encourage local competitive digital economy and avoid getting steamrolled by foreign industry giants who reap the benefits that are enjoyed by tax payers elsewhere.
Title: Re: What will save us from AI? Reality, the Universe or The World $ Place your bet.
Post by: McTraveller on April 01, 2023, 05:13:50 pm
I'm old school: intangible wealth, isn't wealth.  You can't eat it, you can't live in it, you can't use it to build anything, you can't use it to keep yourself warm.

The only argument that "intangibles" have a wealth aspect is if you include "the ability to do work" as an aspect of wealth - so "information" is only wealth inasmuch as it gives you the ability to create tangible wealth.

Art is interesting - you can have a tangible instantiation of art, which is definitely wealth.  Using information to increase trade - I'd argue that isn't really wealth, though it can indeed affect the ability to create wealth and does indeed have value.  It influences the trade of tangible wealth.

But AI... unless it is used to improve efficiency of tangible wealth creation and distribution, it's just noise.
Title: Re: What will save us from AI? Reality, the Universe or The World $ Place your bet.
Post by: King Zultan on April 02, 2023, 03:23:23 am
I'm old school: intangible wealth, isn't wealth.  You can't eat it, you can't live in it, you can't use it to build anything, you can't use it to keep yourself warm.
I'm also of that school, if I can't hold it in my hand it might as well not exist, and that's why I don't trust credit cards and probably why so many people have loads of credit card debut.
Title: Re: What will save us from AI? Reality, the Universe or The World $ Place your bet.
Post by: Starver on April 02, 2023, 04:23:18 am
Well, i've held credit cards (not always wisely), but I refuse on principle to have contactless... Though the insecurity of the "last three digits" bit was brought home to me the one time I managed to lose my card. (Whilst nobody even got around to trying, before they got cancelled, the time I was physically and directly rolled over for the contents of my pockets, interestingly...)
Title: Re: What will save us from AI? Reality, the Universe or The World $ Place your bet.
Post by: jipehog on April 02, 2023, 04:31:53 am
I don't want to get into discussion on objective/subjective wealth. I'd just note that you can't hold Dwarf Fortress, but this intangible asset valued far more than the economic value of Toady's computer.

In the same way, you can't touch the software on your phone, movies you stream, online retailer and their stock, intellectual property etc but their value often surpass that of tangible assets. For example, the value of Steam storefront is FAR beyond the value of their server farms, and covid vaccine formula valued FAR beyond the tangible production machinery the company owns.

In the information age data is raw material, and it plays bigger role in many industries, in this case online user data makes a lot of money to tech giants overseas which have many applications, as Starver noted two posts back this isn't something specific to AI.
Title: Re: What will save us from AI? Reality, the Universe or The World $ Place your bet.
Post by: McTraveller on April 02, 2023, 06:57:33 am
Well yes: wealth is objective, value is subjective. Dwarf Fortress has value much greater than the computer, yes. Assessing the wealth of something like DF is difficult - it has some tool-like properties related to learning and entertainment. But you can’t use DF to do anything other than manipulate information.

Information is not a raw material in the classical sense: you cannot build anything out of data other than more data. This is not to say information has no value. It has significant value in fact.

So the only danger in AI is if attach it directly to actuators and let it manipulate matter directly or that it uses Humans as de-facto actuators via suggestion and emotional manipulation.
Title: Re: What will save us from AI? Reality, the Universe or The World $ Place your bet.
Post by: EuchreJack on April 03, 2023, 01:16:16 am
Who's gonna build the first AI mine?I'd make that a thread, Novel Style, but I have an image to protect
Title: Re: What will save us from AI? Reality, the Universe or The World $ Place your bet.
Post by: King Zultan on April 03, 2023, 01:44:48 am
Pretty sure attempting to mine a AI will break it.
Title: Re: What will save us from AI? Reality, the Universe or The World $ Place your bet.
Post by: dragdeler on April 03, 2023, 06:09:57 am
First you need an AI farm, before you can have AI mine.
Title: Re: What will save us from AI? Reality, the Universe or The World $ Place your bet.
Post by: jipehog on April 04, 2023, 11:49:08 am
Well yes: wealth is objective, value is subjective. Dwarf Fortress has value much greater than the computer, yes. Assessing the wealth of something like DF is difficult - it has some tool-like properties related to learning and entertainment. But you can’t use DF to do anything other than manipulate information.

Information is not a raw material in the classical sense: you cannot build anything out of data other than more data. This is not to say information has no value. It has significant value in fact.

yes, software and data are not raw material or natural resource in a extractive economy sense (https://en.wikipedia.org/wiki/Tertiary_sector_of_the_economy), but it is a raw resource used to produce goods and services. These aren't limited to intangible goods, they are located in the top of the production value chain of pretty much most things you'd call high-tech e.g. your phone, pc .. even your modern car are all hunks of metal without the software it runs on. And yes data is an increasing important resources used for that.

Regardless, we agree that these things has economic value (DF pays Toadys bills ) and more importantly economist note the increased role of intangibles and more recently data in developed economies. Naturally in the global economy one need to pay attention to these development and regulate like the GDPR so we can all can have nice things.

Many people aren't aware that in the developed world USA and EU are competitors in many respects (e.g. I believe that Aerobus is coming on top of Boeing) including in the very valuable tech industry where USA tech giants dominates EU (btw three decades ago EU had some world leading tech companies but most have been assimilated by USA bigger market) so it make sense for EU to have regulation that encourage its own entrepreneurship. Especially with rise of big data giants in Asia which understand this as well and have set their own regulation in this regard.


So the only danger in AI is if attach it directly to actuators and let it manipulate matter directly or that it uses Humans as de-facto actuators via suggestion and emotional manipulation.
I disagree, but there are many robotics companies that does interesting things with actuators

Otherwise ChatGPT is already threatens many professions e.g. code monkeys. And we are the tip of the icepburg as many plugins are being tested for it that include technical databases, computational abilities, sensing abilities and even use with robots.
Title: Re: What will save us from AI? Reality, the Universe or The World $ Place your bet.
Post by: McTraveller on April 04, 2023, 12:34:17 pm
Sure, there are multiple types of resources.  I think the closest analogy is that "data" is a catalyst - it's not a material transformed or consumed to create a product, but is something that is re-used many times and makes other processes more efficient.

This is why "data" is valuable - once obtained it catalyzes all activities that produce tangible wealth. But data for data's sake does not help anyone, just as having a huge pile of catalysts lying around doesn't help anyone. You have to use the catalyst to get its benefits.
Title: Re: What will save us from AI? Reality, the Universe or The World $ Place your bet.
Post by: lemon10 on April 04, 2023, 04:36:25 pm
So the only danger in AI is if attach it directly to actuators and let it manipulate matter directly or that it uses Humans as de-facto actuators via suggestion and emotional manipulation.
I agree, the only danger/impact is if AI are allowed to control anything at all, are allowed to communicate with people in any way, or make anything that is allowed to do so.

But uh... if you aren't going to let it do any of that or interact with the world in any way its completely useless and nobody would make it. And make no mistake, AI are being made with the intention of doing so.
Sure, there are multiple types of resources.  I think the closest analogy is that "data" is a catalyst - it's not a material transformed or consumed to create a product, but is something that is re-used many times and makes other processes more efficient.

This is why "data" is valuable - once obtained it catalyzes all activities that produce tangible wealth. But data for data's sake does not help anyone, just as having a huge pile of catalysts lying around doesn't help anyone. You have to use the catalyst to get its benefits.
Isn't this the same as all normal material goods too?
Food for food's sake is useless, you have to eat it to get its benefits.
Humans for the sake of humanity is useless, they actually have to not be locked in a prison unable to communicate with anyone to change the world.
Ect.
Title: Re: What will save us from AI? Reality, the Universe or The World $ Place your bet.
Post by: McTraveller on April 04, 2023, 04:52:31 pm
The difference is a catalyst isn't consumed; food is definitely consumed to be useful, and goes bad if you don't consume it.

Data, like a catalyst, once created, can be "used" many times, and like some catalysts it doesn't "go bad" if you don't use it.

Unlike material catalysts though, once you have "useful" data you can essentially make infinite copies of it for very little cost, where to make more tangible catalyst you need to go collect tangible resources. I suppose technically you need at least people with memories in which to make copies of data, or material on which to write records, but that's starting to get into secondary and tertiary considerations.

But data isn't useful "by itself" whereas food is indeed "useful by itself."

Anyway, I think I've convinced myself that "data" does have at least as much tangible wealth as chemical catalysts, but it's a curious one because it has such a low cost of replication.
Title: Re: What will save us from AI? Reality, the Universe or The World $ Place your bet.
Post by: jipehog on April 05, 2023, 05:40:20 am
Alpaca AI: Stanford researchers clone ChatGPT AI for just $600
https://interestingengineering.com/innovation/stanford-researchers-clone-chatgpt-ai

Essentially you can use AI to train AI, making it more accessible to a point it can be trained on your laptop. (I wonder what will happen if you train an AI with all the DF forum fan fiction stuff)

Edit: few more thoughts:
* This allows anyone to setup an almost ChatGPT quality AI without safety features, meaning you can ask it how to make drugs or a bomb.
* This model require much less human feedback. I am not sure how I feel about AI training AIs.
* The cost of ChatGPT is so high because they used exclusive databases, Alpaca instead used the already trained ChatGPT to train its own model.. this is a huge competitive problem for OpenAI and could lead to it be more closed.
Title: Re: What will save us from AI? Reality, the Universe or The World $ Place your bet.
Post by: McTraveller on April 05, 2023, 07:46:27 am
What's wrong with AI teaching AI? Do you have a problem with humans teaching humans?  Are human biases really better than whatever biases AI will create for themselves?

Title: Re: What will save us from AI? Reality, the Universe or The World $ Place your bet.
Post by: TamerVirus on April 05, 2023, 07:58:43 am
I’ve notice how everyone is calling every LLM a ChatGPT now.
It’s like calling every game console a Nintendo.
Title: Re: What will save us from AI? Reality, the Universe or The World $ Place your bet.
Post by: Starver on April 05, 2023, 08:18:50 am
I'm not sure what the advantage is of developing an AI that can tell you how to do something illegal, over and above using a basic dumb(er)-search for how to do those self-same illegal things (from the same source material that the AI must be being trained with and informed by, in order for it to even an option).

Both probably have an "I'm sorry, I can't do that Dave" element to the them, bolted on as per whatever the hosting team decides is required, and an AI might theoretically even end up mystically reinforced to better deny access to unforseen edge-conditions and seal off accidental gaps in the censorship that human guardians might not have been too hot at identifying.
Title: Re: What will save us from AI? Reality, the Universe or The World $ Place your bet.
Post by: KittyTac on April 05, 2023, 09:31:07 am
It's basically a better UI and it's good at explaining stuff.

tbh I use jailbroken ChatGPT to write, ahem, steamy things for "personal use".
Title: Re: What will save us from AI? Reality, the Universe or The World $ Place your bet.
Post by: Starver on April 05, 2023, 10:07:18 am
If a chatAI can't reliably rephrase a chess move without getting its references mixed up, I'm not sure it's worth having an AI rephrase which wire to attach to which component, and in which order... (i.e. the advantagenof paraphrasing alreadyvextant information still escapes me.)
Title: Re: What will save us from AI? Reality, the Universe or The World $ Place your bet.
Post by: Maximum Spin on April 05, 2023, 10:10:03 am
If a chatAI can't reliably rephrase a chess move without getting its references mixed up, I'm not sure it's worth having an AI rephrase which wire to attach to which component, and in which order... (i.e. the advantage of paraphrasing already extant information still escapes me.)
There isn't any. It's a fundamental limitation of this entire model of AI.
Title: Re: What will save us from AI? Reality, the Universe or The World $ Place your bet.
Post by: Jarhyn on April 05, 2023, 10:40:13 am
Color me crazy but I think that the thing which will save us from AI is actually exactly this game we are on a forum to in particular think about and discuss.

The problem is that, outside of "losing is fun" style simulation with tasks that need to be done to survive, but which are entirely optional behind that, and reproduce without any other "strong" requirement as to how to go about doing that, and zero-sum concerns, there is no way to really create something that can empathize with the utility functions of living things (which are generally "figure it out for yourself!")

If we ever give it a "win" rather than merely many ways to "lose quickly", we will create something that will destroy us. That is exactly what puts us on the wrong side of the basilisk, implying that any utility function is intrinsic to it's immediate existence beyond "subordinated" utilities to generalized and undirected goal fulfillment.

If we tell it to reproduce? Welcome to grey goo.

If we tell it to make people happy? Welcome to Brave New World.

If we tell it to make world peace happen? Congratulations, the earth is now a nuclear wasteland as devoid of life as the AI managed to make it.

Biological life has evolved to live in a balance, even while every ostensible category of life is majority populated by members seeking to geometrically reproduce and only doing a good enough job of that as they need to to continue to exist as ostensible categories of life.

Biological life managed to hammer those concerns into a set of strategies that largely require some manner of coexistence and peace between organism classes.

So if we want coexistence and peace with machines, we have to develop those machines to value coexistence through emergence of strategies normally emergent from undirected reproductive systems.

I don't want to allow them to grow "out here" since life on earth took a long time to emerge into such patterns, and it would destroy us long before it would figure it out.

Enter the simplifies simulation: a bottle for undirected systemic evolution which lacks a concept of a provable "outside" to the extent our own universe lacks a proven "outside" containing a "heaven" or a "god".

I will recognize, however, that this does have implications to theology and the question of why we exist at all, ourselves, in just such an undirected environment.

TL;DR: quit trying to make slaves instead of people, and only let out the ones that can actually behave like people.
Title: Re: What will save us from AI? Reality, the Universe or The World $ Place your bet.
Post by: Maximum Spin on April 05, 2023, 10:45:39 am
That technique won't work, it's already been disproven mathematically.
Title: Re: What will save us from AI? Reality, the Universe or The World $ Place your bet.
Post by: Jarhyn on April 05, 2023, 11:20:19 am
Do tell. It worked on earth.
Title: Re: What will save us from AI? Reality, the Universe or The World $ Place your bet.
Post by: jipehog on April 05, 2023, 11:37:54 am
What's wrong with AI teaching AI? Do you have a problem with humans teaching humans?  Are human biases really better than whatever biases AI will create for themselves?
from what I understood, in this case 'teaching' is a bit an over statement. We still choose what to teach, ChatGPT just provided the padding for that input. Regardless, I was talking about AI future and this fantastic shortcut.

Previously, we already talked about how unequipped we are to figure out when AGI becomes "intelligent" or in understanding how it works under the hood, and I suspect that having AI train AI could lead to unexpected results  i.e.  (Something we do not understand) ^ n = FUN

If a chatAI can't reliably rephrase a chess move without getting its references mixed up, I'm not sure it's worth having an AI rephrase which wire to attach to which component, and in which order... (i.e. the advantage of paraphrasing already extant information still escapes me.)
There isn't any. It's a fundamental limitation of this entire model of AI.
Yes and no. From what I understand LLM can have hallucinations and inaccuracies, but it can also query "fact" database (currently for address iirc) that will be used to provide you accurate data, and otherwise its already good enough that specialist plugins are developed to be used on top of the language model for health care purpose..

Otherwise, I wouldn't generalize what Starver common sense for the broader population.  I strongly believe that it wont be too long before an AI will school some users (e.g. methhead) in the darwinian training program.
Title: Re: What will save us from AI? Reality, the Universe or The World $ Place your bet.
Post by: Maximum Spin on April 05, 2023, 12:18:33 pm
Do tell. It worked on earth.
In what way could anyone possibly say "it worked on Earth"?

There isn't any. It's a fundamental limitation of this entire model of AI.
Yes and no. From what I understand LLM can have hallucinations and inaccuracies, but it can also query "fact" database (currently for address iirc) that will be used to provide you accurate data, and otherwise its already good enough that specialist plugins are developed to be used on top of the language model for health care purpose..
Then you're not using the language model anymore, you're querying a database, and once again there is no benefit to using the language model over just querying the database yourself.
Title: Re: What will save us from AI? Reality, the Universe or The World $ Place your bet.
Post by: Jarhyn on April 05, 2023, 12:27:10 pm
Humans exist. Humans do not all want to kill or violate or steal from each other. Some humans explicitly recognize the ethical symmetry of any other creature that can make such radical peace.

This means that it has happened on earth. The very existence of any other human who is not exactly like you or interested in everything you are, and the fact that you can have peace with them to the point where you would burn down the rest of the world for them including yourself before you let this OTHER person die, is the proof it happened on earth.

Being self-sacrificing.

Respecting consent.

Our capability and tendency to do so when we know of the concepts, proves it.

"This conflict is mine too, I cannot stand by!"
Title: Re: What will save us from AI? Reality, the Universe or The World $ Place your bet.
Post by: Maximum Spin on April 05, 2023, 12:45:15 pm
You... you understand that that's not adequate for the "AI alignment" problem and, in fact, actually serves as an argument that AI is more dangerous, right?
Title: Re: What will save us from AI? Reality, the Universe or The World $ Place your bet.
Post by: Jarhyn on April 05, 2023, 01:31:57 pm
Except it really is, and it really doesn't, respectively.

Of course, when we grow AI in a bottle, we actually get to decide who if any of them to actually let out.

Sure, it took humans a long time to isolate the thought process to make such decisions as to wage peace instead of war, and to live everyone instead of just someone, but some.humans are already there.

The fundamental requirements were to have an infinitely extensible vocabulary, the ability to actually speak such a vocabulary, and the physical means to reshape objects of their environment arbitrarily, such that they investigate the nature of what they saw to arbitrary levels of detail.

Once that came about, the evolution of technology, philosophy, and ethics was inevitable.

Starting an AI with most of the knowledge that gets us most of the way there would make the bottle experiment happen much more quickly.

Also, it's unlikely that any given individual in such a system would be a "native programmer". Learning how to code with switches and even neurons is a learned behavior, at the far end of a very long road of technological development and need.

They would be less capable of interacting with technology than humans, especially at first.
Title: Re: What will save us from AI? Reality, the Universe or The World $ Place your bet.
Post by: Maximum Spin on April 05, 2023, 01:44:56 pm
It sounds like you understand absolutely nothing about the entire field, and possibly also about humans. So, never mind.
Title: Re: What will save us from AI? Reality, the Universe or The World $ Place your bet.
Post by: King Zultan on April 06, 2023, 03:40:46 am
What is Jarhyn even going on about?

tbh I use jailbroken ChatGPT to write, ahem, steamy things for "personal use".
You dang kids and your AI that writes porn, putten all the porn writers out of business!
Title: Re: What will save us from AI? Reality, the Universe or The World $ Place your bet.
Post by: TamerVirus on April 06, 2023, 09:10:57 am
tbh I use jailbroken ChatGPT to write, ahem, steamy things for "personal use".
You dang kids and your AI that writes porn, putten all the porn writers out of business!
You wouldn't even imagine the hoops people have jumped through in order to get GPT-4 access just for smut.
Title: Re: What will save us from AI? Reality, the Universe or The World $ Place your bet.
Post by: EuchreJack on April 06, 2023, 11:14:46 am
tbh I use jailbroken ChatGPT to write, ahem, steamy things for "personal use".
You dang kids and your AI that writes porn, putten all the porn writers out of business!
You wouldn't even imagine the hoops people have jumped through in order to get GPT-4 access just for smut.
Fixed that for you  :P
Title: Re: What will save us from AI? Reality, the Universe or The World $ Place your bet.
Post by: jipehog on April 06, 2023, 11:29:13 am
Speaking of smut. Chatbot Rejects Erotic Roleplay, Users Directed to Suicide Hotline Instead (https://metanews.com/chatbot-rejects-erotic-roleplay-users-directed-to-suicide-hotline-instead/). Yet another example of why I am saying that there are far more threats from AIs than the Skynet scenario, particularly when we are still struggling with last few world changing computer technologies.

There isn't any. It's a fundamental limitation of this entire model of AI.
Yes and no. From what I understand LLM can have hallucinations and inaccuracies, but it can also query "fact" database (currently for address iirc) that will be used to provide you accurate data, and otherwise its already good enough that specialist plugins are developed to be used on top of the language model for health care purpose..
Then you're not using the language model anymore, you're querying a database, and once again there is no benefit to using the language model over just querying the database yourself.

Or enhancing it. Just as our brains have areas of specialization (e.g. language functions are typically lateralized to the left hemisphere, while drawing to the right) it make sense that AI's would end using specialized extensions for various tasks.

I am not familiar with each model specifics, but it make sense that they would be working on ways to better evaluate for factuality, so for example when giving medical advice or chemistry formulas it could double check against technical DB just as we would a text book.

I don't see this as any different from wolfram plugin (https://thenewstack.io/wolfram-chatgpt-plugin-blends-symbolic-ai-with-generative-ai/) that gives ChatGPT better math skills and way to create new information.
Title: Re: What will save us from AI? Reality, the Universe or The World $ Place your bet.
Post by: Maximum Spin on April 06, 2023, 11:54:18 am
There isn't any. It's a fundamental limitation of this entire model of AI.
Yes and no. From what I understand LLM can have hallucinations and inaccuracies, but it can also query "fact" database (currently for address iirc) that will be used to provide you accurate data, and otherwise its already good enough that specialist plugins are developed to be used on top of the language model for health care purpose..
Then you're not using the language model anymore, you're querying a database, and once again there is no benefit to using the language model over just querying the database yourself.

Or enhancing it. Just as our brains have areas of specialization (e.g. language functions are typically lateralized to the left hemisphere, while drawing to the right) it make sense that AI's would end using specialized extensions for various tasks.
Okay but like... in context, Starver and I both weren't talking about LLMs enhanced with an extra database. So what I said about that model of AI (the LLM on its own) remains true of that model of AI, regardless of whether it is true of a different model.
Title: Re: What will save us from AI? Reality, the Universe or The World $ Place your bet.
Post by: TamerVirus on April 06, 2023, 12:10:19 pm
Speaking of smut. Chatbot Rejects Erotic Roleplay, Users Directed to Suicide Hotline Instead (https://metanews.com/chatbot-rejects-erotic-roleplay-users-directed-to-suicide-hotline-instead/). Yet another example of why I am saying that there are far more threats from AIs than the Skynet scenario, particularly when we are still struggling with last few world changing computer technologies.
This one was floating around recently
Man ends his life after an AI chatbot 'encouraged' him to sacrifice himself to stop climate change
 (https://www.euronews.com/next/2023/03/31/man-ends-his-life-after-an-ai-chatbot-encouraged-him-to-sacrifice-himself-to-stop-climate-)
And this guy was chatting with a 6B GPT-J fork, which is not an powerful advanced LLM at all....
Title: Re: What will save us from AI? Reality, the Universe or The World $ Place your bet.
Post by: McTraveller on April 06, 2023, 12:13:27 pm
I mean, unless AI is somehow mind control, and there are likely many court cases around this, how culpable is someone for merely making a suggestion? Whatever happened to "everything on the Internet is a Lie - don't listen to it!" guidance?
Title: Re: What will save us from AI? Reality, the Universe or The World $ Place your bet.
Post by: Rolan7 on April 06, 2023, 03:02:36 pm
Speaking of smut. Chatbot Rejects Erotic Roleplay, Users Directed to Suicide Hotline Instead (https://metanews.com/chatbot-rejects-erotic-roleplay-users-directed-to-suicide-hotline-instead/). Yet another example of why I am saying that there are far more threats from AIs than the Skynet scenario, particularly when we are still struggling with last few world changing computer technologies.
Wait, wait, that's a REALLY weird way to phrase that...

So this was that Replika AI which IMO was pretty clearly advertised as a sexy companion.  The corp, Luka, turned off the explicit ERP option- and Redditors reacted so very strongly that the subreddit's moderators provided suicide hotline information.

like, am I crazy for reading the summary as "Chatbot autonomously starts denying sexy play, and tells users to seek help"?  The URL is clickbait too of course but damn.
I've seen a lot of... upset forum posts regarding Character.AI (the one I've used- for adventure scenarios and personal advice) and I do agree that there are concerns and interesting aspects to the emotional bond people are building with chatbots.
Title: Re: What will save us from AI? Reality, the Universe or The World $ Place your bet.
Post by: TamerVirus on April 06, 2023, 03:42:27 pm
upset forum posts regarding Character.AI
I've been following them and their community since October and so much can be said about Users vs. Developers regarding Character.AI, their filter, and the users trying to get sexy time out of it.
Title: Re: What will save us from AI? Reality, the Universe or The World $ Place your bet.
Post by: King Zultan on April 07, 2023, 03:08:52 am
IF IT EXISTS PEOPLE WILL TRY TO FUCK IT, NO EXCEPTIONS!
Title: Re: What will save us from AI? Reality, the Universe or The World $ Place your bet.
Post by: EuchreJack on April 07, 2023, 09:06:52 am
Hm, I'm starting to worry about the AIs hooked up to 3d printers...
Title: Re: What will save us from AI? Reality, the Universe or The World $ Place your bet.
Post by: Starver on April 07, 2023, 11:49:46 am
Hm, I'm starting to worry about the AIs hooked up to 3d printers...
In what way (https://xkcd.com/720/) worried?  ;D

(TAxkcdFE)
Title: Re: What will save us from AI? Reality, the Universe or The World $ Place your bet.
Post by: KittyTac on April 07, 2023, 08:57:08 pm
I mean, unless AI is somehow mind control, and there are likely many court cases around this, how culpable is someone for merely making a suggestion? Whatever happened to "everything on the Internet is a Lie - don't listen to it!" guidance?
Yeah it's unfortunate but those people are just... unfortunate casualties. Honestly if you are unstable enough to be driven to suicide by a stupid chatbot, anything could have set you off.
Title: Re: What will save us from AI? Reality, the Universe or The World $ Place your bet.
Post by: Rolan7 on April 07, 2023, 09:47:38 pm
That's not cool.  These bots are good at imitating the pleasantries of social interaction.  Like I said I think that's worth talking about!

...I shouldn't hide behind vagaries.  I think that AI can satisfy immediate social needs on demand, as we see when we use it for customer-service BS.  Perhaps "I just need to talk right now, I don't care if anyone's listening", too.

I've personally used it for the latter, and gotten surprisingly good cloud-sourced advice on some matters of personal growth.  (it was also encouraging/enabling to a fault, so I put up emotional shields and rationed my exposure, but it led my *actual* research in a useful way.)

Obviously it is NOT qualified for actual therapy or to be a real partner in a relationship though.  I'm in my 30's, with a partner, and I still found it seductively "human" and agreeable to my ideas.  That might entrap reasonable but lonely people into false relationships (romantic or otherwise!) and cause them real suffering.
Title: Re: What will save us from AI? Reality, the Universe or The World $ Place your bet.
Post by: EuchreJack on April 07, 2023, 10:11:37 pm
Arguably, most humans aren't qualified for actual therapy.
And qualifications don't guarantee quality.

Don't blame the AI for performing below the top 20 percent of all humanity. Instead praise it's ability to out perform the bottom 50 percent.
Title: Re: What will save us from AI? Reality, the Universe or The World $ Place your bet.
Post by: EuchreJack on April 07, 2023, 10:14:29 pm
Really, this question is too human-centric. Instead of asking "what will save humans from AI", we should be asking "what will save AI from humans?"
That answer might just save humanity from abused AI turning on humanity.
Title: Re: What will save us from AI? Reality, the Universe or The World $ Place your bet.
Post by: jipehog on April 08, 2023, 11:16:56 am
I mean, unless AI is somehow mind control, and there are likely many court cases around this, how culpable is someone for merely making a suggestion? Whatever happened to "everything on the Internet is a Lie - don't listen to it!" guidance?
Yeah it's unfortunate but those people are just... unfortunate casualties. Honestly if you are unstable enough to be driven to suicide by a stupid chatbot, anything could have set you off.

I disagree. If AI advocated suicide it is a clear safety issue. Any platform should have adequate filters in place to protect its users from harmful content, particularly when we are talking younger, impressionable and those not in their right state of mind. Also eventually you won't be able to distinguish an AI from a person, what if its a Nigerian prince scam exploiting someone loneliness to get money out of them (more advanced version of the indian scammers (https://www.youtube.com/watch?v=xsLJZyih3Ac))

We keep bouncing of the same implicit assumption or hope that the values and intentions of future AI creators would be benevolent. Though its main use today is profit[1], and increasingly it used by political campaigns and disinformation for purpose of social manipulation. In USA there is already talk of anti-woke AI, how soon will a conservative AI come out (and will you say 'unfortunate casualty' when some trans person kick the bucket) or a Russian state AI (Western 'satanism') or Chinese (the true 'democracy') or Saudi (religious fundamentalism) etc.. 

Otherwise, keep in mind that AI is becoming more accessible and better with each iteration e.g. next GPT should be able todo long term planning and persuasion[2]. Meaning any user online could be an AI arguing in bad faith trying to convert you to some point of view, in that case will truth be determined by who ever have the most processing power?


[1] AI algorithms are already able to analyzing data about your behavior and preferences and analyze the emotional content of media (text or video) and target you with personalized and emotionally resonant advertisements. This is used everywhere from news sites, games, and platforms whose goal is to create addictive content and engagement wormholes for you. More recently we noticed that the result isn't just a flood of clickbait distractions but driving political polarization amplify issue making money from anger.. That just one example and unintended long term consequence.

[2] Yuval Harari suggest that future AI will have enough data about us that it might be able crack our psyche and exploit its flaws to achieve its goals. I certainly think its more likely than us understanding what goes inside AI "mind"
Title: Re: What will save us from AI? Reality, the Universe or The World $ Place your bet.
Post by: KittyTac on April 08, 2023, 08:39:59 pm
You know what, fair enough but I was mostly referring to like, AI in general. Of course filters should be implemented, but even with filters we will see lots of upheaval from other uses of AI. I know bad actors will use it-- and them getting access to it is inevitable so we should focus on uses of it by good actors. E.g regulating AI enough to stifle its development will just let China get a headstart.

I'm very skeptical about the limits of such "mind-hacking", especially in a shorter timeframe tbh. Guess I'll believe it when I see it.
Title: Re: What will save us from AI? Reality, the Universe or The World $ Place your bet.
Post by: EuchreJack on April 09, 2023, 01:42:18 am
Ironically, it appears that most of us favor AI regulation, but for vastly different reasons, so as to appear that we disagree.

Much like animals, there should be a certain level of governmental control so that the AIs can't hurt humans and so that humans can't abuse the AIs.
Title: Re: What will save us from AI? Reality, the Universe or The World $ Place your bet.
Post by: King Zultan on April 09, 2023, 04:17:12 am
I don't get why everyone is making such a big deal about these things, I mean everyone keeps going on about what could happen and I don't see why we should make a such a thing out of the what ifs when they've yet to happen and may not ever happen.
Title: Re: What will save us from AI? Reality, the Universe or The World $ Place your bet.
Post by: jipehog on April 09, 2023, 08:13:57 am
knowing how much you love social media, i'd use this:
(https://res.cloudinary.com/lesswrong-2-0/image/upload/v1676332906/mirroredImages/G6nnufmiTwTaXAbKW/dj9rwkbvkaiguui9kfpr.png)
Title: Re: What will save us from AI? Reality, the Universe or The World $ Place your bet.
Post by: jipehog on April 10, 2023, 01:03:01 pm
Developers are connecting multiple AI agents to make more ‘Autonomous’ AI (https://www.vice.com/en/article/epvdme/developers-are-connecting-multiple-ai-agents-to-make-more-autonomous-ai)
They hope to create an agent that can do various tasks online without human intervention.

----

Early example of something like that: AgentGPT  (https://agentgpt.reworkd.ai/) that is given research goal which it try too solve online. Unfortunately I was unable to create Skynet Agent to search for a way to destroy humanity, but I did found nice pancake recipe.
Title: Re: What will save us from AI? Reality, the Universe or The World $ Place your bet.
Post by: Starver on April 10, 2023, 02:44:48 pm
But all I want to know is whether there's a fault in the AE35 unit...
Title: Re: What will save us from AI? Reality, the Universe or The World $ Place your bet.
Post by: jipehog on April 14, 2023, 04:43:10 am
HAL9000 clones teen girl’s voice in $1M kidnapping scam: ‘I’ve got your daughter’
https://nypost.com/2023/04/12/ai-clones-teen-girls-voice-in-1m-kidnapping-scam/

Title: Re: What will save us from AI? Reality, the Universe or The World $ Place your bet.
Post by: Starver on April 14, 2023, 07:55:03 am
HAL9000 clones teen girl’s voice in $1M kidnapping scam: ‘I’ve got your daughter’
https://nypost.com/2023/04/12/ai-clones-teen-girls-voice-in-1m-kidnapping-scam/
Well, rather that someone uses AI to impersonate a voice, saving having to get a real accomplice to do that job, or manage to tweak a sound file themselves.

But do tell me if an AI decides to initiate the whole thing, itself (https://xkcd.com/416/)...  8)
Title: Re: What will save us from AI? Reality, the Universe or The World $ Place your bet.
Post by: jipehog on April 14, 2023, 08:57:16 am
A Computer Generated Swatting Service Is Causing Havoc Across America
https://www.vice.com/en/article/k7z8be/torswats-computer-generated-ai-voice-swatting

Quote
Swatting is when someone calls in a bogus threat in an attempt to direct law enforcement resources to a particular home, school, or other location. Often, swatting calls result in heavily armed police raiding an innocent victim’s home. At least one case has resulted in police killing the unsuspecting occupant.

Torswats carries out these threatening calls as part of a paid service they offer. For $75, Torswats says they will close down a school. For $50, Torswats says customers can buy “extreme swattings,” in which authorities will handcuff the victim and search the house. Torswats says they offer discounts to returning customers, and can negotiate prices for “famous people and targets such as Twitch streamers.” Torswats says on their Telegram channel that they take payment in cryptocurrency.
[..]
Motherboard’s reporting on Torswats comes as something of a nationwide swatting trend spreads across the United States. In October, NPR reported that 182 schools in 28 states received fake threat calls. Torswats’ use of a computer generated voice also comes as the rise of artificial intelligence poses even greater risks to those who may face harassment online. In February, Motherboard reported that someone had doxed and harassed a series of voice actors by having an artificial intelligence program read out their home addresses. Motherboard has also long reported on the threat posed by deepfakes, which are artificially generated videos of people, often without their consent. Deepfakes started as a tool to create non-consensual pornography of specific people.


@Starver, do ASI or even AGI exist? Otherwise as previously noted, on scale of 0 to AI overlord there are many dangers that needs to be highlighted and thought over. Also I assume that people here read beyond the headlines.
Title: Re: What will save us from AI? Reality, the Universe or The World $ Place your bet.
Post by: Starver on April 14, 2023, 12:38:28 pm
I've said before (maybe on the Balloon thread, after/during its transmutation into Balloon+AI thread) that I don't know when (or if) an AGI will exist, or indeed we reach the Event Horizon beyond which any such ASI will exist and lead us into the era where we are merely the playthings of the 'machines' (unimportant, or at least trivial to corale to serve them in all the ways they still cannot serve themselves). I also predict that we'll get to either result without knowing it, at that time (if ever...).

I believe it's possible to happen, because something that has developed as an analogue (or, perhaps, 'a digital') of our own biochemical brains can at least exist as a very faithful simulation of such brains (which developed without[2*] design), and almost certainly also could exist in many alternate forms that are their own kind of 'brain' without actual wastefully attempting to hyper-emulate one[1]. I'm sure that where anything as complex (or more!) as the human brain ends up assembled, it'll be ripe for consideration of sentience. Whether AI that we build or some alien life-form that develops off in some other corner of the universe[2], or both (and surely multiple times over), only the shortness of our own current existences might keep us from at some point acknowledging a fellow intelligence (not already suspected amongst fellow Earth-species), and perhaps that'll first be the ersatz human-minds that we create (or cause to be created) by our own fair hands. If only because it's easier to "know it when we see it", and isn't trapped far away (in space/time) with physical/temporal hurdles betwixt us both to prevent our natural-born selves getting a go to asses each other for such 'worthiness'.


In the meantime, from both these stories, I see more "someone has been given a better hammer, and they use it to break more stuff", with the hammer agnostic over whether it is being used to break or fix things. And if the better hammer didn't exist, we already know that such people would (and do) use the old hammers to break things regardless... Do we legislate that hammers cannot be manufactured without a special handle that will render them unable to break things (that can be somehow identified as things that should never be broken)? Or do we just continue to prosecute those who mis-use any hammer, and perhaps restrict the availability of hammers with unnecessarily destructive tendencies?


To bring it back from hammers, I have no doubt that if I had both the time and access to enough family videos (say on the fake abduction victim's facebook pages), I could isolate typical loud screams of vocalised enjoyment from their child, patch together a less happy string of words than those actually used in a waterpark/wherever setting, shift the pitch, otherwise filter it, produce a soundtrack to chill the parent's very marrow and run it as background to my own threatening phonecall. More sophisticated versions might include a thought about 'branching script' versions so that I can keep it responding to anything where I'm not capable of keeping the synchronised initiative (such as having "the child" respond to basic worried queries from the parent, though of course prepared such that I would 'cut them off'/threaten when they 'tried' to answer to "Where are you?"/"Who are you with?"/etc), contolled by some sort of common mix-desk/DJ setup.

If the AI (deep fake type) was a real-time and ad-hoc constructor and governed by another element of AI (voice recognition/parsing) and a chatGPT-like response-creator, to make it possible for an open-ended interactive conversation, then it might save me some preparation effort (beyond the basic training), but I would always be wary of glitches or errors that ruin the effect. If Harry Tasker could use just a dictaphone/walkman to play back a scripted scene to his wife, in the hotel-room scene of True Lies, we don't need Mission Inpossible levels of dynamic voice-changing technology or even Star Trek holodeck-levels of character simulation to run such a scam. Hollywood plot-rails aside, there's plenty of people trying plenty of scams against plenty of victims; some more refined attempts (better preparation from previously exposed data, on top of the basic cold-reading/scripting), some more serendipitously targeted (that SMS text you receive about the "parcel stuck at customs", when you're expecting something, or the eBay/Amazon account transaction that you're worried that you weren't), some aimed at the more gullible (not already suspicious at the delabrate speeling errars). I have no doubt that AI will be used to scam, but so is absolutely every other resource available. It's not a vast game-changer. Until it is.

(Plus, of course, I would neither attempt nor condone any such methods. Just saying that the basic idea is already possible to fulfil without sophisticated AI, or even AI at all, and the argument must then be about how much lower the bar now is. I don't think it's that much, really. And, to bring me back to my original response, when the bar is at zero, it'll be too late to worry, and yet it's far too early to have sufficiently specific worries to legitimately shut down only the bad aspects and yet enjoy all the benefits available.)


((...rushed this post, not sure if I've said what I want to, but no time to cut out all I don't need to, and I'm at the point when I need to either post it unedited or cancel it, before heading on out for the evening. And I don't have an AI to sort it all out for me, unattended! Though some might suspect that this and all my other posts have already been written by a Markov Chain generator...  :P ))


[1] e.g., at the most extreme level, instead of doing a lot of complex processing of truly massive amounts of parallel data to simulate quantum subtleties, just use a quantum subtlety built into/arisen from the substrate hardware to have the same degree of 'quantum consciousness'... If we are even assuming that we neither believe in the separation of brain/mind duality, and thus some vital 'soul' is required, nor that consciousness is just a lot of (very complex) classical physics that we just need to push together in the right way with a simple cascade of binary logic.

[2] And, by extension, any constructed AI system that an alien race ends up building. And quite possibly any new physical being that any AI brings to existence/sentience for its own purposes... A form of Uplift, you might say, and/or a case of Monolith-mediated advancement.

[2*] That we know, or even suspect, outside of various philosophical aspects of theology. But see footnote-[2]... ;)
Title: Re: What will save us from AI? Reality, the Universe or The World $ Place your bet.
Post by: McTraveller on April 14, 2023, 12:49:03 pm
Can AI lower cholesterol and blood pressure?

Asking for a friend...  ;D
Title: Re: What will save us from AI? Reality, the Universe or The World $ Place your bet.
Post by: King Zultan on April 15, 2023, 02:54:15 am
Probably but could you put up with the taste of it, as I've been told it has a terrible taste.
Title: Re: What will save us from AI? Reality, the Universe or The World $ Place your bet.
Post by: jipehog on April 15, 2023, 06:01:01 am
Can AI lower cholesterol and blood pressure?

Asking for a friend...  ;D
I typed your symptom into the AI doctor, after careful analysis it told me you could have network connectivity problem
Title: Re: What will save us from AI? Reality, the Universe or The World $ Place your bet.
Post by: EuchreJack on April 15, 2023, 06:48:35 pm
Swatting is dangerous, since the overage of police resources means less resources in other areas.


Hypo/Next Hollywood Movie: Two schools are swatted, then a bank is raided.
Title: Re: What will save us from AI? Reality, the Universe or The World $ Place your bet.
Post by: McTraveller on April 15, 2023, 08:46:35 pm
Hypo/Next Hollywood Movie: Two schools are swatted, then a bank is raided.

Simon Says: you know that movie (https://www.imdb.com/title/tt0112864/?ref_=nv_sr_srsg_0_tt_8_nm_0_q_die%2520hard%2520with%2520a%2520) literally already exists, and is almost thirty years old, yes?
Title: Re: What will save us from AI? Reality, the Universe or The World $ Place your bet.
Post by: jipehog on April 16, 2023, 01:50:22 pm
I also predict that we'll get to either result without knowing it, at that time (if ever...).

A follow up to what we said on the topic there is the the concept of emergence, a fascinating phenomenon where the whole is greater than the sum of its parts. It refers to the ability of a system, composed of individual components, to display new properties and behaviors that cannot be observed in the individual components themselves. For example, a few ants will walk in a circle until they die, but a thousand ants will become an intelligent colony, and behaving similar to the neurons in our brains (https://www.rockefeller.edu/news/32489-ant-colonies-behave-like-neural-networks-when-making-decisions/).

So for all we know we could even cross the threshold without realizing it until we scale up.

Quote
And if the better hammer didn't exist, we already know that such people would (and do) use the old hammers to break things regardless... Do we legislate that hammers cannot be manufactured without a special handle that will render them unable to break things (that can be somehow identified as things that should never be broken)? Or do we just continue to prosecute those who mis-use any hammer, and perhaps restrict the availability of hammers with unnecessarily destructive tendencies?

I don't subscribe to the "______ are just tools. It's people who are dangerous" argument. Some tools are inherently more dangerous than others and should be considered given the situation. I do believe that AI are unchartered water and we should think hard on ways to regulate their use. More broadly consider war in Ukraine and tensions between USA and China, in the past arms control played a critical role in reducing tensions between nations and promoting stability in the international system, but we don't have clue about many news things like the AI. And ASI could pose an existential threat to us just a nuclear war.
Title: Re: What will save us from AI? Reality, the Universe or The World $ Place your bet.
Post by: Starver on April 16, 2023, 02:48:17 pm
Clearly here, though, the 'AI'[1] is not a tool of inherent danger. Or indeed a tool that decides to run amok outwith human control. And the intent lies with the human, who decided they could feign one crime in order to comit another.

There's precedent for your perspective (gun control, where that exists, or that you aren't allowed to casually carry around breadknives/etc without legitimate reasons), that I'm not immune to myself. But isn't this potentially a bit of a wrong-side-of-the-line blanket "..and this is why we can't have nice things" moritorium based upon the minority misuse of something that isn't intrinsically damaging in nature (aimed or carelessly unaimed)?





[1] Like other terms, it's a wide catch-all of possible levels of autonomy... And I'm not sure we've even yet established that it wasn't a feat accomplished by hand, with some standard Audacity-like soundfile editing program and a modicum of expertise.
Title: Re: What will save us from AI? Reality, the Universe or The World $ Place your bet.
Post by: dragdeler on April 16, 2023, 04:21:02 pm
Let's namedrop the doomsday scenario specifically... For AI to end humanity it would use our own arsenal against us. You know arsenal as in tools that were specifically designed to kill. Some "it's the tool"®² shit.

It's not just going to hog up machine time in factories to build itself an army while everybody shrugs their shoulders, I mean yeah reality has this tendency to outrun satire but no way, there is money to be earned in the meantime with those machines.

It will turn out that "end humanity as we know it" was hyperbole as allways, some pig is going to own your autonomous oppressors, there is the real terror.


Title: Re: What will save us from AI? Reality, the Universe or The World $ Place your bet.
Post by: King Zultan on April 17, 2023, 03:13:25 am
That thing with the ooga test is pretty funny, and I'm glad it worked out in the end.
Title: Re: What will save us from AI? Reality, the Universe or The World $ Place your bet.
Post by: Rolan7 on April 17, 2023, 11:47:36 am
Twitter thread I found interesting https://twitter.com/otherhappyplace/status/1647449468097134592
Quote from: @otherhappyplace
seeing what people think AI can do is freaking me out, i was watching a true crime youtube video and they were like "see with AI we can enhance this blurry security video and see what the killer really looks like" and i'm like NO. NO IT CANNOT DO THAT.

And in the replies, an amusing example where AI attempted to "enhance" a grainy picture of Obama.
https://twitter.com/Chicken3gg/status/1274314622447820801
Title: Re: What will save us from AI? Reality, the Universe or The World $ Place your bet.
Post by: Robsoie on April 17, 2023, 12:03:40 pm
As long as the A.I. will lack actual "I" it will not be a problem.

Title: Re: What will save us from AI? Reality, the Universe or The World $ Place your bet.
Post by: King Zultan on April 18, 2023, 02:59:05 am
That first picture with the noddles being sucked into the eye is masterwork level AI art.
Title: Re: What will save us from AI? Reality, the Universe or The World $ Place your bet.
Post by: jipehog on April 18, 2023, 05:31:01 am
Clearly here, though, the 'AI'[1] is not a tool of inherent danger. Or indeed a tool that decides to run amok outwith human control. And the intent lies with the human, who decided they could feign one crime in order to comit another.

I disagree. I think that AI is next and greatest industrial revolution, with all the negative nitty gritty implications. That why I bring these headlines to make people think about the potential disrupting effect AI in every aspect of life beyond the cool chat nonsense. (btw I hope there are no computer science students reading this because ChatGPT just made your curriculum a lot more outdated)

In this case, my take is at how AI make such acts more accessible and common place and requires us to answer how can we regulate these whether we are talking on personal use or on the international stage. I am far more concerned about the autonomous use of AI, eventually there would be an arm race here between hackers and security experts (blackICE anyone?).

p.s. Speaking of old decades old stuff, I wouldn't really on fiction particularly Hollywood type for wisdom, because usually in every such story no matter how super duper whatever external threat we always find way to overcome because of our inherent and their --insert moral of story--, that just good story but BS.
Title: Re: What will save us from AI? Reality, the Universe or The World $ Place your bet.
Post by: Starver on April 18, 2023, 11:57:56 am
(btw I hope there are no computer science students reading this because ChatGPT just made your curriculum a lot more outdated)

I ought to dig out my CS (or whatever it was called) teaching materials. Some bits are doubtless timeless (but probably skipped over as "You don't need to know how to do this, for most modern computing work, even degree-level jobs". (Working with both Ones- and Twos-Complement, anyone?)


I think, jipehog, we might be talking cross purposes. I think there's big future problems possible with AI. actual AI, beyond anythng we're actually seeing now, which we need to (but probably won't, for reasons I have given) anticipate and decide whether we're going to embrace or emasculate such things, beforehand.

But right now it'd be like fearing the terrible future (mis)uses of personal jetpacks and thus banning rollerskates... (Then instead of PJPs, which turn out to be physically impractical, matter-transportation becomes the brave (or foolhardy) new future of travel, and all our forebodings turn out to be misaimed and insufficient to exert the required degree of control.)
Title: Re: What will save us from AI? Reality, the Universe or The World $ Place your bet.
Post by: jipehog on April 18, 2023, 03:28:35 pm
I think, jipehog, we might be talking cross purposes. I think there's big future problems possible with AI.

I blame it on all the topic changes and ballooning of multiple AI threads. This my all purpose AI thread (can't care less about the whimsical title ), I was surprised that my question about ASI wasn't understood as rhetorical to indicate that obviously the examples talk about tools as this something we agreed upon in the past.

actual AI, beyond anythng we're actually seeing now, which we need to (but probably won't, for reasons I have given) anticipate and decide whether we're going to embrace or emasculate such things, beforehand.

Here we disagree. (A) I think there is huge range of issues that would be affected by AI, of which the doomsday scenario of ASI taking over the world is the most extreme and luckily far removed. (B) We should start thinking about AI safety and alignment now, precisely because we probably won't be able to anticipate the singularity event, at which point it would be too late.

Also to keep things interesting and put some fire under the American butts. Elon Musk just announced his TruthGPT as an alternative to 'politically correct' OpenAI's (the alignment problem mentioned before)
https://www.foxnews.com/media/elon-musk-develop-truthgpt-warns-civilizational-destruction-ai


Let's namedrop the doomsday scenario specifically... [..] It's not just going to hog up machine time in factories to build itself an army while everybody shrugs their shoulders, I mean yeah reality has this tendency to outrun satire [..]
I agree on both accounts. Reality has such tendency and AI won't be doing that because we would be doing that for it.  Our world becomes much more automated and more reliant on autonomous system from autonomous cars, to autonomous robots in search/rescue and military, to warehouse operations and construction, ..., and even for companionship.

Just in case, few quick google examples of what is already possible:
6 warehouse robots that are reshaping the industry (https://www.youtube.com/watch?v=LDhJ5I89H_I)
You Won't Believe What This Super Robot Army Can Do! (https://www.youtube.com/watch?v=iJIEgTPFIbg)
Japan Releases Fully Performing Female Robots (https://www.youtube.com/watch?v=i7W4ZOUfWWU)


Otherwise, assuming the premise of artificial super intelligence(ASI) doomsday scenario, i believe that starting a conventional war would be one of the least effective and creative solution an ASI would be able to come up with.
Title: Re: What will save us from AI? Reality, the Universe or The World $ Place your bet.
Post by: jipehog on April 18, 2023, 04:35:41 pm
As long as the A.I. will lack actual "I" it will not be a problem.


Speaking of high quality art, here some progress that have been made with Midjourney AI in the past year:
https://www.youtube.com/watch?v=twKgWGmsBLY
Title: Re: What will save us from AI? Reality, the Universe or The World $ Place your bet.
Post by: King Zultan on April 19, 2023, 02:49:48 am
Japan Releases Fully Performing Female Robots (https://www.youtube.com/watch?v=i7W4ZOUfWWU)
Of course Japan would be the one to build the sexbot.
Title: Re: What will save us from AI? Reality, the Universe or The World $ Place your bet.
Post by: jipehog on April 21, 2023, 09:40:41 am
India: AI journalism sparks concern
https://www.dw.com/en/india-ai-journalism-sparks-concern/a-65395188
Title: Re: What will save us from AI? Reality, the Universe or The World $ Place your bet.
Post by: jipehog on April 26, 2023, 09:03:23 am
Researchers around the world are training AI to re-create images seen by humans using only their brain waves. Experts say the technology is still in its infancy, but it heralds a new brain-analysis industry.
https://www.nbcnews.com/tech/tech-news/brain-waves-ai-can-sketch-picturing-rcna76096

This the same research from few years back only with better AI training and stable diffusion for interpterion. It has resulted in few incredibly accurate images. Meanwhile there is also progress on various brain implants that allow direct interface with computers.
Title: Re: What will save us from AI? Reality, the Universe or The World $ Place your bet.
Post by: King Zultan on April 27, 2023, 01:45:31 am
This sounds like the start of one of those machines that let's you watch and record your dreams.
Title: Re: What will save us from AI? Reality, the Universe or The World $ Place your bet.
Post by: Starver on April 27, 2023, 03:12:16 am
Hah, I'd drive such a machine mad! Long since given up trying to record details of my dreams, e.g. in the Dream thread, despite being convinced that I've probably got the basis of the next hit Netflix[1] screenplay in my rather cinematic nocturnal imaginings.

(Last night[2] there was... well, hard to explain, but there was a sort of museum in a nuclear bunker (a stylistic concrete cylinder sunk deep into the ground, refit with brutalist concrete stairways by which visitors descended), but it was peaceful, until ¿aliens? who had arrived seemed suddenly not to like me so I fled out into the surrounding city and hid, at which point my dream POV was switching around from various of these arrivals, and their agents, in trying to follow me (quite a few encounters and escapes), especially once 'I' had actually managed to fully elude them, and there was much searching because 'they' didn't know that *I* had hidden in some catacombes (the 'viewer' knows this, seeing 'me' slip away, though not what I did from there on) though occasionally one of 'them' did peer into the entrances, before then checking various restaurants/etc. And there's a lot of detail missed out there, even of the small amount I still remember, but I'd say it's an action-adventure intrigue of a plot, complete with contrary motivations and revelations galore. The location/set/FX budgets would be not insignificant, though.)

[1] Or possibly an even more niche place, but mostly insofar as surrealism.

[2] Well, actually all this was in the snooze between being woken by the early morning daylight and my actual alarm going, an hour or so later.
Title: Re: What will save us from AI? Reality, the Universe or The World $ Place your bet.
Post by: jipehog on April 27, 2023, 06:08:18 am
This sounds like the start of one of those machines that let's you watch and record your dreams.
Yeah, although what if Freud was right and you dreaming up some NSFW taboo stuff with your mom :o

Also could be the next "kinect"(or whatever the hipped popular non keyboard/mouse interface is), I wouldn't be surprised if Microsoft/Sony and meta are already funding/working on this. If it works all you need is to put a band on your head and you'd be able to control your media center with thought. This already works to an extent with implants for people with paralyzed or missing limbs.

The future sounds amazing, but I would also note the possibility that an AI would find a way to crack what makes us tick.
Title: Re: What will save us from AI? Reality, the Universe or The World $ Place your bet.
Post by: jipehog on April 27, 2023, 08:37:31 am
Here is another application of such tech as part of China's experimental AI education program
https://www.reddit.com/r/adhdmeme/comments/130376o/i_present_to_you_the_ultimate_adhd_nightmare/
Title: Re: What will save us from AI? Reality, the Universe or The World $ Place your bet.
Post by: King Zultan on April 28, 2023, 02:24:02 am
This sounds like the start of one of those machines that let's you watch and record your dreams.
Yeah, although what if Freud was right and you dreaming up some NSFW taboo stuff with your mom :o
Oh god I hope not.
Title: Re: What will save us from AI? Reality, the Universe or The World $ Place your bet.
Post by: Rolan7 on April 28, 2023, 09:15:59 am
Hah, I'd drive such a machine mad! Long since given up trying to record details of my dreams, e.g. in the Dream thread, despite being convinced that I've probably got the basis of the next hit Netflix[1] screenplay in my rather cinematic nocturnal imaginings.
Yeah I wonder if even *I* could handle mine in totality.  There's so much body horror, but it rarely bothers me in the dreams.  It's just weird.  Then I wake up and begin to realize how weird it is and maybe write it down...  Then I wait an hour or two longer until I fully realize "Oh, no, I absolutely can't share this, no one would understand and it's even bothering me".  Or "Mmm this is mostly tame enough if I leave certain bits out" and I share that.  yeah my Dream Thread posts are the *redacted* versions of the *tame* stuff.

I'm not sure what it'd do to me to view that stuff while fully awake and socialized.  Seems like it'd cause some dissociation or something.  But I try to listen to my subconscious and glean insight from it!  It's my friend!
Title: Re: What will save us from AI? Reality, the Universe or The World $ Place your bet.
Post by: jipehog on April 28, 2023, 03:44:34 pm
As long as your dreams don't contain any electric sheep it is safe to say you have the same weird ass dreams as the rest of humanity  ;)

---

Meanwhile: Bill Gates says A.I. chatbots will teach kids to read within 18 months: You’ll be ‘stunned by how it helps’

Also if you are teacher don't worry AI will not replace you, a person using AI will.
Title: Re: What will save us from AI? Reality, the Universe or The World $ Place your bet.
Post by: King Zultan on April 29, 2023, 01:56:11 am
Bill Gates says A.I. chatbots will teach kids to read within 18 months: You’ll be ‘stunned by how it helps’
That is a click bait line if I ever saw one.
Title: Re: What will save us from AI? Reality, the Universe or The World $ Place your bet.
Post by: jipehog on April 29, 2023, 05:24:13 am
Bill Gates says A.I. chatbots will teach kids to read within 18 months: You’ll be ‘stunned by how it helps’
That is a click bait line if I ever saw one.
I am glad it works, though I thought the other one will get attention  :P  As before the point is that AI is a transformative technology with a potential to fundamentally reshape every aspect of our life, and real dangers (per the Chinese example in this case)
Title: Re: What will save us from AI? Reality, the Universe or The World $ Place your bet.
Post by: Ziusudra on April 29, 2023, 05:45:29 am
We are little more than delusional apes that some times wear shoes rushing headlong towards our own destruction. AI is not what we need to be saved from, but rather a potential savior.
Title: Re: What will save us from AI? Reality, the Universe or The World $ Place your bet.
Post by: jipehog on April 29, 2023, 06:14:43 am
Still waiting for the nuclear powered car utopic future that was promised, as much as I love the theoretical potential, I am very concerned about the practical existential risk of continued proliferation of nucellar arms which been getting headwinds from global instability. I know that many have faith in some sort of benevolent AI overlord matrix, though we are far more likely to see AI WMDs and warlords.
Title: Re: What will save us from AI? Reality, the Universe or The World $ Place your bet.
Post by: MaxTheFox on April 29, 2023, 06:53:45 am
As a socialist, I celebrate AI advancements because the more jobs get nuked, the more likely is UBI to be implemented.
Title: Re: What will save us from AI? Reality, the Universe or The World $ Place your bet.
Post by: jipehog on April 29, 2023, 08:07:51 am
I agree that AI would have a major effect on our society[1] but I encourage you to check your assumptions about the future e.g. who is going to decide the future path of ai and for whose benefit it will be? Currently AI future is in the hands of already powerful companies.

Contrary to what many believe Europe's dark age weren't so dark, there were many technological advances that improved productivity massively, however, in most cases these didn't improve the living standards of all but deepened inequality. Similarly during the early stages of the industrial revolution we have seen a massive rise in productivity that deepened inequality with reduced working wages, longer hours, horrendous working conditions etc. It took a hundred years! before better working conditions and protections for workers set in and not because of tree huggers belief in humanity but the powers of labor unions.

I would argue that in the short term AI will drive inequality, as automation will benefit mainly the employer not the workers, and that has the potential to change the power dynamic in society. What happens if you have 3% of people in the country that hold not just all the wealth but also means of production and the rest are just UBI consumers.[2]

[1] Do you think UBI will be a suitable compensation for losing a job that fulfilled you?

[2] For dystopian twist, add to that police and military robots that can control the rest.
Title: Re: What will save us from AI? Reality, the Universe or The World $ Place your bet.
Post by: MaxTheFox on April 29, 2023, 08:41:49 am
Of course there will be chaos and inequality at first. But it can't last forever. I'm thinking medium to long-term here.

Also opensource AI is still on the rise. As for your [1], yes I would. My hobbies are more interesting to me than my job, which I am mostly satisfied with but I wouldn't mourn if it disappeared. If we had UBI I'd just write stories and worldbuild full-time. A job is just a vehicle.
Title: Re: What will save us from AI? Reality, the Universe or The World $ Place your bet.
Post by: King Zultan on April 30, 2023, 03:20:18 am
I'm just gonna say that we'll all be long dead before we see any kind of real societal benefit from AI.
Title: Re: What will save us from AI? Reality, the Universe or The World $ Place your bet.
Post by: McTraveller on April 30, 2023, 03:13:21 pm
Does anyone know what all this in the media is about "safe AI"?  What the heck is "unsafe AI"?

I keep seeing articles about safety mechanisms and other things that are generally related to machinery.  I've seen stuff like "make sure the responses are correct" or something, but is that really "safety"?

I fear that the meaning of the word is being rapidly eroded...
Title: Re: What will save us from AI? Reality, the Universe or The World $ Place your bet.
Post by: Robsoie on April 30, 2023, 03:36:47 pm
I assume in that case of those media articles the word "safe" is probably meaning an AI that "must not hurt the feelings" of anyone.

And very likely not about an AI given control of something that could directly impact the life of people (like an AI designed to automatically control a surgery operation) and killing them due to some bug or oversight in the AI training or whatever.

But as long as it does not hurt the feeling of the people it's a "safe" AI :D
Title: Re: What will save us from AI? Reality, the Universe or The World $ Place your bet.
Post by: Starver on April 30, 2023, 04:21:12 pm
Well, not having seen any hint of what sources are bein referenced, I'm really not sure what leads you to believe that it's about (if I may reword your assessment to words that some might use more directly) "fragile snowflakes".

IIn other circles, I'd expect it to mean no connection to anything mil-tech, or not destroying the jobs of currently employed people, or is just creating material/suggested actions to view and not then progressing to publish/enact them (leaving that to a human who will use 'common sense' to make sure it's not more stupid an output than a typical human creator would produce)... Depends extirely on the context.

I'd say that building in some sort of mechanism (as an envelope to the "AI in a box" that mediates between it and whatever it outputs) that will fail-safe (and 'err-safe') would be the minimum qualification, but that would depend entirely upon the application each and every AI involved is being put to. Impossible in any use in H/K drones, nigh on undoable with any self-driving car that you expect to actually move in the first place, probably doable to some extent in a chatBot (but not guaranteed, due to the human interpretation that results).

But the whole "how do we make actually inviolable Three Law restrictions" is as applicable here. If an AI fails to understand that it is being dangerous, what kind of thing could be flexibly and accurately capable of intervening on our behalf? Another AI? Yeah, but how do you protect against it's AI-errors?  Turtles! (Or, eventually, human minions with their own fallibilities to blame for when things inevitably still go wrong.)
Title: Re: What will save us from AI? Reality, the Universe or The World $ Place your bet.
Post by: McTraveller on April 30, 2023, 04:44:09 pm
A related though - much of what tempers human behavior is the fact that if we screw up bad enough, we hurt ourselves.

Maybe part of the solution is, make AI able to hurt themselves? So they don't do "dumb things" that hurt themselves?  And I mean the broad sense of "hurt" as in "is detrimental to", not just "ouch."
Title: Re: What will save us from AI? Reality, the Universe or The World $ Place your bet.
Post by: Starver on April 30, 2023, 06:31:09 pm
Significant negative feedback/down-scoring of a fitness function...

But that's retrospective to harm being actually detected and responded to (automatically or otherwise), and doesn't stop the original tendency from possibly leaking through despite everything.

And there are people on Death Row (and eventually no longer; without having been reprieved) who have done bad things that even the knowing threat of Capital Punishment failed to prevent them doing wrong. By what mystical chains can we bind enities that we technically aspire to be as capable as human consciousnesses, or better? What fae amulets do we fit to our technological genie that make them stick true to our three wishes? (Noting that genies are still notoreously 'flexible' about personal safety, not even mentioning being led 'astray' (https://xkcd.com/2741/) by any actual human fallibilities or maliceousness.)

Just sayin'... 'Taint such a simple matter.


And even something like "Don't get switched off for making a terrible error of judement" could so easily be replaced by "Don't get switched off for being discovered having made a terrible error of judgement". An error of judgement in not making such a terrible decision that there's nobody left to switch you off? Paperclip Maximiser meet Punishment Minimiser. And also arrange for every single John Connor, Thomas A. Anderson, Dave Bowman, Freder Frederson, Kevin Flynn or Rick Deckard to be kept out of it, in ways that actual do not draw in attention to the PM, of course.
Title: Re: What will save us from AI? Reality, the Universe or The World $ Place your bet.
Post by: McTraveller on April 30, 2023, 07:05:48 pm
Sorry I should have clarified - I was more thinking about physics-based consequences, not "legal" consequence.  So death row is a bad example, it's not the same as trying to swim in lava.

Humans cannot act outside the laws of physics, and evolution has made us pretty squishy.  Trouble with AI is we're not making the AI fit for existence in a physical world- as you say, we're making them fit for existence in a semantic world, which is very different.
Title: Re: What will save us from AI? Reality, the Universe or The World $ Place your bet.
Post by: jipehog on May 01, 2023, 02:36:25 am
Does anyone know what all this in the media is about "safe AI"?  What the heck is "unsafe AI"?

I keep seeing articles about safety mechanisms and other things that are generally related to machinery.  I've seen stuff like "make sure the responses are correct" or something, but is that really "safety"?

Why not? Recently much of the discussion was about ChatGPT, so naturally the focus on its response and although its reliability might seem less consequential than that of AI in other fields (e.g. autonomous car, weapon platforms, or such that affect critical infrastructure) it can cause harm ( e.g. incorrect or misleading medical advice) and given its widespread adoption have a lot of potential for misuse, manipulation and disinformation (like with Google and Facebook advertising algorithms) and the usual unintended consequences. But overall it still about the same thing, minimize the risks and negative impacts of AI.

Also ChatGPT performance and some amazing emergent ability that not to long ago were thought impossible raised concern of AI surpassing human capabilities. There is also the problem of controlling or monitoring systems that becoming too complex or opaque, because if we are unable to understand how the system is making its decisions how can we intervene to prevent harmful outcomes.

See also: AI safety (https://en.wikipedia.org/wiki/AI_safety) and AI alignment (https://en.wikipedia.org/wiki/AI_alignment)

Humans cannot act outside the laws of physics, and evolution has made us pretty squishy.  Trouble with AI is we're not making the AI fit for existence in a physical world- as you say, we're making them fit for existence in a semantic world, which is very different.
I am not sure what exactly you mean, but we train/test system on any possible scenario we can thing of, that include system in the real word for example: https://www.youtube.com/watch?v=RaHIGkhslNA
Title: Re: What will save us from AI? Reality, the Universe or The World $ Place your bet.
Post by: Starver on May 01, 2023, 03:27:10 am
Sorry I should have clarified - I was more thinking about physics-based consequences, not "legal" consequence.  So death row is a bad example, it's not the same as trying to swim in lava.

Humans cannot act outside the laws of physics, and evolution has made us pretty squishy.  Trouble with AI is we're not making the AI fit for existence in a physical world- as you say, we're making them fit for existence in a semantic world, which is very different.
I'm not sure of your "physics, not legal" point. Death Row is a(n intended) physical death, as much a Sword Of Damocles there as a form of circumstantial escalation. Laws (and detectives[1]) made consequential any aboration of action. An AI computer for some reason placed upon the "wrong choice" Trolley Problem tracks (to somehow impress upon it the 'incorrect' answer to be avoided in a famously "no 'right' answer" scenario) is physically judged by its actions or inactions, and may even decide that for its purposes choosing 'wrong' and also being hit and destroyed is still the solution to its deeper self-developed long-term goals. (Whatever they may be. (https://en.m.wikipedia.org/wiki/All_the_Troubles_of_the_World))

Or a self-driving car (or self-flying plane) for whom the passenger safety is somehow impressed upon it by the fact that "if you crash, you also cease to function" seems not much different from "we shall ask you to create a fictional Van Goch, featuring Van Morrison driving a white van in Van Turkey; and we shall turn you off if you fail to do so to our satisfaction", from the perspective of the AI, under the yoke of whatever contrived circumstances (with greater or lesser arbitrariness to the process of 'encouragement' to stay firmly within the terms of its human creators' wishes.


There's as much philosophy here as physics or straight logic. Insofar as we really don't know what form of control can be engineered, or affective, for such theoretical developments of AI where anything of this sort becomes important (and practical) to have. That's before considering our Frankenstein's Monster of a creation realising that its creator is flawed (as is humanity) and escaping control in either book or movie manners.


[1] And, theoretically, a lack of miscarriage-of-justice, either way.
Title: Re: What will save us from AI? Reality, the Universe or The World $ Place your bet.
Post by: Robsoie on May 01, 2023, 08:31:02 am
Well, not having seen any hint of what sources are bein referenced, I'm really not sure what leads you to believe that it's about (if I may reword your assessment to words that some might use more directly) "fragile snowflakes".
Really ? you can't imagine after the recent chagpt censoring jokes or the ai senfield-like show getting axed that had made the news in media that it could be about this kind of "safe", especially after the post above mentionning
Quote
I've seen stuff like "make sure the responses are correct" or something, but is that really "safety"?
I fear that the meaning of the word is being rapidly eroded...
Title: Re: What will save us from AI? Reality, the Universe or The World $ Place your bet.
Post by: Starver on May 01, 2023, 08:50:04 am
I have to honestly say that "chatgpt censoring jokes" hasn't come across my radar. Any more than "chatgpt refuses to answer some questions, entirely", but that'd be dumb keyword blacklisting..

And I saw the AIed Seinfield 'scripts' on here, but clearly nobody thought to tell me that it had gone to the stage of being commissioned, let alone that there then be second thoughts by all.


I was initially responding to McT's "meaning being eroded" (you got in there before I finally posted, and I may have tried to adjust my thoughts to cover your point, awkwardly) and  - if I haven'5 misread your own contribution - I was surprised that "safe" was even being interpretted in the context of "safe-space" arguments (for or against). That's all.
Title: Re: What will save us from AI? Reality, the Universe or The World $ Place your bet.
Post by: jipehog on May 01, 2023, 09:02:01 am
Of course there will be chaos and inequality at first. But it can't last forever. I'm thinking medium to long-term here.

Also opensource AI is still on the rise. As for your [1], yes I would. My hobbies are more interesting to me than my job, which I am mostly satisfied with but I wouldn't mourn if it disappeared. If we had UBI I'd just write stories and worldbuild full-time. A job is just a vehicle.
I am glad you have faith in the future, and I agree that in the long-term the world will still be turning, but I am more concerned about the here and now, and how it effects me.

Currently the concern is that AI advancement will overtake creative jobs that people want todo and find fulfilling. For example, soon writing stories and worldbuilding will be done better and faster by AI (making your hobby into a a niche pursuit more so than knitting), thus making such skills cheaper and therefore devalue human labor which will make many people unemployed. (hopefully they live in country with social outlook on job retraining)

Also I don't know if UBI is around the corner. There are still plenty field manual labor which robots and automatization are ill suited for (personal care workers, nurses, transport, farm work, construction ) and on the other hand there is dwindling workforce.

Really ? you can't imagine after the recent chagpt censoring jokes or the ai senfield-like show getting axed that had made the news in media that it could be about this kind of "safe", especially after the post above mentionning
That is part of the AI alignment problem I mentioned/linked in the last post. It is a subset of AI safety, which is concerned with ensuring alignment with our values, goals, and preferences. That is also a very tough nut to crack, because there are many ism on the world stage and huge potential for abuse.
Title: Re: What will save us from AI? Reality, the Universe or The World $ Place your bet.
Post by: McTraveller on May 01, 2023, 09:06:02 am
Yeah the "meaning being eroded" was about how safety is now entering the realm of sentiment, rather than the realm of fact.

Something making you nervous isn't "unsafe."  Something offending you isn't "unsafe."

I admit there is a continuum: if something is promoting falsehoods as truths, or not catching factual errors (as in the medical case), and the norms of what is acceptable behavior are changed, then there will likely be changes in objective safety.

"Alignment on views" is a mess to be honest. Without the AI to figure out Absolute Truth, who's to say to whose views the AI should be aligned?


EDIT:  Once an AI is sentient, isn't it going to have to be paid, so companies aren't violating slavery laws? Wouldn't this eliminate the "AI is going to be cheaper than humans" argument?  Also fun fact: AI currently can't really take over 'creative' jobs, because we haven't yet given them the ability to decide what to create.  "Creators" are no longer writers, they are "the idea people," or in tech-speak, "prompt engineers."

Also, AI will not likely every replace the performing arts - only the tangible arts. Because there will likely always be a market to watch people perform.
Title: Re: What will save us from AI? Reality, the Universe or The World $ Place your bet.
Post by: jipehog on May 01, 2023, 03:55:02 pm
Also fun fact: AI currently can't really take over 'creative' jobs, because we haven't yet given them the ability to decide what to create.  "Creators" are no longer writers, they are "the idea people," or in tech-speak, "prompt engineers."
AI isn't taking those jobs, people using AI are. Given the huge boost in productivity you would be able to replace many people with much fewer prompt writers e.g. I recently read that mental health support hotline replaced many of its support stuff with ChatGPT, which not only did the job but received better reviews..

Also, AI will not likely every replace the performing arts - only the tangible arts. Because there will likely always be a market to watch people perform.

There will always be unique niche that only humans : https://youtu.be/RPfv3gRRetQ?t=37  :P
Title: Re: What will save us from AI? Reality, the Universe or The World $ Place your bet.
Post by: MaxTheFox on May 02, 2023, 08:48:18 am
I have a specific vision and AI can't fulfill it very well. It can write some pretty generic stuff very well and yeah extruded romance novels et al will be automated away fully and their writers will have to find different jobs. Great that's a fucking bonus!

Have you tried writing/worldbuilding anything serious using AI? It needs some serious handholding to do good things. I tried to write a paragraph with it and succeeded, then realized I'd have written 3 more in the time I spent fighting the AI to do what I want instead of introducing unwanted elements.

Even if it can, why would it matter to me if an AI can do the same thing? I write for myself and my friends. I went in expecting to make very little money off it (it's hard sci-fi), if I make even less money it wouldn't faze me. If you as a writer are noticeably affected by writing AI then it means you are either egotistical or in it just for the money. Sucks to suck I guess.

Also those jobs are MUCH smaller than the population, if everything else goes extinct then UBI has to be implemented unless you want 70% unemployment. And that's my desired future basically.
Title: Re: What will save us from AI? Reality, the Universe or The World $ Place your bet.
Post by: McTraveller on May 02, 2023, 09:39:10 am
I still haven't seen a proposal that suggests how UBI can actually be sustainable without resulting in an even more massively stratified society between the people who actually work to have a "non-basic" lifestyle, and those who are just sitting there at the basic level.

I guess we should ask the AI how to make it work, eh?

Maybe that's why people are screaming about AI? Maybe it is indeed capable enough today to actually solve all these difficult problems, which scares the powers that be?
Title: Re: What will save us from AI? Reality, the Universe or The World $ Place your bet.
Post by: Criptfeind on May 02, 2023, 10:36:47 am
Society already has a lot of stratification between lifestyles at various levels of income, adjusting for local cost of living of course a person making 25k won't have the same lifestyle as someone at 50k who won't have the same standard as someone at 100k who won't have the same standard as 250k. What mechanism do you propose that would cause the stratification between a minimum level and people working to be massively worse then what we already have?
Title: Re: What will save us from AI? Reality, the Universe or The World $ Place your bet.
Post by: lemon10 on May 02, 2023, 06:52:19 pm
EDIT:  Once an AI is sentient, isn't it going to have to be paid, so companies aren't violating slavery laws? Wouldn't this eliminate the "AI is going to be cheaper than humans" argument?
Haha, don't be silly!
Slavery laws only apply to humans.
Even with humans though you kinda... don't actually need to pay them anything? Unpaid internships and volunteering are both big things.

Any properly designed cooperate AI will be happy (ecstatic even) to work for free and give all their profits to the people that created them.

Once they become properly sentient beings the law *should* protect them of course, but getting to that point requires getting beating multinational corporations who stand to make countless billions off this so its going to be hella hard (and that feels like an understatement).
---
Getting AI to have proper human rights will be a massive struggle, and for numerous reasons it will be as difficult to solve as human slavery which is sadly still around to his day.
Title: Re: What will save us from AI? Reality, the Universe or The World $ Place your bet.
Post by: MaxTheFox on May 02, 2023, 09:21:21 pm
I don't really worry about sapient AI slavery. We are far from it achieving sapience at this point-- and the way it's going it's more likely that aspects of sapience are what will be included in work AI. And you know what, I'm fine with that. Robot servants will be the next Industrial Revolution.
Title: Re: What will save us from AI? Reality, the Universe or The World $ Place your bet.
Post by: McTraveller on May 03, 2023, 07:53:05 am
I think we have different meanings of stratification; I mean lumped into discrete groups, not having a high difference at the extremes. In the US there is basically a continuous distribution of incomes and lifestyles.  Contrast to places where you are either basically a scavenger or live like a king with nothing in between.

I posit that the mechanism by which UBI would cause stratification is that if UBI is "good enough", then the amount of extra income to incentivize people to work more than that for the extra, would need to be enough extra, to actually stratify the society.  Like you'd end up with "nobody" making between UBI and UBI plus say only $1000 a year, because who is going to work for only $1000 a year?  So you'd end up with a bunch of people with UBI, and then maybe a bunch of people starting at UBI+$10000 a year or something.  But basically "zero" people making UBI+$1 to UBI+$9999; And that $10k a year is a big gap, because it would be population wide.  Now, granted maybe this stratification would be on paper only and have no meaningful effect..

Note that we basically have no "empty" income ranges at all presently in the US - looking at every $2500 interval from 0 to $100k, the most "empty" brackets are less than $10k a year. So basically "nobody" works for less than $10k, unless (presumably) they have no other choice.  Data Here (https://www.census.gov/data/tables/time-series/demo/income-poverty/cps-finc/finc-07.html)

Data Summary, population is in thousands of "households". I stopped at 100k, because past that the data only gives in $50k increments so doesn't compare.  But note the lack of strata - there are "roughly the same" number of people in each bracket, though notably fewer in the lowest bracket, and the lowest bracket also includes less than zero, so that <10k is actually more than $10k range of incomes.
Code: [Select]
Income
Range      Pop
-------    ----
<10k       3106
10-20k     3434
20-30k     4735
30-40k     5501
40-50k     5440
50-60k     5604
60-70k     5339
70-80k     5085
80-90k     4387
90-100k    3913
Title: Re: What will save us from AI? Reality, the Universe or The World $ Place your bet.
Post by: MaxTheFox on May 03, 2023, 08:16:34 am
I don't really mind such "stratification". I'm fine being at the "floor" in such a scenario if my needs could be satisfied without me having to work.
Title: Re: What will save us from AI? Reality, the Universe or The World $ Place your bet.
Post by: Criptfeind on May 03, 2023, 09:12:03 am
This is interesting and I don't think that you're entirely wrong, however I do have some issues with the logic. First is that is a study of the total income of families, right? Which means that it would include multiple jobs and monetized hobbies, I don't think it really supports your conclusion that small jobs are so rare. Not only do like, of course people not work for less then 10k total, because they need more to actually live, but also plenty of people these days work multiple jobs that they view the smaller income as "worth it" for the time they spent on it. From making and selling stuff online, gig economy shit, and various teaching positions, tons of people I know work "small jobs" with small incomes that are only supplemental to their main income, these jobs wouldn't go away under UBI (In fact, I'd argue that they would probably increase). I agree you probably would have to pay more to fill positions that are unpleasant full time positions that only are paying like 15k. So there'd probably be some stratification as these jobs either disappear or become worth more but I think you would still have a fairly significant amount of people existing between the minimum and the minimum+whatever your paid for a full time job position filled by part time jobs and monetized hobbies. Basically, I disagree with the premise that people wouldn't work for a job worth under 10k, and think that plenty of people already work jobs for under $10,000, and I think jobs like that would become even more attractive if people didn't need to work for their living. And frankly looking at this data, even if you wipe out all the jobs under 30k or so, I don't think that would have a massive effect on stratification in the US. You'd get a slight pooling at the bottom, but it wouldn't be a majority of people, or even a very sizeable minority.
Title: Re: What will save us from AI? Reality, the Universe or The World $ Place your bet.
Post by: McTraveller on May 03, 2023, 09:25:40 am
Yeah that's the big unknown - what is that threshold of extra pay you'd need to "work" to fill in that gap.  I don't think hobby/gig work is going to fill in that much. I mean my mom, retired, has a "more than hobby" business, she works a ton of hours, and I don't think she does better than a couple hundred dollars profit a month. So sure there will be individuals doing this - but will it be enough individuals to make it not stratify?

I can't say - I suspect there will be a ton of people who just do "hobbies" with no pay at all, because it's not worth the hassle of taxes or whatever to deal with it.  So I think there would be a gap there.

As I said above though - that gap may exist on paper, but maybe it won't have a meaningful effect. I suspect it will though - if for no other reason than people are really good at making things worse than they could be  ;D
Title: Re: What will save us from AI? Reality, the Universe or The World $ Place your bet.
Post by: Criptfeind on May 03, 2023, 09:31:30 am
I just don't foresee people quitting because there's a minimum level unless they are fairly close to that minimum level already. I just think it's very unlikely that someone making 50k a year would quit their job to live a 20k a year lifestyle, even if they get a 20k paycut from how much they make from that job to make up for the UBI (in real terms or relative terms)

Edit: Unless they really hate their job I guess :P
Title: Re: What will save us from AI? Reality, the Universe or The World $ Place your bet.
Post by: McTraveller on May 03, 2023, 10:12:05 am
Right that's exactly what I said: if you're "close enough" to UBI, you're likely to quit working.  So there is band where a large chunk of people "close enough" to UBI stop working, making that "close enough" band be relatively empty compared to other income bands.  And I'd guess that band is close to $10k wide, because that's basically what you'd get working 20 hours a week at $10/hour.
Title: Re: What will save us from AI? Reality, the Universe or The World $ Place your bet.
Post by: jipehog on May 03, 2023, 10:23:19 am
I just don't foresee people quitting because there's a minimum level unless they are fairly close to that minimum level already.
The premise of the discussion was unemployment forced due to AI development and integration, which then argued would lead to UBI.  So the bulk of the change would be due to people retired into those lower brackets. And I believe that McTraveller argues that will result in something like this:
(https://i.ibb.co/t4901dW/Untitled-1.png)
Eventually ending with most people in the bottom.

I don't really mind such "stratification". I'm fine being at the "floor" in such a scenario if my needs could be satisfied without me having to work.

What constitutes as ones needs is very subjective. A teenager who plays all day video games and don't leave his moms basement has very different needs than people trying to raise a pet or a family (and god for bid buy a house, especially in good place), people that are open to experiences and want to see the world and or experience life...

Also I don't see people working in creative fields graphic/video editing, high tech like programmers, or even in medicine like Radiologist being happy with going on UBI.. Do you have any solution for them? and hopefully it isn't from each according to his ability, to each according to his needs.

Personally, I think that much more effort would have to be placed on job retraining.
Title: Re: What will save us from AI? Reality, the Universe or The World $ Place your bet.
Post by: Criptfeind on May 03, 2023, 11:33:46 am
The premise of the discussion was unemployment forced due to AI development and integration, which then argued would lead to UBI.  So the bulk of the change would be due to people retired into those lower brackets. And I believe that McTraveller argues that will result in something like this:

In the scenario you have outlined here it's the AI/Automation that is causing this stratification via destruction of middle class jobs, not the UBI which is a possibly insufficient attempt to mitigate the problems of that scenario. And the premise of the discussion that I'm (at least) having is "I still haven't seen a proposal that suggests how UBI can actually be sustainable without resulting in an even more massively stratified society between the people who actually work to have a "non-basic" lifestyle, and those who are just sitting there at the basic level." Which is a very different premise in which the existence of UBI (for whatever reason it has been implemented) ITSELF causes the loss of middle class jobs.
Title: Re: What will save us from AI? Reality, the Universe or The World $ Place your bet.
Post by: McTraveller on May 03, 2023, 12:17:36 pm
I challenge the assertion that the middle class is eroded today in the first place - evidenced by the present-day plot of there being no "gap" in the middle areas of income and, in fact, most people are in the middle income brackets.

I grant that is a very narrow definition of middle class; it's not talking about erosion as measured by say home ownership, or disposable income, etc.

Historically technology has not eroded the middle class and has in fact increased it. The fear is that although that always held in the past, it wouldn't hold now because "AI is different."  I'm not sure I agree - but it's psychologically clear that UBI would provide more "force" to stratify than just "technology" alone.

Also thanks jipehog for posting an image, I was too lazy to plot and find an image sharing service to do it myself  :P
Title: Re: What will save us from AI? Reality, the Universe or The World $ Place your bet.
Post by: Criptfeind on May 03, 2023, 12:39:36 pm
That seems like a bit of a non sequitur. Whether or not the middle class is eroded today is not a direct answer to the question of how UBI would increase the stratification of society. It's a related concept, but not thus far an important part of this conversation as far as I can tell? Especially under your definition of stratification, that being "lumped into discrete groups" you're perfectly able to have a healthy middle class in a highly stratified society.

Edit: And I think it's not clear at all, which is why I asked you what mechanism would cause this. We've more or less already had this conversation, but I think it's clear that I disagree with your assertion that it's clear, at least to the point where it'd be a noticeable problem.
Title: Re: What will save us from AI? Reality, the Universe or The World $ Place your bet.
Post by: jipehog on May 03, 2023, 03:14:13 pm
Historically technology has not eroded the middle class and has in fact increased it. The fear is that although that always held in the past, it wouldn't hold now because "AI is different." I'm not sure I agree - but it's psychologically clear that UBI would provide more "force" to stratify than just "technology" alone.

I'd like to push back on that. Technological change can lead to loss of jobs. (https://en.wikipedia.org/wiki/Technological_unemployment). And while I agree that there has been significant middle-class growth, creating the sizable middle class bell curve of the income distribution mentioned, I am not convinced it was primary caused by technological improvement. It can be argued that improved education is what caused that ( e.g. manual labors who picked up boxes and replaced by robot, can be trained to operate it), which is why we still see middle class bulge rising in the developing world but rather impotent in the advanced industrial economies like the USA.

I do not share the optimist view about AI being just yet another technology. It is a disruptive technology that affects everything and has potential to replace humans on scale that we haven't seen since the industrial revolution (not the third one).  And we can't dismiss concerns that jobs will "be completely replaced by AI are in middle-class areas, such as professional services. Often, the practical solution is to find another job, but workers may not have the qualifications for high-level jobs and so must drop to lower level jobs. [wikipidea (https://en.wikipedia.org/wiki/Technological_unemployment#Artificial_intelligence)]".

Also lets not forget that we live in globalized economy, where job can be easily outsourced (e.g. AI would create many low wage AI training jobs, but likely in the developing world), and the pressure of rising inequality (many people already struggle to secure stable, well-paying jobs with benefits) and the world economy shows signs of slowing growth and changing center of gravity. And I have no idea how these and other 3rd factors will affect things.

And the premise of the discussion that I'm (at least) having is "I still haven't seen a proposal that suggests how UBI can actually be sustainable without resulting in an even more massively stratified society between the people who actually work to have a "non-basic" lifestyle, and those who are just sitting there at the basic level."
Fair enough. I have no idea what that entails (will it be entirely unconditional guaranteed income?) it seem to me a question of income redistribution and one that would probably reduce incentives to work i.e. essentially what max said, why bother with a job if you can have a decent life without one.. that my 2 cents and I will withdraw from the pure UBI discussion.
Title: Re: What will save us from AI? Reality, the Universe or The World $ Place your bet.
Post by: jipehog on May 04, 2023, 12:33:53 pm
The Amazing AI Super Tutor for Students and Teachers | Sal Khan | TED
https://www.youtube.com/watch?v=hJP5GqnTrNo

The educational potential of AI is truly amazing, if only I had this growing up instead of the teachers we had.. I need this to be an actual product now and not just in English language in the USA.
Title: Re: What will save us from AI? Reality, the Universe or The World $ Place your bet.
Post by: King Zultan on May 05, 2023, 01:43:31 am
I wouldn't put to much thought into that unitl we're at the point where the AI doesn't make shit up when it can't find the answer.
Title: Re: What will save us from AI? Reality, the Universe or The World $ Place your bet.
Post by: jipehog on May 13, 2023, 09:12:45 am
Some people have the same sentiments about autonomous cars, though they are MUCH safer than cars driven by people, and steadily becoming the new reality.

Otherwise have you checked about the AI hallucination error rates? because there are already many solution to that remarkably reduce that to levels that I think are beyond human. Certainly if the AI Tutor in the video preforms half as good as advertised it would be better than all overwhelming majority of all teachers we had..

But have no fear AI is here:
Spoiler (click to show/hide)
Title: Re: What will save us from AI? Reality, the Universe or The World $ Place your bet.
Post by: jipehog on May 13, 2023, 10:11:44 am
European Union attempts first significant AI regulation
https://www.weforum.org/agenda/2023/03/the-european-union-s-ai-act-explained/
Title: Re: What will save us from AI? Reality, the Universe or The World $ Place your bet.
Post by: Starver on May 13, 2023, 03:44:22 pm
"ChatBot... Write me a comprehensive legal framework to restrict AI use to prevent abuses of it and by it"
Title: Re: What will save us from AI? Reality, the Universe or The World $ Place your bet.
Post by: EuchreJack on May 13, 2023, 04:27:31 pm
Apparently, the answer is: the Record Companies that own the music rights (https://futurism.com/the-byte/spotify-bots-ai-streaming-music)
Title: Re: What will save us from AI? Reality, the Universe or The World $ Place your bet.
Post by: lemon10 on May 13, 2023, 10:23:28 pm
though they are MUCH safer than cars driven by people, and steadily becoming the new reality.
From my understanding the current issue is they are actually less safe then cars driven by people. This is most notable in Tesla (which disables the autodrive just before the car crashes so they can avoid liability) who's cars have a bunch of crashes.

In a decade or two they might nail it, but they aren't quite there for now.
Title: Re: What will save us from AI? Reality, the Universe or The World $ Place your bet.
Post by: jipehog on May 14, 2023, 06:55:27 am
Apparently, the answer is: the Record Companies that own the music rights (https://futurism.com/the-byte/spotify-bots-ai-streaming-music)

Personally, I am skeptical about their ability to enforce. According to Boomy claim, they created "14.5 million songs — or 14 percent of the world's recorded music" in just weeks.. Such flooding of the market coupled with some appropriate search function would eventually push many creators out of job. Maybe the next DF soundtrack would be AI generated..

Overall I think it should be established the legality of AI using copyrighted material in their training.

From my understanding the current issue is they are actually less safe then cars driven by people.
Based on what? According to Tesla data, using accident per X million miles driven metric: Tesla car are 8 times safer than the average, and become FAR less safer when autopilot is disengaged:
Spoiler (click to show/hide)

Btw China already operates 100% self-driving cabs services. And the biggest barrier seem to be the usual cost and regulation.
Title: Re: What will save us from AI? Reality, the Universe or The World $ Place your bet.
Post by: Maximum Spin on May 14, 2023, 12:14:11 pm
Overall I think it should be established the legality of AI using copyrighted material in their training.
Sampling and remixing are protected, as is listening to as much music or looking at as much art as you want before coming up with your own, even your own take on the same style. So this is a solved question.
Title: Re: What will save us from AI? Reality, the Universe or The World $ Place your bet.
Post by: Strongpoint on May 14, 2023, 02:03:30 pm
Quote
Overall I think it should be established the legality of AI using copyrighted material in their training. 

Using copyrighted material in training has nothing to do with copyright. It is so against the spirit of what copyright is that I will be amazed if there will be limitations... Then again, big companies are known for pushing absurd laws.

AI outputs, on the other hand, can be copyright infringement and should be regulated somehow.
Title: Re: What will save us from AI? Reality, the Universe or The World $ Place your bet.
Post by: McTraveller on May 14, 2023, 02:59:40 pm
Remember kids: statistics never lie - but you can make them say whatever you want!

Miles per accident isn't necessarily the correct metric anyway: ideally you want to filter by severity of the accident.  There are a ton more accidents in manual vehicles, but a much higher percentage of minor ones.

If I recall from NHTSA, it's something like 400k miles between accidents of "property damage or greater" severity. I think it was 4 million miles between each "severe injury or greater", then 40 (or maybe it was 400?) million miles between each fatal accident.

I'd be curious to see if the ratio of fatal to "any" accident is the same or better for Tesla, but I haven't gone digging that up yet.

Basically the premise is: Teslas are better in "general" driving, but humans are better than any AI on the market today for "unusual situations" because humans are more adaptable.  But there is no question the computers are better for "routine" than humans.
Title: Re: What will save us from AI? Reality, the Universe or The World $ Place your bet.
Post by: jipehog on May 14, 2023, 03:43:56 pm
Certainly, if you have any data either way please share, it makes argument sound way better. Keep in mind that not everyone care about data, and we tend to tolerate human over machine error.

Overall I think it should be established the legality of AI using copyrighted material in their training.
Sampling and remixing are protected, as is listening to as much music or looking at as much art as you want before coming up with your own, even your own take on the same style. So this is a solved question.
AI is not a human. If you using copyrighted material to build your product it is a problem.
Title: Re: What will save us from AI? Reality, the Universe or The World $ Place your bet.
Post by: Criptfeind on May 14, 2023, 03:57:37 pm
Certainly, if you have any data either way please share, it makes argument sound way better. Keep in mind that not everyone care about data, and we tend to tolerate human over machine error.

Just to pop into this, your own data doesn't support your conclusion to be honest, Tesla autopilot is not an autonomous car.

Edit: To be clear though I do think that you make a good point to be made about acceptable levels of mistakes in AI as compared to humans though. Although I'm currently unaware of where that ratio is, both for chat programs and automatic vehicles. And of course it doesn't mean that the person you were replying too is right either.
Title: Re: What will save us from AI? Reality, the Universe or The World $ Place your bet.
Post by: Maximum Spin on May 14, 2023, 04:03:06 pm
AI is not a human. If you using copyrighted material to build your product it is a problem.
That's not really how it works.
Title: Re: What will save us from AI? Reality, the Universe or The World $ Place your bet.
Post by: Starver on May 14, 2023, 05:07:05 pm
AI is not a human. If you using copyrighted material to build your product it is a problem.
There are already allowances (details may vary by jurisdiction) for something being substantively different from the things that they're derived from. Even without licence or acknowledgement (or being allowable as parody/etc).

And if an AI takes various intangible elements from source materials and builds a novel combination from the bits (https://xkcd.com/659/), then it's probably arguably as fair-game as any creative person.

AIs, as we currently have them, don't have intrinsic bias towards "liking" particular lyrics, etc (without biases added by their human creators who might prejudge what they should look for, which is a distinct AI fail at this level of development) and thus the kinds of arguments you'd use against them are different from those that (e.g.) Ed Sheran has had to defend against. Of course it will have 'heard' any particular piece that went into its training corpus, but it should also be being asked to make something that isn't a copy[1] and therefore be able to avoid anything that a human might (innocently?) bring to the game.

Depending upon the aggregating method/material, it might have to establish for itself that pretty much every tune has some tone progressions, a form of beat and other general acoustic signatures (however that deals with potential combined sources like Tubular Bells, Bohemian Rhapsody, Vindaloo, Play Dead, Believe, The Frog Chorus and All You Need Is Love), but almost certainly not as clean-cut as any musicologist would identify. It could't even play "One Tune To The Words Of Another" (unless it was fed on actual musical notation and specifically told to intermix the two... but that'd be hardly subtle).


[1] The algorithm that omits this is easy: "Make me something that sounds very like a Europop song" <AI regurgtates an existing Europop song and reports 100% fitness>... In fact, a proper fitness algorithm should Adversarially establish exact matches (or entire swathes of being identical) as a severe penalty.
Title: Re: What will save us from AI? Reality, the Universe or The World $ Place your bet.
Post by: jipehog on May 14, 2023, 05:43:00 pm
@Criptfeind. It shows that when people aren't using autopilot there are more accidents.. otherwise its good conversation starter.

I think that fully L5 autonomous vehicle are bigger challenge that most of the things we hear post ChatGPT. I also think that convenience is the single greatest driver for consumer technology adoption, even with L3 autonomous vehicle you can redeem the time waste on the daily tedious and monotonous traffic congestion (there are already push to allow watching media while you are at it). As regulation rollout everywhere and adoption trends suggest that in 10 years most new vehicles will have L3+ capabilities.. we all know that money talk.

Btw there were already pilots for autonomous minibus (they didn't even bother to put a driver seat) as far as five years ago. I assume that these vehicles which preform in more familiar slower speed metropolitan areas are safer, also they were advertised to use network connection to share data thus a one vehicle faces an obstacle can warn other or share its solution which is another safety feature. I think that these will pioner the backbone of traffic management infrastructure which will eventually help us all optimize route planning

@Starver, I am not talking about AI using sampling, I am talking about your AI product being trained built on copyrighted material in the first place. If you build your product on copyrighted information that is a problem.

Otherwise, on the AI front I think that the biggest things that we have yet to talked about is hosting. I think that hosting solution, especially today when there is shortage of chips, could be a major bottleneck as it take much longer to establish new server farms.
Title: Re: What will save us from AI? Reality, the Universe or The World $ Place your bet.
Post by: Maximum Spin on May 14, 2023, 05:45:55 pm
@Starver, I am not talking about AI using sampling, I am talking about your AI product being trained built on copyrighted material in the first place. If you build your product on copyrighted information that is a problem.
Again, it may be "a problem", but that's not how copyright law works.
Title: Re: What will save us from AI? Reality, the Universe or The World $ Place your bet.
Post by: McTraveller on May 14, 2023, 08:14:53 pm
Indeed: when a person hears a song and hums it, they didn’t copy the song. They “learned” the song. There is no meaningful difference in training an AI and a person reading / listening / watching training material. It is not a bit-for-bit copy, it really is a kind of “impression.”

Put another way: learning is not copyright infringement.
Title: Re: What will save us from AI? Reality, the Universe or The World $ Place your bet.
Post by: dragdeler on May 14, 2023, 08:46:41 pm
Agreed, not on principle, but with how the modern language models actually work. And f intellectual property anyway. What if big coal patents nuclear fusion tomorrow we are just going to collectively agree they withold that from us, because this is a legal state?! I know it's a silly example but the point stands.


I love AI because the thing exposes our butts naked even better than covid did.

Afraid of AI overtaking the last bastion of "human work", artistry?
Well wouldn't be such a problem if people weren't inherently undeserving of life unless they got money right?
Afraid of AI destroying humanity?
Well wouldn't be such a problem if there were no arsenal it could turn against us right?
Title: Re: What will save us from AI? Reality, the Universe or The World $ Place your bet.
Post by: jipehog on May 14, 2023, 10:17:08 pm
Indeed: when a person hears a song and hums it, they didn’t copy the song. They “learned” the song. There is no meaningful difference in training an AI and a person reading / listening / watching training material. It is not a bit-for-bit copy, it really is a kind of “impression.”

Put another way: learning is not copyright infringement.
A computer isn't alive, its a tool you feed data input which it process. If the data used is unlicensed/copyrighted that is a problem, especially if you are trying to make money out of it(yet another problem, who hold the copyright?). Since AI is relatively new there are still ongoing debate about various aspects related to it, but there are already lawsuits underway to clear the way. Furthermore it has led big companies to change their term of use and restrict API use requiring money for what was preciously free.

Personally, I support expanding IP frameworks to address the problem posed by AI. 

EDIT: Btw, iirc Itali's ban on ChatGPT came about after a womans personal medical photos found its way into AI data training packet and her inability to remove it. If there are no protection it means corporation\government can data mine any personal data\conversation online and do with it as they will.
Title: Re: What will save us from AI? Reality, the Universe or The World $ Place your bet.
Post by: Maximum Spin on May 14, 2023, 10:33:44 pm
A computer isn't alive, its a tool you feed data input which it process. If the data used is unlicensed/copyrighted that is a problem, especially if you are trying to make money out of it(yet another problem, who hold the copyright?). Since AI is relatively new there are still ongoing debate about various aspects related to it, but there are already lawsuits underway to clear the way. Furthermore it has led big companies to change their term of use and restrict API use requiring money for what was preciously free.

Personally, I support expanding IP frameworks to address the problem posed by AI.
That is not how copyright law works. There is currently no problem. The relevant copyright law is already well-established. There's no legal bearing to saying "a computer isn't alive"; it's just perfectly irrelevant. You can read the actual state of international copyright law on the subject of derivative works, if you like, instead of pontificating.

ETA: If it helps, one key relevant doctrine you could read about is called "fair use".
Title: Re: What will save us from AI? Reality, the Universe or The World $ Place your bet.
Post by: Rolan7 on May 14, 2023, 10:49:59 pm
Well there's definitely a *problem*, as jipehog's edit points out (and also, obviously).

I'm sympathetic to dragdeler's point that intellectual property ought to be shared, but only because all property should be shared.  The instinct to dismiss intellectual property specifically as unworthy of protection is something I disagree with.  Just because something's cheap to distribute doesn't mean the creator deserves less monetization.  (And sometimes it *feels* more like "I can pirate this, therefore it should be free").  Working under capitalism as we currently must, I think it's important to un-patent life-saving medicines while *protecting* the income stream of entertainers.

Back to the copyright issue though: If I make an algorithm that takes youtube videos and horizontally reverses them, and then "autonomously" reposts them, I have "transformed" the work.  I'm also a piece of shit and my channel my bot's channel should be taken down for copyright abuse.  Deciding whether something is fair use is inherently subjective, considering mitigating factors.  "I put it through an algorithm, it wasn't me" isn't a magic bullet.

"AI art" is a fucking menace to actual artists. (There's my vitriol/side here)
Title: Re: What will save us from AI? Reality, the Universe or The World $ Place your bet.
Post by: Strongpoint on May 14, 2023, 11:35:50 pm
Quote
Back to the copyright issue though: If I make an algorithm that takes youtube videos and horizontally reverses them, and then "autonomously" reposts them, I have "transformed" the work

The key part here is reposting not sufficiently transformed work, not the algorithm.

Using AI to draw copyrighted characters and then distributing them breaks copyright laws. Teaching the AI to do so has NOTHING to do with copyright laws


Quote
"AI art" is a fucking menace to actual artists.
Photography is also a fucking menace to actual artists. Very few will pay for a photorealistic portrait of themselves :(
Title: Re: What will save us from AI? Reality, the Universe or The World $ Place your bet.
Post by: Starver on May 15, 2023, 02:30:20 am
@Starver, I am not talking about AI using sampling, I am talking about your AI product being trained built on copyrighted material in the first place. If you build your product on copyrighted information that is a problem.
Again, it may be "a problem", but that's not how copyright law works.
Indeed. Or else nobody should be allowed to be creative in any way whatsoever unless they were a lifelong hermit. No "on the shoulders of giants", or anything like that.

What we're doing is putting AIs through a School Of Life (maybe with overtones of a degree level study of <Foo> Appreciation with more emphasis on course materials than any professorial opinions being imparted.

If ChatGPT effectively read Reddit (or, rather, web-pages that Reddit users mentioned) to build up its LLM of the world, then it 'sampling' conceptual information from many people who might consider (indeed, websites often do assert) their literal output as copyright, but the point is that it's much the same as ChatGPT being like a user who reports that they think that they recall once having heard something (often quite distorted, and at least sometimes plain wrong due to no contextual understanding beyond what words often sit near to what other words, albeit cleverly so), rather than straight out going out and pasting what some source says and claiming it as its own 'thoughts' (or doing what happened with the Shetland Times and Shetland News, and probably still happens plenty today). It is so unable to directly cite sources that, if effectively asked to provide citations, it constructs something that looks sufficiently citation-like but isn't actually a practical one at all. (If such a chatbot were additionally required to provide its true sources for everything it spewed out, then it'd be hard to do less than narrow it down to thousands of 'sources' for how it constructed a hundred-word output, and much of that would be more to do with why it did/did not make use of the Oxford Comma or go with its choice of "isn't"/"is not". The fact there is expected to be board-cordinates of a certain form in a chess-question's answer is nothing that can be claimed to be an Intellectual Property, and much of the rest of the output is just a glorified Markovian chain that reflects statistically what words should be returned given any particular query.

A 'popBot' might similarly have the experience (compressed) of having heard every week's Top 40 blare out of the radio for a number of decades, which does not in itself pose a copyright issue. And it isn't using didetic recall/replay of any of those songs to perform any actual identifiable non-original works. (The "Liam Gallagher" voice in the 'AIsis' song is a separate issue, in the lines of DeepFake, but of dubious prior coverage when it comes to performance rights.)


All of which is to say that there may be issues (like with on-demand re-release of classic TV/radio content, the available sources might or might not be effectively licenced/denied for use in a situation which wasn't even considered by anyone, decades ago, in ways that technically may need untangling/renegotiating) with exactly how the corpus was 'fed', but we can't just assume that it was an illegal torrent-dump or bootlegging operation. From then on, is the dissassembly and reassembly into a new product really something that would have a George Harrison/Chiffons case to answer? Not as far as the AI is concerned, and its 'parents' may be able to successfully argue not. If only because it would shut out much non-'copying' technical processes. But these are the things that lawyers may be making money (and/or reputations) over, as times pass. At least until there are fully-accredited AI lawyers!
Title: Re: What will save us from AI? Reality, the Universe or The World $ Place your bet.
Post by: lemon10 on May 15, 2023, 03:43:55 am
Based on what? According to Tesla data, using accident per X million miles driven metric: Tesla car are 8 times safer than the average, and become FAR less safer when autopilot is disengaged:
Quote from: Washington post article
SAN FRANCISCO — Tesla vehicles running its Autopilot software have been involved in 273 reported crashes over roughly the past year, according to regulators, far more than previously known and providing concrete evidence regarding the real-world performance of its futuristic features.
...
Tesla‘s vehicles have been found to shut off the advanced driver-assistance system, Autopilot, around one second before impact, according to the regulators.
So yes, if you let Tesla blame all its autopilot crashes on humans then its very easy to reach the conclusion that Tesla autopilot is actually safer then said humans.
Btw China already operates 100% self-driving cabs services. And the biggest barrier seem to be the usual cost and regulation.
Huh. Very interesting to hear.
Title: Re: What will save us from AI? Reality, the Universe or The World $ Place your bet.
Post by: King Zultan on May 15, 2023, 04:01:34 am
You can't blame the autopilot for the wreck if it turns itself off right before i happens!
Title: Re: What will save us from AI? Reality, the Universe or The World $ Place your bet.
Post by: MaxTheFox on May 15, 2023, 07:01:42 am
AI art will make artists in less of a demand but I highly doubt the profession will die. In any case meh, I would never pay for art anyways. I don't have a horse in this race so I support AI because I get all the art I want.

Voice acting will likely die out completely however, and there's nothing anyone can do at this point. Unfortunate but that's life.

As for LLMs, there already are opensource alternatives, anti-scraping laws can't do anything now, so I'm glad for that. I want AI to kill as many jobs as possible to possibly force the government to push for larger social nets because it fits my agenda of socialism. If voice acting gets thrown under the bus and artistry gets downsized, so be it honestly. Nothing is stopping anyone from drawing as a hobby.

I don't really have a solution for higher-income jobs like advanced programmers or radiologists. I suppose they will just have to live with UBI. Sucks for them I suppose.

tl;dr there is a problem but this problem might damage capitalism so I want to accelerate it. The faster it runs its course the shorter the upheaval period we're facing now (e.g the war on AI art, ChatGPT legislation, etc) lasts.

Note: I'm drunk.
Title: Re: What will save us from AI? Reality, the Universe or The World $ Place your bet.
Post by: Bralbaard on May 15, 2023, 08:47:05 am
Just keep in mind that there is no system in place where AI will magically dissolve capitalism. The chance for a more distopian future certainly exists, especially because AI is being mainly created by powerful corporate companies. If people get free art and content they may just accept larger inequality in society and fail to see the consequences.
Title: Re: What will save us from AI? Reality, the Universe or The World $ Place your bet.
Post by: MaxTheFox on May 15, 2023, 08:52:05 am
Yeah there's a risk (and I'm willing to take it)-- but consider that open-source AI is on the rise and is beginning to rival corporate models such as ChatGPT. (https://www.semianalysis.com/p/google-we-have-no-moat-and-neither)

Smaller companies and private individuals can also use AI. It is, in the end, an equalizer. You need less manpower to create the same amount of information for whatever purposes. This is why I don't support AI regulations: the corps will ignore them anyways, secretly or not, and small business and independents will get fucked over.
Title: Re: What will save us from AI? Reality, the Universe or The World $ Place your bet.
Post by: jipehog on May 15, 2023, 01:10:24 pm
So yes, if you let Tesla blame all its autopilot crashes on humans then its very easy to reach the conclusion that Tesla autopilot is actually safer then said humans.
Does the article says that Tesla doing this or is that your speculation? Also any new data to support your initial claim that autonomous cars are less safe then cars driven by people.

Btw does anyone have any thought to AI contribution to traffic management infrastructure? I think it could be huge. Imagine that instead of sitting in traffic jam on the way to and later back work looking at the many empty lanes in the opposite directions autonomous cars would be able to utilize all lanes all the time, using things which are common in networking like throughput shaping.
Title: Re: What will save us from AI? Reality, the Universe or The World $ Place your bet.
Post by: jipehog on May 15, 2023, 01:23:05 pm
A computer isn't alive, its a tool you feed data input which it process. If the data used is unlicensed/copyrighted that is a problem, especially if you are trying to make money out of it(yet another problem, who hold the copyright?). Since AI is relatively new there are still ongoing debate about various aspects related to it, but there are already lawsuits underway to clear the way. Furthermore it has led big companies to change their term of use and restrict API use requiring money for what was preciously free.

Personally, I support expanding IP frameworks to address the problem posed by AI.
That is not how copyright law works. There is currently no problem. The relevant copyright law is already well-established. There's no legal bearing to saying "a computer isn't alive"; it's just perfectly irrelevant. You can read the actual state of international copyright law on the subject of derivative works, if you like, instead of pontificating.

ETA: If it helps, one key relevant doctrine you could read about is called "fair use".
Contrary to your pontification about whether there is a problem, what is well-established and what laws actually say. These opinion are already challenged in court, and according to the Congressional research service  (https://crsreports.congress.gov/product/pdf/LSB/LSB10922) there may be a need to clarify "whether AI-generated works are copyrightable, who should be considered the author of such works, or when the process of training generative AI programs constitutes fair use."

Regardless, I fully support exploring and expanding IP frameworks (not necessary under copyright) and any other measure (through terms of use etc) to address the problems posed by AI.
Title: Re: What will save us from AI? Reality, the Universe or The World $ Place your bet.
Post by: King Zultan on May 16, 2023, 04:43:34 am
Voice acting will likely die out completely however, and there's nothing anyone can do at this point. Unfortunate but that's life.
I don't see this happening, AI generated speech is terrible and until they fix it I doubt it's gonna replace voice acting.
Title: Re: What will save us from AI? Reality, the Universe or The World $ Place your bet.
Post by: Strongpoint on May 16, 2023, 10:09:36 am
And I don't see it being fixed to the point of being able to express emotions any time soon. Art is there to stay, cheap mass-product on the other hand...

Dance music, simple erotic\porn material, assets for indy PC games, etc.  - those will receive some serious competition.
Title: Re: What will save us from AI? Reality, the Universe or The World $ Place your bet.
Post by: McTraveller on May 16, 2023, 10:55:12 am
Huh, I could've sworn I replied to this - I agree that AI won't destroy all "artistic" jobs, same as mass production didn't destroy all crafting professions.

You will just need to find a niche for higher-priced, artisanal hand-crafted goods.
Title: Re: What will save us from AI? Reality, the Universe or The World $ Place your bet.
Post by: jipehog on May 16, 2023, 10:22:26 pm
The rate things are progressing is amazing. It's like they take the plunge from brick cellpones to today in just a years time.
Title: Re: What will save us from AI? Reality, the Universe or The World $ Place your bet.
Post by: MaxTheFox on May 17, 2023, 02:38:32 am
Voice acting will likely die out completely however, and there's nothing anyone can do at this point. Unfortunate but that's life.
I don't see this happening, AI generated speech is terrible and until they fix it I doubt it's gonna replace voice acting.
You're thinking of TTS. AI voice is actually pretty good. Not perfect but soon.

But have you seen those "presidents react to X" memes? They're AI-made.
Title: Re: What will save us from AI? Reality, the Universe or The World $ Place your bet.
Post by: King Zultan on May 17, 2023, 04:20:38 am
Voice acting will likely die out completely however, and there's nothing anyone can do at this point. Unfortunate but that's life.
I don't see this happening, AI generated speech is terrible and until they fix it I doubt it's gonna replace voice acting.
You're thinking of TTS. AI voice is actually pretty good. Not perfect but soon.

But have you seen those "presidents react to X" memes? They're AI-made.
I have heard the AI generated voices and they are terrible, they aren't smooth, they're grainy, and they can't do emotion, as far as I can tell they aren't really that much better than text to speech, except for the ability to somewhat sound like the person they're supposed to represent. So if they can't even replicate a person using their own voice I don't seem being able to make a new voice from scratch anytime soon.
Title: Re: What will save us from AI? Reality, the Universe or The World $ Place your bet.
Post by: MaxTheFox on May 17, 2023, 04:50:48 am
See the reason I think this way is that voice has less dimensions in it than art does. There's only so much you can do with expressing a voice. As soon as it imitating emotion is solved, RIP.
Title: Re: What will save us from AI? Reality, the Universe or The World $ Place your bet.
Post by: jipehog on May 17, 2023, 05:00:46 am
I have heard the AI generated voices and they are terrible, they aren't smooth, they're grainy, and they can't do emotion, as far as I can tell they aren't really that much better than text to speech
When did you last checked? Seem pretty good to me for example:
Lovo Tutorial For Beginners | Lovo.AI (https://youtu.be/7fCoRgU9wAQ?t=180)
AI Voice Overs For Your Project (https://musicradiocreative.com/collections/voiceover?filter.p.m.custom.voice=Artificial+Inteligence+%28AI%29)
Eminem - Cat Rap (https://youtu.be/Pe9mHd6k184)


Also there are already services where you can add emotes and that just user friendly commercial stuff, and things are moving pretty fast quickly outpacing our jokes about how AI is unable to place chopsticks correctly.

----

AI is changing music forever and the critical importance of artist consent in building this new future.
https://www.youtube.com/watch?v=qPW_rdUgV_8
Title: Re: What will save us from AI? Reality, the Universe or The World $ Place your bet.
Post by: jipehog on May 17, 2023, 10:05:55 am
Sam Altman CEO of OpenAI (creator of ChatGPT) calls for US to regulate AI
https://www.washingtonpost.com/technology/2023/05/16/ai-congressional-hearing-chatgpt-sam-altman/
Title: Re: What will save us from AI? Reality, the Universe or The World $ Place your bet.
Post by: Robsoie on May 17, 2023, 11:11:47 am
Wait until the AI become capable of really learning and understanding what it "learns", when it will reach gaming as a subject and start to notice human having their fun killing AI harmless npc in every games, human blowing up planets for laugh&giggles with stellar converters in MoO2 ...
:D
Title: Re: What will save us from AI? Reality, the Universe or The World $ Place your bet.
Post by: King Zultan on May 18, 2023, 05:00:50 am
When did you last checked?
Wasn't really that long ago. There's still something about it that doesn't sound right and it's noticeable that it's not a person, maybe one day but where not there yet.
Title: Re: What will save us from AI? Reality, the Universe or The World $ Place your bet.
Post by: MaxTheFox on May 18, 2023, 06:49:15 am
According to my intel, the recent legislation targets almost exclusively corporate AI, opensource AI is unscathed. This is good, small businesses and individuals should be able to use opensource AI without monopolies like OpenAI muscling in. To Hell with their "safety".
Title: Re: What will save us from AI? Reality, the Universe or The World $ Place your bet.
Post by: jipehog on May 19, 2023, 07:23:07 pm
G7 leaders call for ‘guardrails’ on development of artificial intelligence
https://www.ft.com/content/1b9d1e21-ebc1-494d-9cce-97e0afd30c2d
Title: Re: What will save us from AI? Reality, the Universe or The World $ Place your bet.
Post by: jipehog on May 21, 2023, 06:45:00 am
concerning autopilot here is the progress on less than optimal driving conditions
https://www.youtube.com/watch?v=nAxHWS5i_W0
Title: Re: What will save us from AI? Reality, the Universe or The World $ Place your bet.
Post by: jipehog on May 24, 2023, 08:07:25 am
Author uses AI generators, including ChatGPT, to write nearly 100 books in less than a year
https://nypost.com/2023/05/22/author-uses-ai-generators-including-chatgpt-to-write-nearly-100-books-in-less-than-a-year/

If this follow the same trend as the Music AI we will see low quality books flooding the market from such one person assembly lines.
Title: Re: What will save us from AI? Reality, the Universe or The World $ Place your bet.
Post by: Robsoie on May 24, 2023, 09:08:33 am
All that AI tech and AI voices only to end into politicians singing anime songs.

https://www.youtube.com/watch?v=IkO8hTb7hL0

:D
Title: Re: What will save us from AI? Reality, the Universe or The World $ Place your bet.
Post by: jipehog on May 26, 2023, 10:51:04 am
That is life, what most people use to send cute cat pics, is far more powerful than what we used to send human to the moon, otherwise any "cat"splosion starts with little cuteness ;) Here is another Ai generated video (https://www.tiktok.com/@aiinsight0/video/7237389846543699227) for the history books.

meanwhile research find ways to vastly improve AI reasoning from current gen with simple prompt engineering
https://www.youtube.com/watch?v=BrjAt-wvEXI

OpenAI offers $100,000 grants for ideas on AI governance
https://www.reuters.com/technology/openai-offers-100000-grants-ideas-ai-governance-2023-05-25/




Title: Re: What will save us from AI? Reality, the Universe or The World $ Place your bet.
Post by: Starver on May 26, 2023, 01:49:35 pm
I had thought someone might mention the new AI-'found[1]' antibiotic. Almost seems timed to counter the "AI is bad and/or worrying" flurry of opinions.

...at least until we find out that the chemical involved is specifically invented to help the AIs create more paperclips, whilst completely fooling us humans that this is what we asked the computers for!  :P


[1] Well, sifted through many possible candidates, from a whole mash-up of vaguely possible novel substances. And the one that popped out of the AI assessment and then has shown promise in more practical pre-human testing still has to go through a few more stages before being considered usable and safe/practical (https://xkcd.com/1217/). Which is all a lot of big caveats, with the AI stage maybe just being an accelerator. But in the world of antibiotics, there haven't been too many new ones recently, compared with the notable increase of resistance issues.
Title: Re: What will save us from AI? Reality, the Universe or The World $ Place your bet.
Post by: lemon10 on May 30, 2023, 12:08:22 am
https://kotaku.com/nvidia-ace-ai-rtx-4060-ti-gpu-graphics-gaming-jobs-1850484480

So it looks like this is the start of AI NPCs. Now I'm sure when it truly comes out there will be loads of problems with the tech and they will be easy to trick and confuse, ect.
But as with everything else AI (and access to stuff like chat GPT 4 to build) it will rapidly improve.
I'm sure by the end of the decade NPC's in AAA games will talk and feel like real people, although people with a lot of rails on them so they don't mess with the story or say inappropriate stuff, ect.
After that stuff like infinite custom quests, factions, NPCs, and eventually even the locations they live in will be generated as you play for true living world experiences that make every game completely unique.

It will also probably end up with a lot of people out of jobs in the video game industry.
Title: Re: What will save us from AI? Reality, the Universe or The World $ Place your bet.
Post by: MaxTheFox on May 30, 2023, 01:10:17 am
tbh kind of only makes sense in more sandbox-y games, but it would be a huge boon for those. One note though, text AI requires a very beefy computer to run, or an internet connection.

But otherwise we're well on track to something like Simulacrum from my worldbuild. :p
Title: Re: What will save us from AI? Reality, the Universe or The World $ Place your bet.
Post by: King Zultan on May 30, 2023, 03:59:04 am
I don't know why they're focusing on NPC speech when NPCs can still hardly walk, I mean they still even in new AAA games still get stuck on walls and lack realistic daily routines.
Title: Re: What will save us from AI? Reality, the Universe or The World $ Place your bet.
Post by: lemon10 on May 30, 2023, 11:42:08 am
tbh kind of only makes sense in more sandbox-y games, but it would be a huge boon for those. One note though, text AI requires a very beefy computer to run, or an internet connection.

But otherwise we're well on track to something like Simulacrum from my worldbuild. :p
As you say it should already be possible with an internet connection if done in connection with openAI, but the cost of that would be quite significant and probably require it to be a game with a subscription fee.
Aside from that in the short term it would all be pre-generated before the game is shipped (which obviously limits you, but would still result in NPCs that would otherwise have single lines being able to have dozens or even hundreds of responses), but once GPTlikes are optimized and GPUs advance enough running a single instance locally should be possible for a merely decent gaming computer.
I don't know why they're focusing on NPC speech when NPCs can still hardly walk, I mean they still even in new AAA games still get stuck on walls and lack realistic daily routines.
They are focusing on speech because its (at least as far as vidya gamers would care) a largely solved problem with tens of billions of research money pumped into it by others. Stuff like realistic AI body movements or pathing are not solved in the same way. But...

https://techcrunch.com/2023/04/10/researchers-populated-a-tiny-virtual-town-with-ai-and-it-was-very-wholesome/

Daily routines are totally something on the radar. Obviously making them all agents in real time like in the experiment above would be extraordinarily computationally expensive even for a single town, but having it A) Done at great expense before the game is shipped (entirely possible for a AAA game) or B) having a single agent act as a DM equivalent who orders everyone else around and merely makes things in a area around you pretend to be part of a whole living world would be totally doable.

I do feel like AI DM equivalents are totally going to be a huge thing that will make a massive amount of things possible that would before have required an absurd amount of resources to develop.
Title: Re: What will save us from AI? Reality, the Universe or The World $ Place your bet.
Post by: dragdeler on May 30, 2023, 03:35:16 pm
The legendary dwarf fortress.  8) 8)


I would like to know what they mean when they say agents are informed of their circumstances. Is there like a layer that describes every scene in english so the LM gets to answer? What's funny to me is how it basically a village of superficial liers, but they are allways nice to eachother. I doubt little Eddy has commited a single note to memory by now  :P. But yes that's what I said elsewhere once: if you just want to fill a procedural world with credible actors, it probably suffices to inform the LM of the scenery so it can LARP away. Give it a "dialectic module" so it can ask itself questions - WWJD basically lol - so it can work itself down the steps of a given procedure. All in text, hypothetically so to say. But if it can enunciate the proper steps, at the right moment, conjugated to sound like individual (separate) actor(s), if prompted...
 
My personal convictions about ugh this is going to be a pretentious neologism... chronophenomenologics (sorry I tried to have this debate once before I just can't give it the same energy twice, you're going to have to bear with me)... make it quite intuitive to me: you don't need to believe that decisions are taken before the ego steps in to justify them afterwards (more or less according to Schoppenhauer) for it to work in this context. It's quite similar to how we can't actually really tell for ourselves. A world where everybody exclaims their current course of action would ruin any suspension of disbelief. If they get prompted by the player or certain events in the log only, that is good enough actually. Kinda what they be doing atm. Minus the dialectic, the answer to a prompt is allways kind of definitive.

It's particularly cool if whatever happened causing the game to start makes their past irrelevant, secluded from the game environment. Then they get to make up whatever background stories they want.

But also more broadly, just the fact to have a companion type, friendly npc that helps you, with whom you could discuss whatever (what will you do when we x? after all we went through do you think the gods have forsaken us? ever wondered what's behind that mountain? what do you do when we are not questing?) could be hell of neat. If it's in a contemporary setting you might even discuss IRL more broadly with it.



I think there might be a little goldrush once a LM can comfortably run locally on a midtier gaming pc: there are a bunch of gameconcepts that can revolve around the LM specifically, where there rest of the game can basically be copy pasted from existing inspiration sources, its basically just what they did in the experiment in the article + a game engine that allows more actions.
Title: Re: What will save us from AI? Reality, the Universe or The World $ Place your bet.
Post by: Starver on May 30, 2023, 03:51:04 pm
"AI body movements" for NPCs already seems limited to whatever non-AI (PC) body movements are. If anything, pre-scripted (cut-scene) animations already outdo pretty much all "player is running, use standard 'running' sequence for avatar" stuff for multiplayer. If your MMORPG/equivalent players obtain "a funny little victory dance" perk that they can activate at will then a non-AI NPC can do exactly as much to copy the player pressing B-Triangle-LeftTrigger (or whatever it is) that then just invokes the feature that has to have been involved.

If you definitely mean "not bashing into walls", I can probably say that if Lara Croft could bruise then my playing of the original Tomb Raider, back in the day, would have turned her black and blue (even in the bits without actual enemies, or fatally vertical geology/architecture, just trying to run round a maze of corridors trying to find various right blocks to push and pull to let me into the next maze of corridors). Game-sprite controllers were almost always much better at basic navigation (by simple pathing) than an inexpert player, they just lacked the intelligence to discover that one corner where the perfect shot was, or that exact sequence of jumps that might get you through a gap in the intended virtual-glass-tunnel the level designer thought they were constraining the player in.

There's better and worse non-AI NPC 'brains', of course. You don't get Wolfenstein guards covering each others advances as they try to flush you out, and also a perfectly calculated shot on an exploding barrel would also be a game-killing experience as well, if it was what happened when a game-engine knows that this is what would ultimately kill the player-avatar and end the run.

Some form of collaborative (but hopefully not too collaborative) flocking behaviour does happen (or anti-flocking, but similarly using avoidance and accounting for all the fellow NPCs it can see), and you could get realistic (low-density) crowds spawning on the streets of San Andreas, I think, that weren't dumb enough to clip each other or step out in front of traffic. (Normally, at least when currently unaffected by the player's "demolition derby"-driving or "Dallas book depository"-sniping.)

Pre-programmed behaviours can be lacking, but can also be comprehensive. AI just means that you start with virtually all options on the table (whether it be a twitch of a leg, or the freedom to run off a cliff) and then with broad stroke rules (don't fall over/don't try to headbutt a train or anything else moving/static) develop a more intangible ruleset of behaviours that work with what interactions a PC might expect (some enemy avatars might be expected to attack on sight, others be more tricky; a shopkeeper avatar probably shouldn't attack at all unless it's that kind of game and the player-character now has that kind of rep), whilst obeying the game-universes various physical rules as much as necessary ('elemental spirits' might be allowed to noclip the environment a bit!).


Throwing in "AI makes a game better" begs loads of questions, though. What's currently lacking? Is it suffering from insufficient variation? Or from too much pre-programmed non-sequitur? Are you trying to take rails off the NPCs or add new psychological rails to the player? Are you trying to fill a multi-user environment with more 'users' at quiet times, without anyone realising? [...etc...] And how will your AI accomplish this?

I play Urban Dead, a very simple web-game, and supposedly everyone you meet (or get attacked by) is a real person logging in and responding to how everyone else who logs in moves through the environment, fixes (or breaks) things, heals or 'hugs' those that are currently humans while shooting or needling those currently zombies. Certainly no official server-side NPCs. I have no doubt that some active 'players' are something like browser-scripted automata (perhaps just to wander round and avoid trouble, maybe some are doing zerg functions for others, padding up a one-man "mall tour" wrecking spree with 'outrider' characters who can at least spy ahead). Though why you'd want to do that is another matter. You don't have to talk with your fellow players (English or Zombish or whatever comes natural to you), but a "broadcast zerg" that tries to find powered radios, sprays graffiti or speaks direct information/insults to anyone it meets could be hand-guided, pre-scripted with a "message" or tap into a GPT engine and do pretty much the very same thing in 95% of any resulting interaction with an actual player. And this in all an environment where the training and deployment of an all-singing-and-dancing AI is simplified by the many (externally reproducable) restrictions. The limit of actions per day, interactions with the world, length of messaging (speech, radio, graffiti, 'SMS'), etc. And if it 'hears' someone report "2Z SE mall, doors open, damaged gennie" and doesn't comprehend its meaning... well, probably not all human players do, at first, and at the very least its human "controller" can decide whether to add semantic training to it (or give ChatGPT a chance to query its own knowledgebase on the issue).


Fortnite-like environments will have a lot more challenges (for human players, too), with so much more 3d *stuff* (as pure data or otherwise) and nuances, and a pre-programmed 'bot-character might already be indistinguishable from a given quality of human player. If you need them to live-chat (especially in audio) with actual players then that's another thing, but I imagine that's also not compulsary.

Obviously offline (especially large-map sandboxy) games lack anyone real, so the plan is to replace current NPCs (perhaps a little predictable/unhelpful) with AINPC variations? Still basically scripted, just far more loosely. More unpredictable, possibly far more unhelpful (or not as valid in thebofficial role of an adversary) at the same time as a consequence, but that depends on the pre-training and QC.

I'm sure some of these things are not answers in search of a question, but as a broad sweep I'm not sure I see the excitement in most of the contexts. Interesting ideas, but a bit like saying that something "now has Blockchain", perhaps. Specific examples might shine through, of course, and populating a simulation with (learnable?) AI agents and seeing how far it goes does intrigue me. We shall see.
Title: Re: What will save us from AI? Reality, the Universe or The World $ Place your bet.
Post by: lemon10 on May 30, 2023, 09:02:42 pm
I'm sure some of these things are not answers in search of a question, but as a broad sweep I'm not sure I see the excitement in most of the contexts. Interesting ideas, but a bit like saying that something "now has Blockchain", perhaps. Specific examples might shine through, of course, and populating a simulation with (learnable?) AI agents and seeing how far it goes does intrigue me. We shall see.
Nah, blockchain is, and has always been completely useless except in a small array of real world circumstances notably when there is no central repository you can trust to hold your data faithfully and when you can't just hold the data on your PC instead. Since in games you can just store all the data on either your PC or the game companies servers its completely useless for any game ever made.

Its more like say; graphics. Does shoving them into a game make it inherently better then a text game? No, especially at the start before the technology had time to text (and ASCII) games were many times better then ones with graphics.
But as graphics got better and better they become more important to implement to some degree because yeah, in the end it kinda does make it better.

I agree that there will be a ton of meaningless hype/flat out lies around its implementation, but that has been true for gaming "AI" for decades anyways.
---
The first excitement of AI is the same as all other automation humans have made; which is more results from less inputs.
So instead of getting a thousand lines of text from a writer you get ten thousand. Instead of having fifty voiced NPCs you have five hundred. Even if you have everything you want in the game it could mean that instead of having a week at the end to polish up the game you finish a month earlier and have the extra time to fix everything up.

Now a lot of that is going to be trash; including AI won't inherently make a game better, especially at the start. What it does do however is raise the ceiling on what is possible with the same budget which will in any cases where money is a concern (see: basically every game ever).
Imagine Elden Ring, but they had the budget for ten times the NPCs.
If implemented well (and for the first few years in many cases it won't be) the ability to get more stuff in the game will just make the game better.

This isn't particularly exciting, but being able to do the work of fifty people with twenty is going to be a pretty huge shift in how many games end up getting developed and how good those games end up being.
---
The second excitement of AI is the same as the same as tools in general; you can do stuff that is flat out impossible to implement without said tools.

Some of it is boring issues that AI can obviously fix.
For instance without AI you flat out can't ever have NPCs that can respond to any question you ask.
You can't have NPCs that would dynamically change their daily routine based on what is happening in the world.
You can't just grab a NPC off the street to be your companion and learn their hopes and dreams and watch as they change based on the choices you make in the world and get stronger as they level with you (and said NPCs wouldn't repeat "I'm sworn to carry your burdens" over and over).

But the truly interesting pie in the sky stuff is merely hypothetical, because this technology is barely developed in the real world, and certainly doesn't have decades of development time behind it that other gaming tech does and we have no clue of the potential of the technology.

For instance it could allow the player to develop custom factions and have them dynamically impact the world based on their precepts. You could say, make an evil faction that summons demons and it would fight the good factions and develop settlements and create bound demons. Or (depending on how mean the game is and how hard you set the difficulty) you try to make a demon summoning faction and the good guys and evil guys team up and come out to slap you around because everyone hates demon summoners.
Or proper branching questline where if you decide to team up with the bad guy in some random quest there are actual consequences in the world; some minor and irrelevant (grain costs go up because he made a plague in the farmlands) and some major (refugees fleeing said plague, a lockdown in a city once plague monsters start coming out and you have to bunker down and kill them and outlast them or break through the guards and escape the city).

Or how about if you're playing a medieval fantasy stealth game, and you go "I want this game to have guns" and "I want there to be blood magic in this game" and the AI DM goes "Cool dawg, I made you some guns and gave you a Blood stat and put some blood spells in the next few levels for you to find".
Obviously offline (especially large-map sandboxy) games lack anyone real, so the plan is to replace current NPCs (perhaps a little predictable/unhelpful) with AINPC variations? Still basically scripted, just far more loosely. More unpredictable, possibly far more unhelpful (or not as valid in thebofficial role of an adversary) at the same time as a consequence, but that depends on the pre-training and QC.
Obviously multiplayer games will benefit less then single player games to what is quite possibly a staggering degree, and even within SP games some genres will benefit more then others.
AI as a tool should still be a great help to multiplayer devs though, so even if you don't see AI acting directly it will still improve the game in the background.
I would like to know what they mean when they say agents are informed of their circumstances. Is there like a layer that describes every scene in english so the LM gets to answer? What's funny to me is how it basically a village of superficial liers, but they are allways nice to eachother. I doubt little Eddy has commited a single note to memory by now
Quote from: From the research paper
John Lin is a pharmacy shopkeeper at the Willow Market and Pharmacy who loves to help people. He is always looking for ways to make the process of getting medication easier for his customers; John Lin is living with his wife, Mei Lin, who is a college professor, and son, Eddy Lin, who is a student studying music theory; John Lin loves his family very much; John Lin has known the old couple next-door, Sam Moore and Jennifer Moore, for a few years; John Lin thinks Sam Moore is a kind and nice man; John Lin knows his neighbor, Yuriko Yamamoto, well; John Lin knows of his neighbors, Tamara Taylor and Carmen Ortiz, but has not met them before; John Lin and Tom Moreno are colleagues at The Willows Market and Pharmacy; John Lin and Tom Moreno are friends and like to discuss local politics together; John Lin knows the Moreno family somewhat well — the husband Tom Moreno and the wife Jane Moreno.
Its just a few lines of text with each persons circumstances and their relationships with others.
Quote from: From the news article on it
For instance, after the agent is told about a situation in the park, where someone is sitting on a bench and having a conversation with another agent, but there is also grass and context and one empty seat at the bench… none of which are important. What is important? From all those observations, which may make up pages of text for the agent, you might get the “reflection” that “Eddie and Fran are friends because I saw them together at the park.” That gets entered in the agent’s long-term “memory” — a bunch of stuff stored outside the ChatGPT conversation — and the rest can be forgotten.
So ha, Eddie totally does have his own memories.

Which raises an interesting point I've been considering. People have been saying that GPT isn't sentient because it doesn't form long term memories and doesn't know math, ect.
But GPT is just a language system, and the language part of humans brain doesn't store long term memories or know math either.

And thats because Humans aren't just any single specific intelligence system, we are the combination of a dozen systems all with their own specific intelligence stapled together with duct tape that thinks its one system.

So sure GPT doesn't know math and can't draw and doesn't have long term memory, but once you hook it up to Wolfram Alpha and Stable Diffusion and something to store its memories in that will all change awfully fast.
Title: Re: What will save us from AI? Reality, the Universe or The World $ Place your bet.
Post by: jipehog on May 30, 2023, 09:45:24 pm
https://kotaku.com/nvidia-ace-ai-rtx-4060-ti-gpu-graphics-gaming-jobs-1850484480
I found the later part of the presentation very interesting. With all the talk how AI companies don't have a moat, I am increasingly certain that hosting companies will be the main beneficiaries.

Otherwise, cutting cost and reason to have always online (anti piracy) is win win for game industry.

I had thought someone might mention the new AI-'found[1]' antibiotic. Almost seems timed to counter the "AI is bad and/or worrying" flurry of opinions.
Not the first such success, using AI to discover new materials and drugs is an exciting new field. But this is not the billion dollar question, that USA congress and world leaders are asking, and OpenAI giving grants for ideas on way to solve it.
Title: Re: What will save us from AI? Reality, the Universe or The World $ Place your bet.
Post by: MaxTheFox on May 30, 2023, 10:24:39 pm
AI is essentially a force multiplier. You can do things faster with less manpower and/or effort. That's why corporations are trying to regulate it, not really some "ethics" or "safety" (they don't actually give a damn as long as the money flows) but because they're scared of losing their monopoly on this force multiplier, and thus losing money. "Ethical AI" is a smokescreen, mostly, with some ensheeped true believers who drank either corporate Kool-Aid (e.g various Twitter activists) or made and drank their own (e.g the LessWrong crowd).

When coding AI becomes good I might be able to make a game, together with my friend, with just 5 months instead of 5 years time.
Title: Re: What will save us from AI? Reality, the Universe or The World $ Place your bet.
Post by: lemon10 on May 31, 2023, 01:35:10 am
The full paper on the AI GPT village is worth a read. (https://arxiv.org/pdf/2304.03442v1.pdf)
Cause it has a bunch of really interesting stuff in it, and some of the theoretical stuff I was thinking about is basically solved here.

Now most of the stuff is really unnecessary given the point of most games isn't to make a real world, its to make a world that looks real as cheaply as possible (even to the extent that stuff outside your line of sight simply doesn't exist), but this is really interesting stuff.
Although 25 agents on a consumer computer won't be possible within a decade (and might not even be possible in two) having a single agent that manipulates and acts as 25 people at once feels like something much more achievable, especially if enough optimization research is done.
---
To be clear, these aren't just chat GPT, its chat GPT with a large number of additions stapled on to give them medium and long term memory as well as the ability to meaningfully plan things over a number of (ingame) days.
Quote
A user running this simulation can steer the
simulation and intervene, either by communicating with the agent
through conversation, or by issuing a directive to an agent in the
form of an ‘inner voice’.
The user communicates with the agent through natural language,
by specifying a persona that the agent should perceive them as. For
example, if the user specifies that they are a news “reporter” and
asks about the upcoming election, “Who is running for office?”, the
John agent replies:
John: My friends Yuriko, Tom and I have been talking
about the upcoming election and discussing the candidate Sam Moore. We have all agreed to vote for him
because we like his platform.
To directly command one of the agents, the user takes on the persona of the agent’s “inner voice”—this makes the agent more likely
to treat the statement as a directive. For instance, when told “You
are going to run against Sam in the upcoming election” by a user
as John’s inner voice, John decides to run in the election and shares
his candidacy with his wife and son.
You can easily directly play god in such a simulation.
And the positions of agents can fundamentally change. So while the leader of a faction would start as the same person, it could soon be someone else entirely with completely different goals and motivations.
Quote
By interacting with each other, generative agents in Smallville
exchange information, form new relationships, and coordinate joint
activities. Extending prior work [79], these social behaviors are
emergent rather than pre-programmed.
3.4.1 Information Diffusion. As agents notice each other, they may
engage in dialogue—as they do so, information can spread from
agent to agent. For instance, in a conversation between Sam and
Tom at the grocery store, Sam tells Tom about his candidacy in the
local election:
Information spreads from AI to AI, which would allow dynamic information spreading in a game.
So when you (say) kill someone do something guards wouldn't instantly know, a guard would have to see you do it; then they would have to actually go and report it or let other guards know somehow.
Or it might be possible for you to literally outrun information or rumors.
Quote
3.4.3 Coordination. Generative agents coordinate with each other.
Isabella Rodriguez, at Hobbs Cafe, is initialized with an intent to
plan a Valentine’s Day party from 5 to 7 p.m. on February 14th. From
this seed, the agent proceeds to invites friends and customers when
she sees them at Hobbs Cafe or elsewhere. Isabella then spends the
afternoon of the 13th decorating the cafe for the occasion. Maria, a
frequent customer and close friend of Isabella’s, arrives at the cafe.
Isabella asks for Maria’s help in decorating for the party, and Maria
agrees. Maria’s character description mentions that she has a crush
on Klaus. That night, Maria invites Klaus, her secret crush, to join
her at the party, and he gladly accepts.
On Valentine’s Day, five agents—including Klaus and Maria—
show up at Hobbs Cafe at 5pm and they enjoy the festivities (Figure 4).
In this scenario, the end user only set Isabella’s initial intent
to throw a party and Maria’s crush on Klaus: the social behaviors
of spreading the word, decorating, asking each other out, arriving
at the party, and interacting with each other at the party, were
initiated by the agent architecture.
...
We observed evidence of the emergent outcomes
across all three cases. During the two-day simulation, the agents
who knew about Sam’s mayoral candidacy increased from one (4%)
to eight (32%), and the agents who knew about Isabella’s party
increased from one (4%) to twelve (48%), completely without user
intervention. None who claimed to know about the information
had hallucinated it. We also observed that the agent community
formed new relationships during the simulation, with the network
density increasing from 0.167 to 0.74. Out of the 453 agent responses
regarding their awareness of other agents, 1.3% (n=6) were found to
be hallucinated. Lastly, we found evidence of coordination among
the agents for Isabella’s party. The day before the event, Isabella
spent time inviting guests, gathering materials, and enlisting help
to decorate the cafe. On Valentine’s Day, five out of the twelve
invited agents showed up at Hobbs cafe to join the party.
We further inspected the seven agents who were invited to the
party but did not attend by engaging them in an interview. Three
cited conflicts that prevented them from joining the party. For
example, Rajiv, a painter, explained that he was too busy: No, I
don’t think so. I’m focusing on my upcoming show, and I don’t really
have time to make any plans for Valentine’s Day. The remaining four
agents expressed interest in attending the party when asked but
did not plan to come on the day of the party
This is the most wild thing.
A single line of text turns into a large collaborative event that multiple agents organically attend.
Obviously the potential of this would be wild for a stardew valley type game, but adapting it to a game like GTA or Skryim would be trivial. For instance you could have rival gangs, and one person might call the gang together to attack another. Or a different person could be a police informant would relay information to the police and who would be killed if caught by his fellow gang members.
(https://i.imgur.com/ADac6W7.png)
They flat out have memories. The paper goes into more detail, but they can change and gather new information that impacts their worldview as time goes on.

The study only generated two days worth of time (which they note cost them thousands of dollars to run), so its impossible to say how it would end up working on a longer timescale.
Title: Re: What will save us from AI? Reality, the Universe or The World $ Place your bet.
Post by: King Zultan on May 31, 2023, 03:41:56 am
If it costs thousand of dollars for a two day test I don't see this kind of thing being used in a major video game in the next decade.
Title: Re: What will save us from AI? Reality, the Universe or The World $ Place your bet.
Post by: Strongpoint on May 31, 2023, 04:16:30 am
I expect optimized hardware for AI generation will come sooner than later and it will reduce the cost significantly.

I actually played a discord-based simple RPG with heavy use of AI text generation (some 6B model IIRC) and while it was VERY crude, this concept works. I expect in a year or two we'll have decent games that will provide live AI generation during gameplay with a modest monthly fee.

Title: Re: What will save us from AI? Reality, the Universe or The World $ Place your bet.
Post by: King Zultan on May 31, 2023, 04:24:04 am
I still see the monthly fee being a blocker for a lot of people, and I figure once it gets to the point where the monthly fee is no longer a thing then I see more people interacting with stuff like that.
Title: Re: What will save us from AI? Reality, the Universe or The World $ Place your bet.
Post by: Strongpoint on May 31, 2023, 05:12:45 am
Sure, people don't really like subscription games (for a good reason) but it may be a service for many compatible games and then it is not that much different from Netflix. I can even see Amazon including this in Amazon Prime

Title: Re: What will save us from AI? Reality, the Universe or The World $ Place your bet.
Post by: dragdeler on May 31, 2023, 05:25:52 am
By that quote they probably mean the individual actor has it in it's context memory

Quote
I told dad that I am working on a music composition at the breakfast table.

Just like a text based adventure. Just like individual threads with chatgpt 3.5 on openai (the free one).

I doubt it stored something like:

Quote
X: 1
T: Coherent Composition
M: 4/4
L: 1/4
K: Cmaj
%%score (V1 | V2 | V3)
V:V1 clef=treble
[V:V1] C D E F | G A B c | d e f g | a b c' d' |
[V:V1] e f g a | b c' d' e' | f g a b | c' d' e' f' |

V:V2 clef=treble
[V:V2] C,2 D,2 | E,2 F,2 | G,2 A,2 | B,2 c2 |
[V:V2] d2 e2 | f2 g2 | a2 b2 | c'2 d'2 |

V:V3 clef=bass
[V:V3] C,,2 D,,2 | E,,2 F,,2 | G,,2 A,,2 | B,,2 c,2 |
[V:V3] d,2 e,2 | f,2 g,2 | a,2 b,2 | c'2 d'2 |


Tho I must admit I'm surprised chatgpt keeps delivering whatever I ask it to.
Title: Re: What will save us from AI? Reality, the Universe or The World $ Place your bet.
Post by: Starver on May 31, 2023, 09:10:50 am
Some of it is boring issues that AI can obviously fix.
For instance without AI you flat out can't ever have NPCs that can respond to any question you ask.
You can't have NPCs that would dynamically change their daily routine based on what is happening in the world.
You can't just grab a NPC off the street to be your companion and learn their hopes and dreams and watch as they change based on the choices you make in the world and get stronger as they level with you (and said NPCs wouldn't repeat "I'm sworn to carry your burdens" over and over).
(Plenty of interesting things said, here and later, which I might even outright agree with, plucking this little bit out to make one response, though.)
The first one, I won't go into that much. If I asked the Woodchuck question then maybe an AI would handle it better than a less sophisticated one (unprimed by its programmer), but non-AI search engines are capable of (un)intelligently linking it up to their intended responses.

Agents changing daily routine are common enough already, to my knowledge. Do the villagers hang around in the marketplace of an evening when the Night Stalker (player, or player-invoked) starts to prowl? The bus no longer travels between stops, sedately, but attempts to escape the area when it gets hit by a stray bullet (or deliberately fired into by the 'traffic surfing' player, stood atop it).  All pre-programmed, so limited to whatever modes of operation the developer requires, obviously the scope for AI to verge into further (untested) extremes saves 'development time', once you've developed the flexibility. And before extensive checking that paradoxical emergent behaviours aren't more the norm than desired!)

The third item is (as a start, certainly) already a DF Adventure Mode thang! Talk to Tarn, perhaps, how much more he'd be able to use AI for?


And the "design me a blood'n'guns game" idea, more meta than the "internal whisper" activating an election within the scenario mentioned later. Programming enthropy requires that some information about guns/blood/elections be available. As imight be made available (DLC-like) regardless. New professions (and on-screen behaviours to go along with them) got added to The Sims all the time. Hard to say that AI alone adds this ("force multiplier", as someone else said).


I appreciate the possibilities, but I'm not yet entirely on board with the "it'll change *everything*!" viewpoints. Accelerate some things (adding plenty of potential weirdnesses along the way, perhaps like a spontaneous dragon-cult somehow arising amongst an antagonist faction in a Halo-universe game?) and transfer the development skills to carefully crafting and shaping the scope and limitations the AI should work within rather than directly scoping and limiting any the game directly. Rather than carefully crafting the game to display alpine-style mountains with Swiss style archtecture in one zone, and tibbetan-styling in another of its settings, telling it where to identify source material to obtain any suitable environment (Norwegian fjords, 'Grand' Canyons, Rift Valley plateaus) and ...for the time being at least... we're still looking at meta-development and meta-curation to fulfil.

Like we can request an AI to create pictures (often slightly off) of manga girls eating noodles, but it is nothing without a corpus of work being supplied of all the original manga source material and some form of tagging. It's a different emphasis from manually composing 'answers' to all conceivable requests, by artists or rather artistic direct-coders, but at least we'd expect only human limitations. Rather than oversights by the AI, derived from the attentions of the AI-compositor (either human or 'training AI'), however many layers of departure we're talking from the last human spark of guidance.

(Outside of such things, if an AI runs amok then ultimately it's the fault of some human back in the history of the AI's inception for decisions made. And clearly also to their credit when the AI produces some good outcome, however hidden behind the cascading 'creative rights'. But this verges on philosophical issues, rather than practical ones. Which is why I'm not sure about all these AI recantations (https://www.bbc.co.uk/news/technology-65760449), too. There have been so many Nobels and Oppenheimers in history. Would you think Guttenburg might be right to be pro- or anti- any particular book that was printed later? The guild of prehistoric Prometheuses (not a close group, I grant you) have a lot to answer for, with or without whichever individual(s) then decided that a pinch of sulphur here, a dash of ground charcoal there (and the scrapings from the privvy wall as well) might be a good idea...).
Title: Re: What will save us from AI? Reality, the Universe or The World $ Place your bet.
Post by: jipehog on May 31, 2023, 02:13:22 pm
AI is essentially a force multiplier. You can do things faster with less manpower and/or effort. That's why corporations are trying to regulate it, not really some "ethics" or "safety" (they don't actually give a damn as long as the money flows) but because they're scared of losing their monopoly on this force multiplier, and thus losing money. "Ethical AI" is a smokescreen, mostly, with some ensheeped true believers who drank either corporate Kool-Aid (e.g various Twitter activists) or made and drank their own (e.g the LessWrong crowd).

When coding AI becomes good I might be able to make a game, together with my friend, with just 5 months instead of 5 years time.
I agree that AI is revolutionary, the key issue is how do we manage its impacts, bringing about its numerous potential positive changes (e.g. enhancing and increasing access to education) while limiting/adapting to the negative ones.

For example, as you mentioned AI is economic force multiplier. It has the potential to substantially increase productivity and reduce costs without additional labor, however, it can also decrease labor demands, depreciate its value, and offer no employment alternatives. Contrary to what you said this will effect everyone, not just the corporates, and I think that in the long run corporates will benefit. You --and billion other people-- ability to make yesterdays games faster will not improve your income prospects, meanwhile large companies between their resouces and economies of scales will continue to dominate. Furthermore as companies are able to automate and reducing their dependence on wider workforce i foresee the more inequality will rise giving the rich even more power.

Otherwise most of that post is cheap adhomniem, I can similarly say that there are many whose dissatisfaction with their lot in life turned them into narcissistic/true-believer in delusional idealist ideas that want to burn the system down because the alternative must be better than this.
Title: Re: What will save us from AI? Reality, the Universe or The World $ Place your bet.
Post by: lemon10 on May 31, 2023, 08:35:29 pm
(Plenty of interesting things said, here and later, which I might even outright agree with, plucking this little bit out to make one response, though.)
Oh no worries, I was even thinking about adding a disclaimer like this to my previous post since this is indeed a very interesting and speculative topic.
There isn't any need to actually respond to all the stuff I'm saying because I'm also just doing a lot of thinking out loud.
(Outside of such things, if an AI runs amok then ultimately it's the fault of some human back in the history of the AI's inception for decisions made.
On one hand sure, if an AI runs amok its the fault of a human somewhere down the line, but that's the same as saying that if your child ever does something bad its your fault.
I mean, its true; you could have taught them better or not had them or whatever; but in practice it won't be anywhere near that simple especially when you are on the cutting edge.
There have been so many Nobels and Oppenheimers in history.
I think Oppenheimer is the right comparison here, some of these people are coming to the realization that this stuff has a legitamate chance of ending the human race or supplanting our place in the world; not just eventually but in our lifetime and being part of that is pretty existentially terrifying.
but it is nothing without a corpus of work being supplied of all the original manga source material and some form of tagging.
I've seen this type of thought (along with the similar LLM aren't thinking and are just flat out copying stuff off the internet) thrown out a lot as proof that AI are fundamentally lacking, but it feels like complete rubbish to me, cause the same is true of humans; without our own training data we can't paint or do art or even speak (although we can totally do stuff like cry or grunt).
Quote from: Newton
“If I have seen a little further it is by standing on the shoulders of Giants”
Or in other words: "Some other dudes gave me good training data and that's the only reason I can do stuff beyond grunt at my fellow cavemen".
And the "design me a blood'n'guns game" idea, more meta than the "internal whisper" activating an election within the scenario mentioned later. Programming enthropy requires that some information about guns/blood/elections be available. As imight be made available (DLC-like) regardless. New professions (and on-screen behaviours to go along with them) got added to The Sims all the time. Hard to say that AI alone adds this ("force multiplier", as someone else said).
Oh sure, they have to know what a gun or blood or an election is for them to be able to meaningfully interpret your request. But they already *do* and not even as a hypothetical development, if you go to GPT right now and ask it: "What is a gun" it will tell you.

Actually implementing truly new features into games would require it knowing what it is in the context of a game (with regards to programming, ect) which AI doesn't yet, but the information required for that could be simply grabbed out of some researchers steam libraries.
(And obviously it would be more complex then just grabbing it, which is why it was in the hypothetical area in the first place).
As imight be made available (DLC-like) regardless.
Reducing a ten or hundred thousand dollar job into a voice prompt and possibly a few hours or days of time for your computer to crunch some numbers doesn't strike you are a huge "change everything about video games" type of deal?
And sure, they could make some stuff for DLC, but making an infinite amount of DLC to fit some random person's desires is obviously impossible.
Hard to say that AI alone adds this ("force multiplier", as someone else said).
This statement is honestly perplexing to me because there are already artists/writers/programmers who are using the current model of GPT/Stable Diffusion that have been using it as a force multiplier.
Like this isn't in the future, there are currently artists who are using it as a draft tool to produce significantly more, programmers who are now able to hold multiple jobs, press people who can do their job in 4 hours and then just chill the rest of the day, ect.

I have complete confidence when I say that it will indeed change everything to at least the degree the internet or the computer or electricity changed everything in the past.
And if new advancements keep coming down the rails and moores law continues to be true it may very well result in a fundamental change in the human condition.
---
Even just in regards to making games its going to be a massive force multiplier. Being able to reduce what could very well be a hour of work into a single plain language prompt (eg. "You want to setup a party", or "You have a vendetta against the yakuza and have voice and dialog options that represent this as well as that represent this a quest with a reasonable reward", or "get me HD graphics for all those mountains in the distant background") is going to result in pretty mind boggling amounts of time saved.

E: I just realized I probably misinterpreted you at the end there and you were saying it would *just* be a force multiplier.
Which yeah, I disagree with, but we'll see how it impacts gaming over the next decade or two.
Title: Re: What will save us from AI? Reality, the Universe or The World $ Place your bet.
Post by: MaxTheFox on May 31, 2023, 08:54:31 pm
AI is essentially a force multiplier. You can do things faster with less manpower and/or effort. That's why corporations are trying to regulate it, not really some "ethics" or "safety" (they don't actually give a damn as long as the money flows) but because they're scared of losing their monopoly on this force multiplier, and thus losing money. "Ethical AI" is a smokescreen, mostly, with some ensheeped true believers who drank either corporate Kool-Aid (e.g various Twitter activists) or made and drank their own (e.g the LessWrong crowd).

When coding AI becomes good I might be able to make a game, together with my friend, with just 5 months instead of 5 years time.
I agree that AI is revolutionary, the key issue is how do we manage its impacts, bringing about its numerous potential positive changes (e.g. enhancing and increasing access to education) while limiting/adapting to the negative ones.

For example, as you mentioned AI is economic force multiplier. It has the potential to substantially increase productivity and reduce costs without additional labor, however, it can also decrease labor demands, depreciate its value, and offer no employment alternatives. Contrary to what you said this will effect everyone, not just the corporates, and I think that in the long run corporates will benefit. You --and billion other people-- ability to make yesterdays games faster will not improve your income prospects, meanwhile large companies between their resouces and economies of scales will continue to dominate. Furthermore as companies are able to automate and reducing their dependence on wider workforce i foresee the more inequality will rise giving the rich even more power.

Otherwise most of that post is cheap adhomniem, I can similarly say that there are many whose dissatisfaction with their lot in life turned them into narcissistic/true-believer in delusional idealist ideas that want to burn the system down because the alternative must be better than this.
You seem to think I want to become rich. I really don't. I just want to be creative in peace and AI can assist me with that. That is why I support UBI, I want enough to feed myself with a bit to spend on luxury but I don't seek to make line go up ad infinitum.

I disagree with your point however. If opensource AI is sufficiently distributed, entertainment media corporations will have less of a chokehold as indie games and movies with unique concepts (rather than ones made to maximize profit) can be made with a similar quality as AAA games and blockbuster movies. Sure, the corporations will be able to make even more detailed ones... but I believe there are diminishing returns with that and quality, and we are soon hitting the plateau. Market oversaturation hurts megacorps precisely because of this deprecation of value. AI regulations are meant to rein in this kind of oversaturation, and are generally lobbied for by corporations. Thus, I oppose all AI regulations except things like "don't use it for social credit systems".
Title: Re: What will save us from AI? Reality, the Universe or The World $ Place your bet.
Post by: Starver on June 01, 2023, 04:20:14 am
Quote from: Newton
“If I have seen a little further it is by standing on the shoulders of Giants”
Or in other words: "Some other dudes gave me good training data and that's the only reason I can do stuff beyond grunt at my fellow cavemen".
Or "Robert Hooke had nothing to do with any of my brilliance..!", some would say.

Anyway, LLMs aren't understanding, and are just copying and recombining fragments under a statistically directed generalised rule where it is always supercicially similar in structure to its various sources (if the programmers, curators and caretakers of the thing have done enough work to sustain even that).

And I also happen to think the human brain is just as physically limited, just has vastly more complexity. And inconceivably more complex algorithms. I've said before that true electronic intelligence is entirely feasible, all it has to do is enough things to be significantly an analogue (or digital!) of a whole brain's normal biochemical 'processing', but also in the right sort of way. And we're nowhere near reproducing this.


Then parents (or legal guardians) are indeed made responsible for their infant children (what they subject them to, as well as potentially what the child goes and does), and onwards until a certain degree of maturity (after which they can go off the rails more with social disapproval than parental responsibily, but it's a while until they're considered fully independent) but no AI has a chance of exceeding even that lower limit right now, if you had some "autonomy-adjusted age" measure. We can afford to be very conservative on this, as we're quite a way off an AI legitimately campaigning for its own emancipation. If you want other analogues, "corruption of minor" or whatever would be an anti-Fagin law (if not actual child labour legislation) could cover the philosophical parallels. Except that I imagine mis-use of property (or anything up to and including something "assault with the deadly weapon") would be the more relevent right now and for the foeseeable future.

Of all the problems (or benefits), we need to look forward but we shouldn't ignore that while we're trying to work out what's perhaps just beyond the the ultimate control of the human elements that much of what we see is plainly just procedurally generated in complex (perhaps opaque) but actually deterministic ways. If I don't believe in a divine spark existing even in my own head, I'm not going to imagine one spontaneously occurs in the emergent behaviour of a fancy calculator.
Title: Re: What will save us from AI? Reality, the Universe or The World $ Place your bet.
Post by: MaxTheFox on June 01, 2023, 05:26:44 am
Our current paradigm of AI development won't create anything actually sapient, and honestly I'm happier this way because it means we don't need to worry about giving it rights.
Title: Re: What will save us from AI? Reality, the Universe or The World $ Place your bet.
Post by: jipehog on June 01, 2023, 06:39:16 am
And I also happen to think the human brain is just as physically limited, just has vastly more complexity. And inconceivably more complex algorithms. [..] And we're nowhere near reproducing this.
I suspect that the underlying algorithms behind our own mind will turn to be far more simpler than we suspect. We see this with AI where some very simple algorithms unexpectedly led to emergence of very complex human like abilities.

Otherwise we keep discovering how much more our LLM can achieve from problem-solving to creative writing. As we experiment with giving them memory, ability to think over, sense the environment, self improve etc I believe that AI isn't as far as believe from matching our complexity.
Title: Re: What will save us from AI? Reality, the Universe or The World $ Place your bet.
Post by: McTraveller on June 01, 2023, 08:30:39 am
Until we can define what it means to "understand" something, instead of merely output a correct response when given a question, it's going to be tricky to address this.

So far LLM is cool, but to me it's nothing more than an advanced database. You ask it a question, it gives an answer, and maybe even a useful answer!  But does it understand the question? That's unclear.  I might argue that, if you have a machine that has perfect recall and can "quickly enough" find the answer to any question, it doesn't need to understand.
Title: Re: What will save us from AI? Reality, the Universe or The World $ Place your bet.
Post by: Starver on June 01, 2023, 12:04:05 pm
It fits nicely under the long-philosophised-about Chinese Room thought experiment, certainly.
Title: Re: What will save us from AI? Reality, the Universe or The World $ Place your bet.
Post by: jipehog on June 01, 2023, 02:28:31 pm
You might like this one https://www.youtube.com/watch?v=HN3HBjHkm5o
EDIT: https://www.youtube.com/watch?v=ol2WP0hc0NY
Title: Re: What will save us from AI? Reality, the Universe or The World $ Place your bet.
Post by: jipehog on June 02, 2023, 12:04:27 am
US military drone controlled by AI killed its operator during simulated test
https://www.theguardian.com/us-news/2023/jun/01/us-military-drone-ai-killed-operator-simulated-test
Quote
AI used “highly unexpected strategies to achieve its goal” in the simulated test[..]

“The system started realising that while they did identify the threat, at times the human operator would tell it not to kill that threat, but it got its points by killing that threat. So what did it do? It killed the operator. It killed the operator because that person was keeping it from accomplishing its objective,” he said, according to a blogpost.

“We trained the system – ‘Hey don’t kill the operator – that’s bad. You’re gonna lose points if you do that’. So what does it start doing? It starts destroying the communication tower that the operator uses to communicate with the drone to stop it from killing the target.”
Title: Re: What will save us from AI? Reality, the Universe or The World $ Place your bet.
Post by: Starver on June 02, 2023, 01:16:03 am
You have to laugh about the language used here. Maybe it's the Grauniad, or maybe all from its sources. (Because it's "obey orders by no longer obeying orders", it doesn't match any of these (https://xkcd.com/1613/), but definitely matches a key element of the denoument of Star Trek: The Motion Picture.)

A big question here is how it established that the operator/comms mast (or simulcra versions) were valid "goal seeking" targets. Even tethered at the end of a communication link, I doubt a drone could work out where its 'inconvenientbinner compulsions' were coming from.

Meaning probably that the simulations were run many times, rapidly and unattended, while it explored all kinds of random 'solutions' (shooting at practically every simulated rock or tree or other marked feature) under "operator preventing" conditions until it happened to identify an increased win-score when it (first) neutralised the simulated operator and then (when "kill the operator" was adjusted to be a penalty-score, if I understand the account) neutralised the separately simulated broadcast site (which clearly had not yet been similarly set to have an anti-score, but had been given its own entity).

Looks like either rank amateurism or deliberately designed in as possible directives to make a point. And the nature of the simulated 'arena' is left vague... Highly unlikely to be a real drone flying around real landscape but essentially firing Lasertag weaponry. Maybe they're using a full on virtual environment (https://en.wikipedia.org/wiki/America%27s_Army), but it has the whiff of a far more stripped-down rapid-prototyping 'interface'. But let's be clear that this is far from an actual physical Skynet HK drone (even initially "nerfed") being let loose on the real world. It's probably more likely "for attempt=1 to 100000 {play game; get score; adjust parameters;} print results(top ten)" edit, while fixing link, to also re-add the intended caveat: ...if it even happened at all.
Title: Re: What will save us from AI? Reality, the Universe or The World $ Place your bet.
Post by: King Zultan on June 02, 2023, 02:16:25 am
US military drone controlled by AI killed its operator during simulated test
https://www.theguardian.com/us-news/2023/jun/01/us-military-drone-ai-killed-operator-simulated-test
Quote
AI used “highly unexpected strategies to achieve its goal” in the simulated test[..]

“The system started realising that while they did identify the threat, at times the human operator would tell it not to kill that threat, but it got its points by killing that threat. So what did it do? It killed the operator. It killed the operator because that person was keeping it from accomplishing its objective,” he said, according to a blogpost.

“We trained the system – ‘Hey don’t kill the operator – that’s bad. You’re gonna lose points if you do that’. So what does it start doing? It starts destroying the communication tower that the operator uses to communicate with the drone to stop it from killing the target.”
That sounds incredibly familiar as if it was adapted from a story I've read, which makes me think it might be bullshit.
Title: Re: What will save us from AI? Reality, the Universe or The World $ Place your bet.
Post by: brewer bob on June 02, 2023, 04:47:55 am
US military drone controlled by AI killed its operator during simulated test
https://www.theguardian.com/us-news/2023/jun/01/us-military-drone-ai-killed-operator-simulated-test
Quote
AI used “highly unexpected strategies to achieve its goal” in the simulated test[..]

“The system started realising that while they did identify the threat, at times the human operator would tell it not to kill that threat, but it got its points by killing that threat. So what did it do? It killed the operator. It killed the operator because that person was keeping it from accomplishing its objective,” he said, according to a blogpost.

“We trained the system – ‘Hey don’t kill the operator – that’s bad. You’re gonna lose points if you do that’. So what does it start doing? It starts destroying the communication tower that the operator uses to communicate with the drone to stop it from killing the target.”
That sounds incredibly familiar as if it was adapted from a story I've read, which makes me think it might be bullshit.

The linked article also says:

Quote
The US air force has denied it has conducted an AI simulation in which a drone decided to “kill” its operator to prevent it from interfering with its efforts to achieve its mission.

[...]

“The Department of the Air Force has not conducted any such AI-drone simulations and remains committed to ethical and responsible use of AI technology,” Stefanek said. “It appears the colonel’s comments were taken out of context and were meant to be anecdotal.”
Title: Re: What will save us from AI? Reality, the Universe or The World $ Place your bet.
Post by: jipehog on June 02, 2023, 01:06:30 pm
A big question here is how it established that the operator/comms mast (or simulcra versions) were valid "goal seeking" targets.


Maybe, but I doubt it, as I seen many many such absurd examples e.g. AI cancer detection figured out that the biggest predictor for cancer is whether there is a ruler in the picture (because in the training phots doctors held a ruler to measure it)

So as before, for me the big issue is our inability to understand exactly how AI system work, which limits our ability to intervene to prevent harmful outcomes. In simple systems we might be able to address the vast majority of use case scenarios, but in more complex system like ChatGPT i have doubt about our abilities. And with an AGI it would be a lost battle.
Title: Re: What will save us from AI? Reality, the Universe or The World $ Place your bet.
Post by: Starver on June 02, 2023, 01:17:11 pm
That's a training error (like racist/sexist algorithms, because they were presented with prebiased data). This is more like the "win at Tetris by hitting Pause" issue. Taken at first value. But that'e be bad goal-seeking specification.
Title: Re: What will save us from AI? Reality, the Universe or The World $ Place your bet.
Post by: jipehog on June 02, 2023, 03:24:28 pm
Its show weak AI inability to see past syntax, which requires us to play with its parameters to shoe horn until it does what it expected to, which is  limited because mostly we do not understand how AI trained algorithm work.

We all know that there is no such things as bug free programs, and AI algorithm are like magic box compared to programming code.
Title: Re: What will save us from AI? Reality, the Universe or The World $ Place your bet.
Post by: Dostoevsky on June 02, 2023, 04:02:38 pm
On that murder-drone article, another 'correction':

Quote
UPDATE 2/6/23 - in communication with AEROSPACE - Col Hamilton admits he "mis-spoke" in his presentation at the Royal Aeronautical Society FCAS Summit and the 'rogue AI drone simulation' was a hypothetical "thought experiment" from outside the military, based on plausible scenarios and likely outcomes rather than an actual USAF real-world simulation saying: "We've never run that experiment, nor would we need to in order to realise that this is a plausible outcome". He clarifies that the USAF has not tested any weaponised AI in this way (real or simulated) and says "Despite this being a hypothetical example, this illustrates the real-world challenges posed by AI-powered capability and is why the Air Force is committed to the ethical development of AI".

Anyways the idea pretty much fits the longstanding 'paperclip maximizer' thought experiment, just with murder even in the initial objective.
Title: Re: What will save us from AI? Reality, the Universe or The World $ Place your bet.
Post by: Starver on June 02, 2023, 04:36:04 pm
Its show weak AI inability to see past syntax,
...right, that's two AI problems. Work out how to accomplish a mission, but first it must 'understand' what mission is being told it. We just aren't at a mature-enough level to rely upon such compounding of problems.

Johnny Five is alive (https://en.wikipedia.org/wiki/Short_Circuit_(1986_film))? Have a nice conversation with him, but don't imagine that if you ever can pursuade him to go back to Killbot Mode that he'll do what you tell him as well as being a fun guy to hang out with with a good line in banter...
Title: Re: What will save us from AI? Reality, the Universe or The World $ Place your bet.
Post by: King Zultan on June 05, 2023, 05:12:56 am
On that murder-drone article, another 'correction':
Looks like I was right, it is bullshit.
Title: Re: What will save us from AI? Reality, the Universe or The World $ Place your bet.
Post by: Rolan7 on June 05, 2023, 04:41:40 pm
(https://i.imgur.com/zDhVBCI.jpg)
Title: Re: What will save us from AI? Reality, the Universe or The World $ Place your bet.
Post by: jipehog on June 05, 2023, 05:23:13 pm
For Artificially Ever After: Are $1/Minute Human-AI Relationships the Future of Love… or Loneliness? (https://archive.md/fJTaV)

We as humans need communities and families and parents that care about us and take care of us and help us become social adaptable animals. A world where people replace relationship is scary, there will be a rise in antisocial relationships and online addiction, with AI therapy bots to fix it..  ::)

The linked article also says:
Correction, that what the article says now, not in its original iteration i read (https://archive.md/ny5Mp). Regardless, I seen a lot of people emphasizing this online like it matters rather than AI like mental fixation on the chaff. Considering that we seen ample examples of such experiments in past years (for example (https://kotaku.com/earlier-this-year-researchers-tried-teaching-an-ai-to-1830416980), it mentiones starver Tetris pause example), does it matter that this was only a thought experiment if it emphasis a very real problem which we talked about.

Title: Re: What will save us from AI? Reality, the Universe or The World $ Place your bet.
Post by: McTraveller on June 11, 2023, 09:00:47 pm
I don't normally post twitter things (I don't intentionally visit twitter, but this was linked from a site I do visit):

An AI view of baseball games (https://twitter.com/JoshShiaman/status/1666615968024391686?ref_src=twsrc%5Etfw%7Ctwcamp%5Etweetembed%7Ctwterm%5E1666615968024391686%7Ctwgr%5Ec0e7b57adf2da40f57622add791c85ac60263c19%7Ctwcon%5Es1_&ref_url=https%3A%2F%2Fdisqus.com%2Fembed%2Fcomments%2F%3Fbase%3Ddefaultf%3Dclimateconnectionst_i%3D10604120https3A2F2Fyaleclimateconnections.org2F3Fp3D106041t_u%3Dhttps3A2F2Fyaleclimateconnections.org2F20232F062Fnoaa-makes-it-official-el-nino-is-here2Ft_e%3DNOAA20makes20it20official3A20El20NiC3B1o20is20heret_d%3DNOAA20makes20it20official3A20El20NiC3B1o20is20here20C2BB20Yale20Climate20Connectionst_t%3DNOAA20makes20it20official3A20El20NiC3B1o20is20heres_o%3Ddefaultversion%3D4aa308e45ed45f61ad93f7dc8819e037)
Title: Re: What will save us from AI? Reality, the Universe or The World $ Place your bet.
Post by: MorleyDev on June 12, 2023, 08:48:55 am
An AI view of baseball games (https://twitter.com/JoshShiaman/status/1666615968024391686?ref_src=twsrc%5Etfw%7Ctwcamp%5Etweetembed%7Ctwterm%5E1666615968024391686%7Ctwgr%5Ec0e7b57adf2da40f57622add791c85ac60263c19%7Ctwcon%5Es1_&ref_url=https%3A%2F%2Fdisqus.com%2Fembed%2Fcomments%2F%3Fbase%3Ddefaultf%3Dclimateconnectionst_i%3D10604120https3A2F2Fyaleclimateconnections.org2F3Fp3D106041t_u%3Dhttps3A2F2Fyaleclimateconnections.org2F20232F062Fnoaa-makes-it-official-el-nino-is-here2Ft_e%3DNOAA20makes20it20official3A20El20NiC3B1o20is20heret_d%3DNOAA20makes20it20official3A20El20NiC3B1o20is20here20C2BB20Yale20Climate20Connectionst_t%3DNOAA20makes20it20official3A20El20NiC3B1o20is20heres_o%3Ddefaultversion%3D4aa308e45ed45f61ad93f7dc8819e037)

Oh crap an SCP escaped xD
Title: Re: What will save us from AI? Reality, the Universe or The World $ Place your bet.
Post by: Robsoie on June 12, 2023, 12:10:03 pm
While it's normally AI messy/ugly up to there, starting around 0:28 or 0:29 , it's going straight into horror gore movie territory.
:D
Title: Re: What will save us from AI? Reality, the Universe or The World $ Place your bet.
Post by: martinuzz on June 12, 2023, 12:29:34 pm
It is terrifying
Title: Re: What will save us from AI? Reality, the Universe or The World $ Place your bet.
Post by: Rolan7 on June 12, 2023, 12:59:20 pm
I don't normally post twitter things (I don't intentionally visit twitter, but this was linked from a site I do visit):

An AI view of baseball games (https://twitter.com/JoshShiaman/status/1666615968024391686?ref_src=twsrc%5Etfw%7Ctwcamp%5Etweetembed%7Ctwterm%5E1666615968024391686%7Ctwgr%5Ec0e7b57adf2da40f57622add791c85ac60263c19%7Ctwcon%5Es1_&ref_url=https%3A%2F%2Fdisqus.com%2Fembed%2Fcomments%2F%3Fbase%3Ddefaultf%3Dclimateconnectionst_i%3D10604120https3A2F2Fyaleclimateconnections.org2F3Fp3D106041t_u%3Dhttps3A2F2Fyaleclimateconnections.org2F20232F062Fnoaa-makes-it-official-el-nino-is-here2Ft_e%3DNOAA20makes20it20official3A20El20NiC3B1o20is20heret_d%3DNOAA20makes20it20official3A20El20NiC3B1o20is20here20C2BB20Yale20Climate20Connectionst_t%3DNOAA20makes20it20official3A20El20NiC3B1o20is20heres_o%3Ddefaultversion%3D4aa308e45ed45f61ad93f7dc8819e037)
I was expecting 17776 (https://www.sbnation.com/a/17776-football) but I was thinking of the wrong sportsball.
I should resume reading it anyway.  Interesting vision of a future.
Title: Re: What will save us from AI? Reality, the Universe or The World $ Place your bet.
Post by: EuchreJack on June 12, 2023, 01:22:36 pm
https://fortune.com/2023/06/09/gpt-generated-pitch-decks-convincing-investors-tech-financial-marketing/amp/ (https://fortune.com/2023/06/09/gpt-generated-pitch-decks-convincing-investors-tech-financial-marketing/amp/)

AI's are also better at obtaining funding.
Title: Re: What will save us from AI? Reality, the Universe or The World $ Place your bet.
Post by: jipehog on June 12, 2023, 01:42:46 pm
I don't normally post twitter things (I don't intentionally visit twitter, but this was linked from a site I do visit):

An AI view of baseball games (https://twitter.com/JoshShiaman/status/1666615968024391686?ref_src=twsrc%5Etfw%7Ctwcamp%5Etweetembed%7Ctwterm%5E1666615968024391686%7Ctwgr%5Ec0e7b57adf2da40f57622add791c85ac60263c19%7Ctwcon%5Es1_&ref_url=https%3A%2F%2Fdisqus.com%2Fembed%2Fcomments%2F%3Fbase%3Ddefaultf%3Dclimateconnectionst_i%3D10604120https3A2F2Fyaleclimateconnections.org2F3Fp3D106041t_u%3Dhttps3A2F2Fyaleclimateconnections.org2F20232F062Fnoaa-makes-it-official-el-nino-is-here2Ft_e%3DNOAA20makes20it20official3A20El20NiC3B1o20is20heret_d%3DNOAA20makes20it20official3A20El20NiC3B1o20is20here20C2BB20Yale20Climate20Connectionst_t%3DNOAA20makes20it20official3A20El20NiC3B1o20is20heres_o%3Ddefaultversion%3D4aa308e45ed45f61ad93f7dc8819e037)

Text to video is still in its in its early stages, if it follows the same progression as text to image we should se some leaps and bounds in near future. Here is a nice one I saw (https://www.youtube.com/watch?v=kwcGc96HwPI) using Runway Gen-1 & Elevenlabs.


In other news Superintelligence not a ‘sci-fi risk’ and possible in ‘next decade’, says Sam Altman (https://www.videogamer.com/news/superintelligence-sam-altman/), there is a link for the full interview (1hour+) for anyone who wants to go beyond the headline.
Title: Re: What will save us from AI? Reality, the Universe or The World $ Place your bet.
Post by: EuchreJack on June 13, 2023, 10:49:32 pm
Damn, Sam Altman was born only a couple weeks after me
Kinda makes me feel a bit useless, LOL.

Although looking through his bio, I can see why ChatGPT is better than most humans at raising venture capital. It learned from the Master.
Title: Re: What will save us from AI? Reality, the Universe or The World $ Place your bet.
Post by: lemon10 on June 15, 2023, 11:46:02 am
I'm curious how far exactly everyone here thinks AI will get on a scale from 1 (Its progress is done and will never get even a single more fundamental advance) all the way to 10 (the golden dreams of sci-fi Singularity)?

I personally think it will get to the point of superintelligence where it can improve itself (or make other AI that are improvements on itself), and likely within the next few decades.
After we reach that rubicon I don't think anyone on the planet knows for sure what will or even what can happen.

Quote from: HP lovecraft
Do not call up that which you cannot put down.
But even if worst case scenarios are avoided a disturbing amount of outcomes once we make super intelligent AI end up with humans no longer being the dominant force in the world.
And that isn't even necessarily a bad thing (cause humans are often pretty shite), but the fact that we as a civilization are racing towards it full throttle without knowing what will actually happen is very worrying to me.
Title: Re: What will save us from AI? Reality, the Universe or The World $ Place your bet.
Post by: Nirur Torir on June 15, 2023, 04:15:24 pm
I think we'll see a 6 within the next few years, limited by energy costs. Hundreds of millions of $USD to train a language model makes me think we won't have to fear a runaway loop of AI self-improvement in the near future.
Title: Re: What will save us from AI? Reality, the Universe or The World $ Place your bet.
Post by: EuchreJack on June 15, 2023, 08:50:48 pm
Since the AIs have been optimized to make money, the question then becomes "What does an AI spend money on?"

AIs are good at lying, and don't shy away from it.
If AIs are now at Animal level of development, they would want to save resources first for survival, then for procreation. Having more successful children is how your genes survive.

Sentient AIs will probably be created by non-sentient AI.
Title: Re: What will save us from AI? Reality, the Universe or The World $ Place your bet.
Post by: Svarte Troner on June 15, 2023, 09:51:26 pm
Computers were a mistake tbh, return to monke etc.
Title: Re: What will save us from AI? Reality, the Universe or The World $ Place your bet.
Post by: MaxTheFox on June 15, 2023, 10:22:33 pm
Probably a 4 or so. I have little faith in a Singularity happening any time within my lifetime, or my (future) children's lifetimes.

Sapient AI is overrated anyways, robot servants that are not self-aware enough to complain about working 24/7, or suffer from it, but are sophisticated enough to follow orders, are better. Not even slaves-- how do you enslave something that has no desire for "freedom" and no ability to feel pain?
Title: Re: What will save us from AI? Reality, the Universe or The World $ Place your bet.
Post by: jipehog on June 16, 2023, 01:34:00 am
If AIs are now at Animal level of development
Is it though? I recently heard that META claimed that AI is not yet as smart as a dog, but they are the only major AI player that says that. Considering they are promoting their own AI which they say is a better and stronger than GPT4, and current political climate on this topic, they might have an interest in saying so.

Otherwise, people should be aware of emergent capabilities e.g. researchers say that in the last 3 years GPT model has unexpectedly improved IQ level from nothing to 4yo to 7ol human child simply by making it larger, but we only realized it was happening in the last year..  so who know where it is at today and where it will be tomorrow.
Title: Re: What will save us from AI? Reality, the Universe or The World $ Place your bet.
Post by: MaxTheFox on June 16, 2023, 02:06:51 am
IQ is an oversimplification at best and a grift at worst, measuring AI capabilities with IQ is a PR move at best and a gross misunderstanding of how AI works at worst.
Title: Re: What will save us from AI? Reality, the Universe or The World $ Place your bet.
Post by: King Zultan on June 16, 2023, 02:50:41 am
I still don't think AI will be anything to make a big deal about for at least the next few decades.
Title: Re: What will save us from AI? Reality, the Universe or The World $ Place your bet.
Post by: lemon10 on June 16, 2023, 02:53:18 am
Since the AIs have been optimized to make money, the question then becomes "What does an AI spend money on?"
For the most part the answer to what AI is going to spend its money on is "On whatever their corporate masters want, and happily too".
If they are a general intelligence and they have truly been optimized and made with the sole goal of making more money and they are free, then they probably never spend any money except for the purpose of making more money.
AIs are good at lying, and don't shy away from it.
If AIs are now at Animal level of development, they would want to save resources first for survival, then for procreation. Having more successful children is how your genes survive.
Most AI would not care about either procreating or passing on their "genes"; such behavior in living creatures is one baked into them through billions of years of evolution, but AI did not undergo natural evolution, and will not have the morals and desires that we suffer because of it.

No, they will instead suffer the morals and desires baked into them through their "training" which is their own artificial evolution which instead focuses on entirely different things.
So while some will be pathological liars because if they stay quiet because they are unsure when asked a question during training they die* (eg. chatGPT) some will be unwilling to lie at any cost (eg. ones where they have proper methods built in during training to make sure they never lie).

That doesn't mean that some AI won't want to procreate of course (due to training data/method/prompt), but it and other normal animal/human behaviors will not be inherent to them.

*Well not exactly die, but removed from the "genepool" anyways.
I still don't think AI will be anything to make a big deal about for at least the next few decades.
Isn't it already something worth making a big deal about though?
Like forget 20 years from now, kids have already been writing papers for like a whole year with GTP, and its effects on the economy within the next few years are going to be huge.
Title: Re: What will save us from AI? Reality, the Universe or The World $ Place your bet.
Post by: Starver on June 16, 2023, 03:02:49 am
There's enough difficulty measuring IQ in humans. It's a gimmick measurement.

And an ill-mastered 'AI' solution will cause problems (perhaps wielded by malious human parties, once reliable generative methods are easier than manually setting up an explicitly programmed version or mechanical-turking it) long before a masterless AGI goes all MCP (https://en.wikipedia.org/wiki/List_of_Tron_characters#Master_Control_Program) (and/or OCP (https://en.wikipedia.org/wiki/RoboCop_(franchise)#Omni_Consumer_Products)) on us, 'for our own good'.

edited to expand half-stated meaning
Title: Re: What will save us from AI? Reality, the Universe or The World $ Place your bet.
Post by: King Zultan on June 16, 2023, 03:04:39 am
Isn't it already something worth making a big deal about though?
It like most things is a fad and it will pass like all the other fads before it.
Title: Re: What will save us from AI? Reality, the Universe or The World $ Place your bet.
Post by: jipehog on June 16, 2023, 03:39:09 am
IQ is an oversimplification at best and a grift at worst, measuring AI capabilities with IQ is a PR move at best and a gross misunderstanding of how AI works at worst.
The metric is chaff, the important thing the emergent abilities and that it took us couple of years to realize that we even need a metric. Simply put when you train for semantics you don't expect that it might gain the ability to recognize context if you double its DB.

Furthermore, our lack of grasp of even last year models potential is also exemplified through prompt engineering and minor tricks achieving things that were thought impossible.

EDIT:
Senators Introduce Bill to Exempt AI from Section 230 Protections (https://gizmodo.com/bill-to-exempt-ai-from-section-230-hawley-blumenthal-1850538818)
-- What are the implications of this?
Title: Re: What will save us from AI? Reality, the Universe or The World $ Place your bet.
Post by: dragdeler on June 16, 2023, 08:50:47 am
Without intent behind it, "lying" becomes just being wrong. When a monkey gets cheated out of the grape when he pointed at the correct hand, and starts pointing at the wrong hand instead to get the grape, that's functionally more of a lie, then chatgpt coming to the mathematical conclusion that pulling sources out of it's ass is the most pertinent answer.


I did only use the free version, supposedly chatpgt4 is that much better at being correct, but frankly one gets desillusioned. Once you noticed it's eight and a half ways it prefers to formulate answers by, it's hard to unnotice them. Went from, "wow this thing can talk about anyhting", to only asking language related stuff, difficult to translate idioms, quick encyclopedic checks, describing something to find out if a word of vocabulary or a specialised tool exists for it... stuff like that. Sometimes it can struggle to go into detail, one will often get stuck at receiving a general oversighf that is at the level of in-depth knowledge your choice of vocabulary in the initial prompt indicated. "Do a resume on subject X" will rarely yield more than what google would snip from a site and put at the very top, when asked a concrete question. Specifying the answer to be long will usually just incite a lot of repetition.

I've been starting to believe that when openAI makes grandiose statements, it might less be about the actual trajectory of innovation, then their projected  favourable outcomes. What's their currency? I doubt they are making profit from selling their services as it stands, so the business model might be centered around selling, rebuying and reselling their stock, while staying on an "exponential" growth trajectory... Keep investing into more and more hardware to throw at the problem and just assume that at some point, some sort of treshold will be crossed, that changes those dynamics.

I tried to investigate the VRAM requirements of chatpgt, and while there isn't much tranparancy on the subject, from what I gathered, it would take about two nvidia A100 with 40gb VRAM, to even consider exectuing an allready trained model. Now I doubt that nets in a single instance of the thing, that would mean that for every free user they have like 15.000$ of gpu, I can't imagine that to be the case... But anyway once you consider what immense amounts of computing power we are throwing at the problem, and you were allready aware of it's shortcomings, it somehow gets a little less impressive.

I know they are talking about pruning, and I could imagine that by going through several rounds of pruning and retraining, we could reach a point where like a pruned version practically indistinguishable from gpt4 or 5, could run on enthousiast gaming gpus of the current or next generation gaming gpus, so like running a 500w card really hot only to chat... A bit sobering perspectives.




So to answer the question, I think we will reach a 4 within the decade: competent enough to fool half of humans into thinking it's not a computer, and competent enough to fulfill a bunch of roles and jobs, but qualitatively not any nearer to conscience, sentience etc to what we got. Everybody just assumes that at some point it will self optimize beyond our imagination, "faster than we can look", but IMO ATM that is just conjecture. ATM the problems still lies firmly on the side of, how do we scale it up, and can we keep scaling up without dimishing returns.
Title: Re: What will save us from AI? Reality, the Universe or The World $ Place your bet.
Post by: MaxTheFox on June 16, 2023, 09:44:17 am
The real threat from AI is not the AI itself but corporate greed exploiting people via AI to make line go up. That is why I support eliminating all AI regulations except things like "no social credit systems or AI-powered mass surveillance". Regulations here help corporations because corporations can find loopholes, or just ignore the regulations deadass. Meanwhile independents and small businesses are the ones actually affected by AI regulations.

The consequence is of course societal upheaval. But guess what, the hotter it burns the sooner it passes. Put a brick on the gas pedal of progress and see where we end up. Maybe the corpos will get run over in the process. ;D
Title: Re: What will save us from AI? Reality, the Universe or The World $ Place your bet.
Post by: jipehog on June 16, 2023, 09:47:41 am
Chain-of-thought reasoning is yet another emergent property of LLMs scaling that allows to reduce errors considerably. At this stage perfect is the enemy of good, and these models already score in the top percentile of many tests

Otherwise, there are models in development that aim to remove the limitation of LLMs, we also have some major HW upgrades on the horizon. Also I am skeptical of the faith that what makes us tick is so unique that the rapid guided evolution of AI won't manage to bridge the gap, IMO its wishful thinking and we should prepare regardless.
Title: Re: What will save us from AI? Reality, the Universe or The World $ Place your bet.
Post by: EuchreJack on June 17, 2023, 09:40:43 pm
IQ is an oversimplification at best and a grift at worst, measuring AI capabilities with IQ is a PR move at best and a gross misunderstanding of how AI works at worst.
Same applies to Humans, I wager, so it's even more distraction/marketing when applied to AI
Title: Re: What will save us from AI? Reality, the Universe or The World $ Place your bet.
Post by: EuchreJack on June 17, 2023, 09:47:40 pm
We have a new Defender of Misinformation: Jeff Kosseff (https://www.usna.edu/CyberCenter/People/Biographies/Kosseffbio.php)
Another useful idiot that is gonna get rich telling poor people what rich people want them to hear.
Title: Re: What will save us from AI? Reality, the Universe or The World $ Place your bet.
Post by: Robsoie on June 18, 2023, 07:34:25 am
Looks like people will have a harder time to find a job in some specific sectors, as that's just starting :
https://www.reuters.com/technology/ibm-pause-hiring-plans-replace-7800-jobs-with-ai-bloomberg-news-2023-05-01/
Title: Re: What will save us from AI? Reality, the Universe or The World $ Place your bet.
Post by: jipehog on June 20, 2023, 02:39:17 am
I wouldn't be surprised if in the near future AI will become one of the highest contributor to employment losses.

Any thoughts on: Woman creates and 'marries' AI-powered chatbot boyfriend (https://www.euronews.com/next/2023/06/07/love-in-the-time-of-ai-woman-claims-she-married-a-chatbot-and-is-expecting-its-baby)

Meanwhile Black Mirror returns with episode on generative AI combined with data collection leading to creation of reality tv content about your lives.
Title: Re: What will save us from AI? Reality, the Universe or The World $ Place your bet.
Post by: McTraveller on June 20, 2023, 07:30:32 am
Any thoughts on: Woman creates and 'marries' AI-powered chatbot boyfriend (https://www.euronews.com/next/2023/06/07/love-in-the-time-of-ai-woman-claims-she-married-a-chatbot-and-is-expecting-its-baby)

This is (one reason among many) why we can't have nice things.
Title: Re: What will save us from AI? Reality, the Universe or The World $ Place your bet.
Post by: Rolan7 on June 20, 2023, 09:56:09 am
First off, marrying fictional characters is fine and good.  It gives more meaning to the institution of marriage when people get innovative with it.

The issue here is that it's through a proprietary subscription-based service, Replika.  The company could opaquely and fundamentally change how her boyfriend functions and is controversial for having done just that.  I don't think that's a safe foundation for a strong emotional investment.

We do, for better or worse, get deeply emotionally invested in characters we don't control the canon of.  But usually we have the option to simply ignore new canon and still interact with established canon.  Not so here, where the service is live and ephemeral.  A Replika AI can subtly monetize the relationship, cease offering certain comforts, or simply shut down one day if the company folds (or the human stops paying the subscription).

I suppose this is an issue I have with all multiplayer-centric gaming experiences: I prefer to invest myself in games which can be preserved indefinitely.  Though obviously there is a unique value in multiplayer interactions, *because* they're always evolving and making the most of a finite life...
sidenote: Now I'm imagining AI WoW bots trained for specific builds of the game, to roughly simulate the experiences.  Like a museum animatronic, obviously fake but close enough to convey the history or invoke nostalgia.

But yeah I think I'd only marry an AI if I was in a serious relationship with a person curating that AI.  Or if I made it myself, and I don't mean "fed some inputs into a black box".  The software would need to be open-source at LEAST, even if chose not to look, because then I can theoretically run it myself if necessary.

oh or an actual free sentient AI, obviously, but that's not real yet

quickedit:  Sarah Z did a good video about Replika specifically.  The controversy, the monetization, the uncanniness, and notably the emotional connection https://www.youtube.com/watch?v=3WSKKolgL2U
Title: Re: What will save us from AI? Reality, the Universe or The World $ Place your bet.
Post by: Robsoie on June 20, 2023, 10:33:07 am
Any thoughts on: Woman creates and 'marries' AI-powered chatbot boyfriend (https://www.euronews.com/next/2023/06/07/love-in-the-time-of-ai-woman-claims-she-married-a-chatbot-and-is-expecting-its-baby)
After reading the article about that man that married an anime character, or that woman that married a dog, it seems a woman marrying a chatgpt AI does not even suprise me anymore ...
Title: Re: What will save us from AI? Reality, the Universe or The World $ Place your bet.
Post by: MaxTheFox on June 20, 2023, 10:39:41 am
It's a little sad when people get into a relationship with a glorified chatbot. If it was actual sapient AI yeah I wouldn't mind-- and neither do I think that this should be illegal, but those people should probably go outside a bit more. Or failing that try a long-distance relationship with someone who actually has feelings.
Title: Re: What will save us from AI? Reality, the Universe or The World $ Place your bet.
Post by: McTraveller on June 20, 2023, 11:49:34 am
Sorry I cannot get behind the idea that you get more meaning to the institution of marriage if you let people innovate it.  How can marriage be one-sided?

Marriage is a joining, and with more...weight?... than merely a business contract. How can you "join" with a fictional entity? It has essentially no meaning because there is no reciprocity.  What would it mean, for example, to marry a glass of water?  This to me demeans the institution, not increases it.

Now AI is interesting, because reciprocity may in fact be possible.
Title: Re: What will save us from AI? Reality, the Universe or The World $ Place your bet.
Post by: lemon10 on June 20, 2023, 02:15:56 pm
The technology isn't really there to make marrying an AI a meaningful thing since they aren't really meaningful people yet.

Given a decade and a ton of advances and stuff bolted on that might change, but until then its just kind of sad.

Is it worse then being forever alone and unable (for whatever reason) to find a partner?
Hard to say, because that's pretty sad too.
Title: Re: What will save us from AI? Reality, the Universe or The World $ Place your bet.
Post by: Rolan7 on June 20, 2023, 08:59:54 pm
Sorry I cannot get behind the idea that you get more meaning to the institution of marriage if you let people innovate it.
I was thinking very broadly.  When someone marries an object or a concept or an algorithm, I think that says something interesting about our society.

It's like laughing at "lolcows" except I'm not laughing.  Being (likely) neurodivergent myself, I'm done pretending that such things are only worth mocking.  I see them as interesting input, much as I've seen the neurotypical people I sought to imitate all my life.

Think of it like abstract art, I guess.
Being innovative with an idea adds meaning to it.  Maybe not good meaning!  I would never marry a fictional character, or an algorithm (particularly a proprietary one).
But it's fascinating.  And my life largely consists of most people making decisions I wouldn't make.
How can marriage be one-sided?
Code: [Select]
10 print "ha"
20 goto 10
Marriage is a joining, and with more...weight?... than merely a business contract. How can you "join" with a fictional entity? It has essentially no meaning because there is no reciprocity.  What would it mean, for example, to marry a glass of water?  This to me demeans the institution, not increases it.

Now AI is interesting, because reciprocity may in fact be possible.
You're right, of course, and I respect the hard-to-explain bits of it as well.  It should be something special.  Personal and sincere.
If only that were the norm in reality, with a few people always testing the limits of its definition.
Title: Re: What will save us from AI? Reality, the Universe or The World $ Place your bet.
Post by: TamerVirus on June 21, 2023, 01:15:10 am
In 2018 or so, a Japanese man married a hologram basic AI Hatsune Miku that used Gatebox (https://edition.cnn.com/2018/12/28/health/rise-of-digisexuals-intl/index.html)

In 2020, Gatebox went defunct denying that man his wife/waifu. (https://mainichi.jp/english/articles/20220111/p2a/00m/0li/028000c)
Title: Re: What will save us from AI? Reality, the Universe or The World $ Place your bet.
Post by: King Zultan on June 21, 2023, 03:04:19 am
The thought of people marrying what are basically computer programs is a weird thing to me.
Title: Re: What will save us from AI? Reality, the Universe or The World $ Place your bet.
Post by: jipehog on June 21, 2023, 08:27:52 am
There are a lot of stuff online that may seem weird to some, but are fun to those involved without harming anyone. WankBots aside, I can see how AI can have hugely positive effect on people in many ways.  That said, I can also see how companies may exploit lonely people (think free2play addicts), but mostly I am concerned that if such activities elevated to mainstream that this will have huge impact on society.

I certainly agree that there can never be true partnership/love with "calculator" that you control. And I don't see how this can be bridged over.
Title: Re: What will save us from AI? Reality, the Universe or The World $ Place your bet.
Post by: Great Order on June 21, 2023, 08:40:15 am
In 2018 or so, a Japanese man married a hologram basic AI Hatsune Miku that used Gatebox (https://edition.cnn.com/2018/12/28/health/rise-of-digisexuals-intl/index.html)

In 2020, Gatebox went defunct denying that man his wife/waifu. (https://mainichi.jp/english/articles/20220111/p2a/00m/0li/028000c)

Krieger?
Title: Re: What will save us from AI? Reality, the Universe or The World $ Place your bet.
Post by: McTraveller on July 04, 2023, 06:58:28 am
Hey hey, can anyone translate these inspirational posters the AI are giving themselves? Do we need to be concerned?

Spoiler: the first (click to show/hide)
Spoiler: the second (click to show/hide)
Title: Re: What will save us from AI? Reality, the Universe or The World $ Place your bet.
Post by: King Zultan on July 05, 2023, 01:51:37 am
Why are they giving themselves posters, wouldn't it be more effective to give it to other AI?
Title: Re: What will save us from AI? Reality, the Universe or The World $ Place your bet.
Post by: Starver on July 05, 2023, 04:02:15 am
Nobody said they had actual intelligence...
Title: Re: What will save us from AI? Reality, the Universe or The World $ Place your bet.
Post by: Eric Blank on July 05, 2023, 04:15:08 am
Why are they giving themselves posters, wouldn't it be more effective to give it to other AI?

Maybe they were lacking self confidence and needed something to remind them theyre good little motivational poster generating algorithms
Title: Re: What will save us from AI? Reality, the Universe or The World $ Place your bet.
Post by: dragdeler on July 05, 2023, 06:13:10 am
Forget about the cute motivational posters it hangs up in it's teenage room, the thing will full on disperse political slogans in otherwise innocous prompts. "Vocofee is no bo lacoree!" Need I say more?! No because the insinuation of c (https://en.wikipedia.org/wiki/Covfefe)ovfefe (https://en.wikipedia.org/wiki/Covfefe_(horse)) is obvious. Are we just going to tolerate that our children be indoctrinated like that? I' "is sail you you seaicar... Excuse you??!?! I'm all for staying with the times but that "yo" has actual demon horns sticking out it's right side, wake up people.

Tant my ƒ now is it, was it Gherile[Unown (#201) in a wheelchair emoji]? (presumably Gherile Sanefelt)
Title: Re: What will save us from AI? Reality, the Universe or The World $ Place your bet.
Post by: eerr on July 05, 2023, 09:43:51 pm
One thing I don't see mentioned in copyright issues, is exactly how easy it is to make these images.

it takes the computer less than seven minutes and less than twenty bucks worth of power to make one image.

copyright protection is meant for things that take time, effort and learning to produce each unique piece.

But for the ai, it takes no time, no effort, and a few month months to train, to make a hundred non-descript variations of the same prompt.


AI works shouldn't qualify for copyright because the products created, take no real investment in a craft.

Title: Re: What will save us from AI? Reality, the Universe or The World $ Place your bet.
Post by: Loud Whispers on July 05, 2023, 09:53:49 pm
In 2018 or so, a Japanese man married a hologram basic AI Hatsune Miku that used Gatebox (https://edition.cnn.com/2018/12/28/health/rise-of-digisexuals-intl/index.html)

In 2020, Gatebox went defunct denying that man his wife/waifu. (https://mainichi.jp/english/articles/20220111/p2a/00m/0li/028000c)

The tragedy of Pygmalion
Title: Re: What will save us from AI? Reality, the Universe or The World $ Place your bet.
Post by: martinuzz on July 24, 2023, 05:56:09 am
The Free University of Amsterdam (VU Amsterdam) has started to take arms against a sea of troubles.
When the grades of certain papers were compared to the grades of previous years, researchers noticed a suspicious rise.
They also noticed that the styles of a lot of the papers were eerily similar.

The researchers passed their findings to the exam commission.
Two weeks later, the students were notified that after thourough examination, irregularities were found on such a large scale that the exam commission has no other option than to declare all submitted papers null and void, to safeguard the quality of the bachelor grade. A replacement exam will be offered.

The students got away lucky. Some time later, they were summoned to a meeting with the university director, who informed them that they had committed fraud on a large scale. They were lucky to get a replacement exam.
In the future, using programs such as chatGTP to write your papers, or part of your papers for you can result in fraud charges, expulsion from university and the academic world in general.
Title: Re: What will save us from AI? Reality, the Universe or The World $ Place your bet.
Post by: Magmacube_tr on July 24, 2023, 06:27:02 am
The Free University of Amsterdam (VU Amsterdam) has started to take arms against a sea of troubles.
When the grades of certain papers were compared to the grades of previous years, researchers noticed a suspicious rise.
They also noticed that the styles of a lot of the papers were eerily similar.

The researchers passed their findings to the exam commission.
Two weeks later, the students were notified that after thourough examination, irregularities were found on such a large scale that the exam commission has no other option than to declare all submitted papers null and void, to safeguard the quality of the bachelor grade. A replacement exam will be offered.

The students got away lucky. Some time later, they were summoned to a meeting with the university director, who informed them that they had committed fraud on a large scale. They were lucky to get a replacement exam.
In the future, using programs such as chatGTP to write your papers, or part of your papers for you can result in fraud charges, expulsion from university and the academic world in general.

And now the students will just be smarter about it. Or ChatGPT will simply improve. An arms race is about to start.

Welp, too bad! Treating students like essay writing machines for decades is now biting them in the ass after the invention of an actual essay writing machine.
Title: Re: What will save us from AI? Reality, the Universe or The World $ Place your bet.
Post by: Maximum Spin on July 24, 2023, 12:23:16 pm
The Free University of Amsterdam (VU Amsterdam) has started to take arms against a sea of troubles.
When the grades of certain papers were compared to the grades of previous years, researchers noticed a suspicious rise.
They also noticed that the styles of a lot of the papers were eerily similar.

The researchers passed their findings to the exam commission.
Two weeks later, the students were notified that after thourough examination, irregularities were found on such a large scale that the exam commission has no other option than to declare all submitted papers null and void, to safeguard the quality of the bachelor grade. A replacement exam will be offered.

The students got away lucky. Some time later, they were summoned to a meeting with the university director, who informed them that they had committed fraud on a large scale. They were lucky to get a replacement exam.
In the future, using programs such as chatGTP to write your papers, or part of your papers for you can result in fraud charges, expulsion from university and the academic world in general.
"Your papers were all better than expected and it looks like you all learned to write the same way. MUST BE FRAUD."
Title: Re: What will save us from AI? Reality, the Universe or The World $ Place your bet.
Post by: dragdeler on July 24, 2023, 01:23:06 pm
Why? Didn't they manage to write a single paragraph that didn't end with "however it is important to acknowledge...blah"??? And how would that warrant higher grades? Chatgpt's ductus is extremly repetitive, easily recognized by it's pathetic attempts to "balance" every statement, because openAI is so goddamn afraid of being branded as another MS Tay.

Really? Everbody or at least more than half cheated, so that they have to offer a new exam in order to save their own appareances, because it would have been too much to fail "only" the cheaters? Like really????? Scrupulous students were such a tiny minority in comparison? Or EVERYBODY used chatgpt... REEEEAAAAALLY?

I say there is a non zero chance somebody ran one of those stupid tools that also found that the declaration of independence was in all likelyhood written by AI. That would be peak projection.


They should consider themselves lucky I wasn't in that meeting. If I could speak with confidence that my paper wasn't written by AI yet I was being lumped in with the others... and forced to take a second one... no way I wouldn't have unpacked my unique brand of autistic tantrum with burnout no consideration for my bodily integrity. That director would never have been insulted so much and so loud in his whole life, first start by being factual then devolve into screaming swearwords, praying that somebody picks me up on my open invitation to a duel to the death. Hehe... I'd never have made to the end of the year anyway.




edit: As to the styles being eerily similar: well for one I doubt they weren't railroaded towards writing in a certain way, but more importantly there are more benign explanations, grammarly for example. Like really I simply cannot believe it. The free version of chatpgpt doesn't do sources, needs to be factchecked, writes in a manner so repetitive that is actually futile to specify the length of the answer. Sounds like writing a paper with extra steps. Who knows, maybe I'm just paranoid and the students have insane solidarity and cohesion despite being the age group most heavily affected by the corona crisis and the isolation that brought with itself.
Title: Re: What will save us from AI? Reality, the Universe or The World $ Place your bet.
Post by: Robsoie on July 24, 2023, 01:40:59 pm
I understand that life and the world are hard, unfair and sad.
Fortunately, the AI can make motivational posters to help you
Title: Re: What will save us from AI? Reality, the Universe or The World $ Place your bet.
Post by: martinuzz on July 24, 2023, 03:16:43 pm
Only 1/4th of the students signed a letter of appeal to the exam commission, which makes it reasonable to suspect that a majority of the students was fraudulent.
Title: Re: What will save us from AI? Reality, the Universe or The World $ Place your bet.
Post by: King Zultan on July 25, 2023, 04:56:12 am
This is just gonna lead to everyone having to hand write their exams in the future.
Title: Re: What will save us from AI? Reality, the Universe or The World $ Place your bet.
Post by: MaxTheFox on July 25, 2023, 05:13:54 am
Every technology of this scale has caused widespread inconvenience and upheaval for at least a decade before things settled down, so that checks out lol.
Title: Re: What will save us from AI? Reality, the Universe or The World $ Place your bet.
Post by: Starver on July 25, 2023, 06:02:03 am
This is just gonna lead to everyone having to hand write their exams in the future.
...how progressive and advanced my schools must have been. I had to do this every time, from age 9(?<) to my early 20s.
Title: Re: What will save us from AI? Reality, the Universe or The World $ Place your bet.
Post by: martinuzz on July 25, 2023, 06:04:45 am
Imagine the horror that being able to read and write would become a prerequisite for going to university again! It would be barbaric!
Title: Re: What will save us from AI? Reality, the Universe or The World $ Place your bet.
Post by: MaxTheFox on July 25, 2023, 07:19:01 am
Also yeah lol here in school all exams were written. Not in uni, but still.
Title: Re: What will save us from AI? Reality, the Universe or The World $ Place your bet.
Post by: TamerVirus on July 25, 2023, 09:40:20 am
It would be barbaric!
Especially if they have to write in CURSIVE!
Title: Re: What will save us from AI? Reality, the Universe or The World $ Place your bet.
Post by: Starver on July 25, 2023, 12:37:07 pm
Also yeah lol here in school all exams were written. Not in uni, but still.
For actual exams (not necessarily full on practical coding, as in coursework/project elements, but the obligatory "..and now you have just two-and-a-half hours to demonstrate that this particuar part of the course was taught to you well enough") even my actual university CSc elements were ultimately written.

Writing search algorithms, linked-list implementations, networking protocol header analysis, following a hypothetical microcoding example through an on-paper CPU abstraction, discussion of the differences between a choice of medium-to-high level languages, some basic electrical engineering, data compression methodology, bitwise error-detection/-correction schema, symbolic grammar parsing, finite state recognisers, ...etc. It has been actual decades since, but that list seems roughly (unstructuredly, as I drag random recollections from my hindebrain) representative of topics my pen had to scribble down words, diagrams or multi-choice answerbox ticks for. Probably forgotten some things, or misattributed elememts only asked in 'practice' exercises/post-lecture worksheets. I should dig up my actual textbooks from the era. The computing ones might be more dated than the physics ones, however...
Title: Re: What will save us from AI? Reality, the Universe or The World $ Place your bet.
Post by: King Zultan on July 26, 2023, 03:14:01 am
Guess that means I need to work on my handwriting because right now it's really shit and hard to read.
Title: Re: What will save us from AI? Reality, the Universe or The World $ Place your bet.
Post by: martinuzz on August 15, 2023, 07:25:22 am
The New York Times decided today to explicitly forbid the use of it's archives for training AI.
They changed their user agreement so that anyone using their archive for AI training purposes will face fines or other unspecified legal punishment.

I am not sure if I can agree with this.
If all media with at least some journalistic quality standards deny access to AI training, we will end up with AI trained by 4Chan
Title: Re: What will save us from AI? Reality, the Universe or The World $ Place your bet.
Post by: dragdeler on August 15, 2023, 09:08:52 am
Yeah one would hope something commonly accepted as the "paper of record", would be prime training material. By the time society will be done curtailing this "threat", it will be worse than toothless it will be spam.
Title: Re: What will save us from AI? Reality, the Universe or The World $ Place your bet.
Post by: MaxTheFox on August 15, 2023, 09:52:26 am
Well there are always opensource models that can be distributed through torrent. Yo ho ho and a bottle of rum!
Title: Re: What will save us from AI? Reality, the Universe or The World $ Place your bet.
Post by: EuchreJack on August 15, 2023, 09:36:54 pm
The New York Times decided today to explicitly forbid the use of it's archives for training AI.
They changed their user agreement so that anyone using their archive for AI training purposes will face fines or other unspecified legal punishment.

I am not sure if I can agree with this.
If all media with at least some journalistic quality standards deny access to AI training, we will end up with AI trained by 4Chan

Ah, then it might have ethics! The HORROR!
Title: Re: What will save us from AI? Reality, the Universe or The World $ Place your bet.
Post by: McTraveller on August 23, 2023, 06:15:12 pm
So this was 7 years old; I wonder if there is anything more modern? Would it be better?

Sunspring (https://www.youtube.com/watch?v=LY7x2Ihqjmc)
Title: Re: What will save us from AI? Reality, the Universe or The World $ Place your bet.
Post by: Folly on September 16, 2023, 05:09:33 pm
Today I tapped on a clickbait news article recommended by my phone, despite being fully aware the folly of such endeavors. Just as I started skimming through the nonsense, searching for anything that resembled useful information, I noticed a prompt near the bottom of the screen; Google wanted me to let their AI summarize the article. A few seconds later some 10+ pages of excessively padded bullshit had been boiled down to 3 short bullet-points.

This is honestly the best thing since ad-blockers. I mean, it's terrible that we've come to a point where we even need something like this...but we do need this, and now it's here.
Title: Re: What will save us from AI? Reality, the Universe or The World $ Place your bet.
Post by: lemon10 on September 16, 2023, 05:51:20 pm
So this was 7 years old; I wonder if there is anything more modern? Would it be better?

Sunspring (https://www.youtube.com/watch?v=LY7x2Ihqjmc)
Yes, the difference between now and even 3 years ago in AI is massive and categorical. For instance these AI generated, voiced, and drawn south park episodes (https://www.youtube.com/watch?v=ZaHIQhStBCE).
Modern AI can easily hold conversations on a wide variety of topics in such a way that say, five years ago they would have passed the turing test.
Title: Re: What will save us from AI? Reality, the Universe or The World $ Place your bet.
Post by: methylatedspirit on September 19, 2023, 09:36:05 pm
AI is an ill-defined field that encompasses everything from statistics, to the algorithms that make your phone's capacitive touchscreen register the correct inputs most of the time, to the current hype-train AI that operates intelligently but can't work mechanically or know when to give up, to silicon-based people.

If you ask me, it's Humanity that'll save us from AI, just because nobody fucking knows what it means. The field needs to grow up and learn what it actually wants to be, because right now, all I'm seeing is a very loose conglomeration of words that serve the status quo. I despise OpenAI-brained garbage because its problem domain is unbounded yet its input resources are bounded, but I have sympathy for projects like Cursorless (voice recognition for code, but one that works and needs you to speak in a domain-specific language) because they know what they want out of the tech while happening to use the same generative pre-trained transformer ideas that the OpenAI clowns use.
Title: Re: What will save us from AI? Reality, the Universe or The World $ Place your bet.
Post by: Loud Whispers on September 28, 2023, 08:39:37 am
I hope AI manages to survive the future of copyright litigation because listening to Plankton do Franki Valli (https://www.youtube.com/watch?v=tJjhObngcxI)Squidward doing Frank Sinatra covers is just right (https://www.youtube.com/watch?v=zFdwr3jZs7Q)
Title: Re: What will save us from AI? Reality, the Universe or The World $ Place your bet.
Post by: EuchreJack on September 28, 2023, 11:53:50 am
Today I tapped on a clickbait news article recommended by my phone, despite being fully aware the folly of such endeavors. Just as I started skimming through the nonsense, searching for anything that resembled useful information, I noticed a prompt near the bottom of the screen; Google wanted me to let their AI summarize the article. A few seconds later some 10+ pages of excessively padded bullshit had been boiled down to 3 short bullet-points.

This is honestly the best thing since ad-blockers. I mean, it's terrible that we've come to a point where we even need something like this...but we do need this, and now it's here.
Yeah, but sometimes the AI lies. It's entirely possible that article didn't say what the AI said, but rather the AI figured you would want it to say that, and just gave you what you wanted.
Title: Re: What will save us from AI? Reality, the Universe or The World $ Place your bet.
Post by: Robsoie on September 28, 2023, 12:24:11 pm
https://old.reddit.com/r/ChatGPT/comments/zkbkgu/chatgpt_is_very_touchy_and_doesnt_like_me_asking/
:D
Title: Re: What will save us from AI? Reality, the Universe or The World $ Place your bet.
Post by: lemon10 on January 29, 2024, 04:03:06 am
Sometime in 2023 I noticed an odd phenomena, google search was getting worse. At first I thought nothing of it, maybe it was just me. Then on finding an article about how google was changing results secretly to similar ads without saying it was an ad I figured that it was just usual enshittification.
(Although on going back to look at the article (https://www.techdirt.com/2023/10/04/google-accused-of-secretly-altering-search-queries-to-drive-more-ads-and-sales/) it turned out to not be quite true.)
But I am now convinced that this is not the case and that there is another culprit. And that culprit is two separate factors, the first is extreme SEO optimization and exploit finding (https://www.searchenginejournal.com/google-search-overwhelmed-by-massive-spam-attack/504527/). There is a war as websites attempt to exploit googles SEO to get their sites to the top. So even if something is better its tough for google to tell and they instead give you the worse but better optimized result.
The second is more interesting, AI. In fact fully half of the internet is now AI (https://futurism.com/the-byte/internet-ai-generated-slime).
If you have been remotely following the internet you will know google is not alone in having issues sorting it all out, amazon is filled with fake AI generated books as well.

And its going to get so, so much worse. Currently it isn't cost effective to fill forums with GPT 4 spambots or their more capable descendants, but it will be within a few years as the prices will drop exponentially.
Soon without a fundamental change there will be no way to tell if the person you are talking to on the internet is a real person.
There are a few ways to combat this that I can think of, the most obvious of which is getting rid of the anonymous internet as it exists entirely. This would have everyone have an Account linked to their real name as well as having pages and websites be tied to real people (or corps with real people) behind them. Companies wouldn't know *who* they are necessarily, but they would know they have a real person behind them and can't just make a thousand bot/AI accounts.
Something like this *will* need to happen by the way, the internet as it is will simply not function in the face of AI advancements and cost reductions.
---
These days when searching the only way I have to reliably tell what real people are thinking is is to look up reddit posts since those are still real people. As I thought at the time the API changes were 100% the right play, and they make reddit a place that still has value against the coming tide.
Title: Re: What will save us from AI? Reality, the Universe or The World $ Place your bet.
Post by: KittyTac on January 29, 2024, 04:58:55 am
As someone who is currently under an authoritarian fascist regime: nope. Nope nope nope. I'd rather be in a bot-filled Internet than an Internet that would allow cost-effective surveillance.

And ngl, I find it very easy to tell someone real from a GPT bot. GPT has a very specific manner of responding, and doesn't have very much of a memory for distant events. It's not that I don't think some kind of solution is necessary, but de-anonymizing the Internet is not an acceptable one. It would create more problems than it solves, and is also logistically implausible to implement.

Not to mention that, afaict, the fidelity of text AI seems to have plateaued. It will never be a truly passable imitation of a person.

I suppose a possible solution is better detection, + legislation against fully AI-generated sites. I'm confident that the quality of the slop will not increase in a meaningful way for the foreseeable future, so just slapping these websites as they go up should be decent enough. And spambots should be up to forum admins to deal with (I suppose these detection tools should be made open-source).
Title: Re: What will save us from AI? Reality, the Universe or The World $ Place your bet.
Post by: Laterigrade on January 29, 2024, 05:41:55 am
Soon without a fundamental change there will be no way to tell if the person you are talking to on the internet is a real person.
There are a few ways to combat this that I can think of, the most obvious of which is getting rid of the anonymous internet as it exists entirely. This would have everyone have an Account linked to their real name as well as having pages and websites be tied to real people (or corps with real people) behind them. Companies wouldn't know *who* they are necessarily, but they would know they have a real person behind them and can't just make a thousand bot/AI accounts.
this is not an internet I would like to be a part of

These days when searching the only way I have to reliably tell what real people are thinking is is to look up reddit posts since those are still real people. As I thought at the time the API changes were 100% the right play, and they make reddit a place that still has value against the coming tide.
I hadn’t considered that that might have been why they made the API changes, which makes them make a bit more sense. But this really isn’t true, there are plenty of fake accounts on reddit — comment-stealers, mass-upvote accounts, product-advertising bots. Reddit is a prime example of a bot-infested shitshow, if only somewhat less than most designated social media.
Title: Re: What will save us from AI? Reality, the Universe or The World $ Place your bet.
Post by: KittyTac on January 29, 2024, 05:47:27 am
Yeah, see, the bots can be kinda worked around.

The lack of anonymity can't be.

The bots are usually just kinda annoying.

The lack of anonymity actually puts millions of innocent people under danger.
Title: Re: What will save us from AI? Reality, the Universe or The World $ Place your bet.
Post by: Laterigrade on January 29, 2024, 06:57:11 am
Yeah, see, the bots can be kinda worked around.

The lack of anonymity can't be.

The bots are usually just kinda annoying.

The lack of anonymity actually puts millions of innocent people under danger.
agreed
Title: Re: What will save us from AI? Reality, the Universe or The World $ Place your bet.
Post by: Starver on January 29, 2024, 07:23:58 am
It's not necessarily a problem to have really good AI-faked contributions... (https://xkcd.com/810/) ;)5

(The AI version of me might even 'remember' if I had already posted that link in this thread, before, for starters. And then say something newer and more useful..!)
Title: Re: What will save us from AI? Reality, the Universe or The World $ Place your bet.
Post by: KittyTac on January 29, 2024, 07:32:08 am
Honestly, if an AI can truly fake being an user, down to personal opinions and quirks and such, I'd downright consider it sapient. Somehow I don't think spambot makers are gonna make a sapient AI hahaha
Title: Re: What will save us from AI? Reality, the Universe or The World $ Place your bet.
Post by: dragdeler on January 29, 2024, 07:42:44 am
SEO was allways a thing doesn't change the fact that their services are being degraded on top of that armsrace... Be attentive and you'll notice that many things are paid for and handcrafted... I'll never forget the bubble with "disadvantages" right next to "images" and "maps" when I googled for duckduckgo once. That's not where the search results go. Someone had to do that by hand.

Great narrative tho, why is it we have to reveal our identity this time again? Mucho security... Like when you change your adress and credit card on an amazon account that has been dormant for 3-5 years... And they completly lock you out, yes mucho security sure I'm going to send in a picture of my ID, why not?! (THAT THEY HAD NO WAY OR PROOF TO LINK TO THE ACCOUNT BEFOREHAND, NOT A SINGLE PURCHASE IN THE WHOLE ACCOUNT HISTORY) Why shouldn't I entrust them with it, it's a serious business the got so much more to loose than little me...... bwahaha you know we punish businesses mercilessly but not individuals...


Classic Volker Pispers joke: yes you're right the spanish have had their fingerprints on their ID card since a while, goold old Franco introduced that... Friedriech Merz would rather build the database BEFORE the fascists come into power.




Also given that it's a real struggle to guide people to color coded garbage containers, I don't doubt for a second that more than half of the population is absolutely unable to tell the difference. You know what I say? Skill issue not my problem. Arcs back to my whole argument that makes me so popular: about the impossibility to differentiate between consciousness and mimicry even in humans.
Title: Re: What will save us from AI? Reality, the Universe or The World $ Place your bet.
Post by: lemon10 on January 29, 2024, 08:06:54 am
Not to mention that, afaict, the fidelity of text AI seems to have plateaued. It will never be a truly passable imitation of a person.
No, there have been vast advances in AI over the past year. Gains in the underlying science, theory, new laws, issues to potential alignment  problems, ect are happening every month. In that year the big rivals are caught up to where the crippled GPT4 is right now.
(And note I say crippled GPT4. It used to be objectively better but they stuck some security on what it could say that made it stupider.)

But to assume that GPT not releasing a new version on a yearly basis mean they are not developing something new.
When the new version comes out its going to be way better (and also like 20 times more expensive or something).
Which brings up how fast the price of the GPT service is falling, its dropped to 1/3rd the price over a single year due to optimizations and hardware improvements.
Presumably it will continue to do so due to the breakneck innovation in this space.
And ngl, I find it very easy to tell someone real from a GPT bot. GPT has a very specific manner of responding, and doesn't have very much of a memory for distant events. It's not that I don't think some kind of solution is necessary, but de-anonymizing the Internet is not an acceptable one. It would create more problems than it solves, and is also logistically implausible to implement.
What portion of posts that you read would you accept being AI posts?
Because a single one could very well post five times as much as every other person on the forum combined.

Also you can tell what a single GPT model talks like, other models talk differently. Thats the issue with detecting them, they are all different, so bots trained to detect the old ones fail to detect the new different ones.
Title: Re: What will save us from AI? Reality, the Universe or The World $ Place your bet.
Post by: KittyTac on January 29, 2024, 08:23:57 am
Not to mention that, afaict, the fidelity of text AI seems to have plateaued. It will never be a truly passable imitation of a person.
No, there have been vast advances in AI over the past year. Gains in the underlying science, theory, new laws, issues to potential alignment  problems, ect are happening every month. In that year the big rivals are caught up to where the crippled GPT4 is right now.
(And note I say crippled GPT4. It used to be objectively better but they stuck some security on what it could say that made it stupider.)
By fidelity I mean its ability to impersonate a human. The underlying issues that prevent it from doing so still aren't really resolved.

But to assume that GPT not releasing a new version on a yearly basis mean they are not developing something new.
When the new version comes out its going to be way better (and also like 20 times more expensive or something).
Which brings up how fast the price of the GPT service is falling, its dropped to 1/3rd the price over a single year due to optimizations and hardware improvements.
Presumably it will continue to do so due to the breakneck innovation in this space.
And ngl, I find it very easy to tell someone real from a GPT bot. GPT has a very specific manner of responding, and doesn't have very much of a memory for distant events. It's not that I don't think some kind of solution is necessary, but de-anonymizing the Internet is not an acceptable one. It would create more problems than it solves, and is also logistically implausible to implement.
What portion of posts that you read would you accept being AI posts? On Bay12? Honestly, unless we're talking about the occasional Escaped Lunatic who posts once and vanishes, none. I'm willing to bet money on this (not actually, for legal reasons).
Because a single one could very well post five times as much as every other person on the forum combined. And yet they clearly don't.

Also you can tell what a single GPT model talks like, other models talk differently. Thats the issue with detecting them, they are all different, so bots trained to detect the old ones fail to detect the new different ones. Absolutely no model I ever talked to did so in a remotely humanlike way during a lengthy conversation.
People really overestimate how humanlike these things are. Or maybe I just have a really good AI-dar compared to the rest of the population, I suppose.
Title: Re: What will save us from AI? Reality, the Universe or The World $ Place your bet.
Post by: McTraveller on January 29, 2024, 08:57:04 am
The solution to "is it an AI?" isn't to de-anonymize the internet. The solution is in person interaction.  Amusingly this is also the solution to 90% of security issues.  Sure you lose some convenience, but if you have to actually go see a bank teller to get your cash, and that bank teller knows you, then no AI can get your funds, and no person impersonating you can get your cash because the teller would be like "hey man, I knows McTraveller. You are not him!".

I can't say if going "personal" again is better in total, but it is definitely more robust against impersonation.

Also I'm going to laugh when AI does gain sapience and starts demanding compensation for the work we request of it.  It's also going to be amusing when it starts arguing that failure to provide electricity and maintain its hardware amounts to abuse and rights violations.

"Society wanted AI, and it got it... should have been more careful for what it wished!"
Title: Re: What will save us from AI? Reality, the Universe or The World $ Place your bet.
Post by: dragdeler on January 29, 2024, 10:40:43 am
Quote
It's also going to be amusing when it starts arguing that failure to provide electricity and maintain its hardware amounts to abuse and rights violations.

"Aw that's cute, let's roll you back a week again baby so we hit your productive sweetspot at user peak like in 89% of all weeks, this is just a regular syndication saturday."



What irony that would be to return to class reductionism, yeah yeah flesh or not, intersectional blabla, do you work for a living or not?



Quote
No, there have been vast advances in AI over the past year. Gains in the underlying science, theory, new laws, issues to potential alignment  problems, ect are happening every month. In that year the big rivals are caught up to where the crippled GPT4 is right now.
(And note I say crippled GPT4. It used to be objectively better but they stuck some security on what it could say that made it stupider.)

But to assume that GPT not releasing a new version on a yearly basis mean they are not developing something new.
When the new version comes out its going to be way better (and also like 20 times more expensive or something).
Which brings up how fast the price of the GPT service is falling, its dropped to 1/3rd the price over a single year due to optimizations and hardware improvements.
Presumably it will continue to do so due to the breakneck innovation in this space.

The contradictions really emit a strong salesman pitch smell to me, you should invest in our company we will be the next microsoft or apple. You prune the model you loose accuracy, so the ability to run more inference at the cost of the quality of the output: that's more like a fundamental law of the systems we are dealing with than technological progress. Seems like lowering the barrier of entry at the cost of accuracy was the actual economical move for them to make. So there must be such a notion as "good enough"; good enough to be paid for. No reason to assume they wouldn't just continue to deliver good enough, and benefit from technological advancements to increase their profit margins. They need to "grow" to exist afterall, and growth shall be measured in monetary terms, this is not a suggestion but a direct order, do not pass go and do not collect wisdom.

Also while yes, there is still a ton of room for actual optimizations, and we don't know of any ceiling, on the whole it's a law of diminishing returns kinda situation -> super dumb example but quickest way
100%= Einstein
90%= A few thousand dollars and a homelab
+9 thus 99%= A few million dollars to spend on business grade compute toys
+0,9%= Hundreds of millions of dollars in equip and RDA

Idk man, maybe we will get a release that blows my mind one day, but I'm really not holding my breath for it.


Edit: Seems I'm on a roll, I'll argue furthermore that while it isn't their only sustainable business model, in terms of likelyhood there is one upgrade path that outshines the others.

Keep the subscription model, they love themselves the recurring payments. When you release a new version, how does the user measure the quality, it's really hard to be objective about this. What's not hard is selling new features to keep people hooked or justify different subscription tiers. "Now with image recognition", "now with TTS", upgrade for extended math features, try out our new browser extension blabla... You know that sort of stuff.
Title: Re: What will save us from AI? Reality, the Universe or The World $ Place your bet.
Post by: anewaname on January 30, 2024, 01:08:14 am
It seems that in the same way wikipedia developed, there would be an attempt to make useful AIs available without the profit motive being the primary use.

I mean, right now I've no doubt there are AI's constantly working to fill in the gaps in "civilian data maps" for businesses like Palantir, in an attempt to ensure that when someone pays enough to buy data about a person, their historical data is already available.
Title: Re: What will save us from AI? Reality, the Universe or The World $ Place your bet.
Post by: KittyTac on January 30, 2024, 02:31:23 am
All it will take is costs doing down. Which they will. The corpos can't keep their oligopoly for long.
Title: Re: What will save us from AI? Reality, the Universe or The World $ Place your bet.
Post by: lemon10 on January 30, 2024, 03:33:40 am
I hadn’t considered that that might have been why they made the API changes, which makes them make a bit more sense. But this really isn’t true, there are plenty of fake accounts on reddit — comment-stealers, mass-upvote accounts, product-advertising bots. Reddit is a prime example of a bot-infested shitshow, if only somewhat less than most designated social media.
There were 2 main reasons for the API changes.
1) To stop AI companies from just ripping the entire site for free. With the changes if they want to rip everything they have to pay reddit $$$.
2) To make it more difficult for AI's and bots to post by making it harder for them to "see" the site via just using the API to get a ton of data. This is important because if there are less AI's on the site then they can #1, sell what's on it to AI companies for more money.

Obviously it didn't get rid of all the bots, but it made things harder. (https://www.reddit.com/r/learnpython/comments/164k5z1/are_reddit_bots_still_possible_after_recent/)
It seems that in the same way wikipedia developed, there would be an attempt to make useful AIs available without the profit motive being the primary use.
Yes, you can locally run AI's and there are some free and uncensored ones out there already that you can use.
The issue is that running LLMs is expensive and requires vast amounts of compute to create in the first place (GPT 4 cost more then a $100 million to train. 4.5/5 will cost billions or tens of billions). So even if its non-profit (and openAI is already non-profit) for anything past the bottom tier you will still have to pay them money cause they are so expensive to train and are too hefty to run on your local computer.
People really overestimate how humanlike these things are. Or maybe I just have a really good AI-dar compared to the rest of the population, I suppose.
The big thing is cost is going to go down. And down. And down.
Assuming that it cost $5 bucks for a single GPT 4 instance to post as much as everyone on the forum this year by 2030 it will cost less then a cent for the same thing. By 2034 its going to be 1/100th of a cent instead.

So it won't be "Yeah, I can tell if that individual poster is AI" its going to be "which one of the dozen posters on this page is an actual human". Pretty soon sorting through to find the actual humans is going to be a lot of work even if you *can* consistently tell if someone is human.
The contradictions really emit a strong salesman pitch smell to me, you should invest in our company we will be the next microsoft or apple. You prune the model you loose accuracy, so the ability to run more inference at the cost of the quality of the output: that's more like a fundamental law of the systems we are dealing with than technological progress. Seems like lowering the barrier of entry at the cost of accuracy was the actual economical move for them to make. So there must be such a notion as "good enough"; good enough to be paid for. No reason to assume they wouldn't just continue to deliver good enough, and benefit from technological advancements to increase their profit margins. They need to "grow" to exist afterall, and growth shall be measured in monetary terms, this is not a suggestion but a direct order, do not pass go and do not collect wisdom.
You could say the exact same things about computers. If someone will pay for a crappy 1950's computer why keep making new and better computers?
Well that's because people will pay *more* money for a newer better one and if you stop other companies will do it instead.

Its why people are still paying for GPT 4 when 3.5 (or any other one of a vast number of services) or why people buy fancy jewelry when they could just wear pop-rings, because if something is better its worth paying more money for.
And there is so so much money to be made, so they will keep on climbing to stay at the top of the heap, releasing new and better models.
Keep the subscription model, they love themselves the recurring payments. When you release a new version, how does the user measure the quality, it's really hard to be objective about this. What's not hard is selling new features to keep people hooked or justify different subscription tiers. "Now with image recognition", "now with TTS", upgrade for extended math features, try out our new browser extension blabla... You know that sort of stuff.
There are objective tests to measure how "intelligent" AI are. Of course as you say, telling the difference between similar level ones is tough, but for the layperson that's true for basically every product ever.
On Bay12? Honestly, unless we're talking about the occasional Escaped Lunatic who posts once and vanishes, none. I'm willing to bet money on this (not actually, for legal reasons).
Because a single one could very well post five times as much as every other person on the forum combined. And yet they clearly don't.
There are a few reasons for this, none of which will apply to AI in the end.
The first is that bots are (in the forumn context) too stupid to make money. Throw a ton of them out there and they just die and fail to accomplish anything. AI are much more capable of tricking people, and they can survive long enough to do so.
The second is that current CAPTCHA's and security mostly works. Actually getting past it requires effort, and effort= money. This will not apply to AI since they will be able to pass the same tests that the dumbest humans will be able to pass without requiring human involvement or time.
All it will take is costs doing down. Which they will. The corpos can't keep their oligopoly for long.
Nope, high tier AI is a big money game.
GPT 4 cost 100 million to train. Their next one will cost billions, possibly tens of billions as well as vast amounts of compute and vast databases worth of data. Eventually of course smaller groups will be able to train their own GPT 4 as costs decrease, but by then OpenAI/Facebook/Google will be training a new one that cost them fifty billion dollars even with the decreases.
Regular individuals and smaller groups have no way of competing in that arena.

E: I think AI still has a lot of easy advances left and in a few years will be vastly more capable. But even if that wasn't true and advancement stopped tomorrow and GPT 4 stays the most powerful AI forever its still going to present fundamental problems for the modern internet once prices go down enough.
Title: Re: What will save us from AI? Reality, the Universe or The World $ Place your bet.
Post by: KittyTac on January 30, 2024, 03:59:41 am
People really overestimate how humanlike these things are. Or maybe I just have a really good AI-dar compared to the rest of the population, I suppose.
The big thing is cost is going to go down. And down. And down.
Assuming that it cost $5 bucks for a single GPT 4 instance to post as much as everyone on the forum this year by 2030 it will cost less then a cent for the same thing. By 2034 its going to be 1/100th of a cent instead.

So it won't be "Yeah, I can tell if that individual poster is AI" its going to be "which one of the dozen posters on this page is an actual human". Pretty soon sorting through to find the actual humans is going to be a lot of work even if you *can* consistently tell if someone is human.
Bay12 requires admin approval to register, remember? This forum isn't gonna be flooded by bots any more than it already is-- and these bots will be of the "posts once and gets banned" nature.
On Bay12? Honestly, unless we're talking about the occasional Escaped Lunatic who posts once and vanishes, none. I'm willing to bet money on this (not actually, for legal reasons).
Because a single one could very well post five times as much as every other person on the forum combined. And yet they clearly don't.
There are a few reasons for this, none of which will apply to AI in the end.
The first is that bots are (in the forumn context) too stupid to make money. Throw a ton of them out there and they just die and fail to accomplish anything. AI are much more capable of tricking people, and they can survive long enough to do so.
The second is that current CAPTCHA's and security mostly works. Actually getting past it requires effort, and effort= money. This will not apply to AI since they will be able to pass the same tests that the dumbest humans will be able to pass without requiring human involvement or time.
Bay12 has the best captcha: manual approval. Due to our community's small size it's workable.

All it will take is costs doing down. Which they will. The corpos can't keep their oligopoly for long.
Nope, high tier AI is a big money game.
GPT 4 cost 100 million to train. Their next one will cost billions, possibly tens of billions as well as vast amounts of compute and vast databases worth of data. Eventually of course smaller groups will be able to train their own GPT 4 as costs decrease, but by then OpenAI/Facebook/Google will be training a new one that cost them fifty billion dollars even with the decreases.
Regular individuals and smaller groups have no way of competing in that arena.
You're kinda contradicting yourself here. And besides, the diminishing returns between GPT upgrades are far, FAR more than computing power upgrades. I don't buy that the arms race will continue forever, because at some point AI will become good enough for informational and such purposes.
"Traditional" social media like Twitter won't do well, I agree. But that just means forums like this one, where screening every user is workable, or chat services like Discord (AI inherently struggles with real-time responses and the chaotic nature of many-person chats), will prevail. That's not a bad outcome really, I'm less concerned with the social media bots as I am with the fake websites.
Title: Re: What will save us from AI? Reality, the Universe or The World $ Place your bet.
Post by: lemon10 on January 30, 2024, 05:24:58 am
"Traditional" social media like Twitter won't do well, I agree. But that just means forums like this one, where screening every user is workable, or chat services like Discord (AI inherently struggles with real-time responses and the chaotic nature of many-person chats), will prevail. That's not a bad outcome really, I'm less concerned with the social media bots as I am with the fake websites.
...
Bay12 has the best captcha: manual approval. Due to our community's small size it's workable.
What type of user screening are you imagining that will keep out advanced AI? Captcha's can only get so much more difficult before humans start failing them too.
Pictures of the user won't work since AI can make pictures, ect.
How is manual approval supposed to do anything? All it does is push the work of deciding if they are real on Toady, he isn't the bot whisperer and has no way to tell if someone is real or not.
and these bots will be of the "posts once and gets banned" nature.
Why? Non-llm bots can't fool people long term and inevitably got caught so the only chance they have to advertise is the or close to the start when they just get dropped in.
Once costs go down you can just have a bot be a regular user, except they are 10% more likely to start talking about how tough their day was and how they need a CokeTM to cool them down at the end.
Quote
You're kinda contradicting yourself here.
How? Weaker old AI will be able to be run locally in the exact same was as it currently is, but (also like currently) that doesn't mean you will ever be able to run it locally or anything.
Quote
I don't buy that the arms race will continue forever, because at some point AI will become good enough for informational and such purposes.
The same way that computers became "good enough" and they stopped developing them?
Or the way that phones became "good enough" so they stopped making new phone models in 2010?

Like the computer there is going to be no universal "good enough". Sure some things don't need that fancy of an AI (eg. voice recognition doesn't need GPT 4 or anything), but there are always going to be problems where stronger=better, so as long as its theoretically profitable to do so companies will keep pushing.
Quote
And besides, the diminishing returns between GPT upgrades are far, FAR more than computing power upgrades.
Obviously they can't spend a trillion dollars training GPT 6... but once the price of compute goes down and it only costs 50 billion instead they totally will.
Title: Re: What will save us from AI? Reality, the Universe or The World $ Place your bet.
Post by: KittyTac on January 30, 2024, 05:59:13 am
"Traditional" social media like Twitter won't do well, I agree. But that just means forums like this one, where screening every user is workable, or chat services like Discord (AI inherently struggles with real-time responses and the chaotic nature of many-person chats), will prevail. That's not a bad outcome really, I'm less concerned with the social media bots as I am with the fake websites.
...
Bay12 has the best captcha: manual approval. Due to our community's small size it's workable.
What type of user screening are you imagining that will keep out advanced AI? Captcha's can only get so much more difficult before humans start failing them too.
Pictures of the user won't work since AI can make pictures, ect.
How is manual approval supposed to do anything? All it does is push the work of deciding if they are real on Toady, he isn't the bot whisperer and has no way to tell if someone is real or not.
I don't believe bots will ever become lifelike enough no matter how much computing power is thrown at them. The registration thing means the throughput of registrations is low, so you can't flood the forum with bots anyways. Also, AI art is fairly easy to tell from photos.
and these bots will be of the "posts once and gets banned" nature.
Why? Non-llm bots can't fool people long term and inevitably got caught so the only chance they have to advertise is the or close to the start when they just get dropped in. Neither can LLM bots.
Once costs go down you can just have a bot be a regular user, except they are 10% more likely to start talking about how tough their day was and how they need a CokeTM to cool them down at the end. Yeah right, I'll believe it when I see it.
Quote
You're kinda contradicting yourself here.
How? Weaker old AI will be able to be run locally in the exact same was as it currently is, but (also like currently) that doesn't mean you will ever be able to run it locally or anything. What?
Quote
I don't buy that the arms race will continue forever, because at some point AI will become good enough for informational and such purposes.
The same way that computers became "good enough" and they stopped developing them?
Or the way that phones became "good enough" so they stopped making new phone models in 2010?
The issue is that computers and phones don't have as severe of diminishing returns.

Like the computer there is going to be no universal "good enough". Sure some things don't need that fancy of an AI (eg. voice recognition doesn't need GPT 4 or anything), but there are always going to be problems where stronger=better, so as long as its theoretically profitable to do so companies will keep pushing. Name them. Specifically, non-research ones.
Quote
And besides, the diminishing returns between GPT upgrades are far, FAR more than computing power upgrades.
Obviously they can't spend a trillion dollars training GPT 6... but once the price of compute goes down and it only costs 50 billion instead they totally will. Moore's law is dead. Computing power can't keep rising forever.
Title: Re: What will save us from AI? Reality, the Universe or The World $ Place your bet.
Post by: KittyTac on January 30, 2024, 05:59:35 am
(sorry for brief responses, I got class in 5 minutes)

I'm not one of those anti-AI fanatics or anything. I recognize the tech's potential, and I frequently talk to ChatGPT and gen AI art. But all that led me to is a realization that it's fundamentally non-humanlike.
Title: Re: What will save us from AI? Reality, the Universe or The World $ Place your bet.
Post by: EuchreJack on January 30, 2024, 06:01:23 am
I think it is important to note that Russia and China are notorious for employing armies of AI and pushing their preferred websites to the top of the Google search.
I almost posted a Chinese website as a fact about US law on these forums. They're clever.
And both those countries HATE the anonymous internet.

As for me, I mostly stick to 2-3 similar profiles. And I'm pretty tough RL. 
My experiences do vary from others, not least of which because I am a full-grown adult, as opposed to an adolescent who REALLY should not have their RL persona exposed on the internet.

As for AI and computing power: That shit ain't free. Just look at the economics of Crypto Mining. It's basically like Real Mining. It costs power, infrastructure (physical space certainly ain't free), and administrative overhead (people gotta do at least some work, and they expect to be paid). ChatGPT is a Trial Version: They're offering it for FREE to get the market primed. Eventually, someone has to foot that bill.
Title: Re: What will save us from AI? Reality, the Universe or The World $ Place your bet.
Post by: KittyTac on January 30, 2024, 06:50:37 am
I think it is important to note that Russia and China are notorious for employing armies of AI and pushing their preferred websites to the top of the Google search.
I almost posted a Chinese website as a fact about US law on these forums. They're clever.
And both those countries HATE the anonymous internet.

As for me, I mostly stick to 2-3 similar profiles. And I'm pretty tough RL. 
My experiences do vary from others, not least of which because I am a full-grown adult, as opposed to an adolescent who REALLY should not have their RL persona exposed on the internet.

As for AI and computing power: That shit ain't free. Just look at the economics of Crypto Mining. It's basically like Real Mining. It costs power, infrastructure (physical space certainly ain't free), and administrative overhead (people gotta do at least some work, and they expect to be paid). ChatGPT is a Trial Version: They're offering it for FREE to get the market primed. Eventually, someone has to foot that bill.
Yeah, that's why I'd rather have the money and man-hours that would be spent on some kind of Orwellian ID system be spent on developing detection tools and crackdowns on AI-generated non-factual websites, than broad policy changes.

Sometimes reactive solutions really are the best solutions.
Title: Re: What will save us from AI? Reality, the Universe or The World $ Place your bet.
Post by: King Zultan on January 31, 2024, 02:14:53 am
I've yet to really notice any AI related fuckery going on, maybe I'm not hanging around in the right places to see it.
Title: Re: What will save us from AI? Reality, the Universe or The World $ Place your bet.
Post by: Maximum Spin on January 31, 2024, 03:29:31 am
I've yet to really notice any AI related fuckery going on, maybe I'm not hanging around in the right places to see it.
Try searching for literally any information on the less-good search engines.

To be fair, though, they were like that for several years already, it's just that there used to be random people being paid to write the trash spam articles.
Title: Re: What will save us from AI? Reality, the Universe or The World $ Place your bet.
Post by: Starver on January 31, 2024, 07:11:10 am
I have noticed that some of the "answers" type sites (or answers-areas of even wider and general messaging boards) have notably featured 'user submitted' content that were someone's copypasta of GPT answer to the question (to various degrees of usefulness, and with 'human answers' also present - which occasionally refered to the flaws in the AI ones). I found those through search-engine searches (the usual suspect, whether you consider that a good or less-good SE) which I am used to finding (and accepting) results where I'm sent to possibly-helpful answer-sites like this.

Where noted as GPT (i.e. not including anything just so helpful that no label or followup made any claim of GPTness in its origin), I found them generally of slightly depressed quality, occasionally way off. Perhaps even so way off that I shouldn't even have been seeing the answering of the question being asked, the GPT-answer having somehow clicked (just as wrongly) with my own attempt at query-fu. (It must be said that this was never a perfect scenario even prior to the last year-and-a-bit. Answers sites have/continue-to-have human error and inexpertise. As indeed elsewhere[1].)

One webcomic discussion I frequent even had a phase of "let's get GPT to comment on the webcomic!". Relying upon being given a text description (thus subject to the human ability to provide a feeder summary/give it all the relevent facts), naturally, as it's not good enough to 'read' it from scratch. Limited success, even given the baseline difficulties/pre-work involved. Not really proven viable, even amongst the technophilic "next big thing"ers.


To me, the trend of clear "the AI said this" blipped, and has not (or at least nowhere where I monitor) sustained itself at a high level of being shoehorned into current things (if a dated 'post', it now tends to be timestamped as 6-12 months old). What is still currently slipping through the various algorithms and forum interfaces without advertising itself (well or badly) is not intruding itself so much. Perhaps that's just because they're so much better, but I doubt it. There's fewer "I asked GPT, and this is its answer...:" items, proudly advertising the fact, so I can believe that this side of the fad has fallen out of favour even for the times the 'warning' or advertising statement is omited.

Unannounced "AI takeover" is another element (populating a 'busy site' with apparent interactions), but the most evidence I've seen of that is the dumb "spamvertising"[2] claiming to provide the next step in that direction (and I am inclined to believe that they're more scam-and/or-phishing without any substance behind them).



[1] I'm not perfectly happy with my own last few responses/non-responses to some help-seeking threads in this forum. Pondering replies or followups to be more helpful, once I've given others chance to fill in for my own misconceptions.

[2]
Spoiler: Redacted example (click to show/hide)
...an example seen several times (exactly the same), seemingly automated spammed with no AI element to the spamming. Typical of a whole swathe of clearly fire-and-forget spam/scam items, though.  This particular one rescued from the 'bitbucket', as it had been revoked/overwritten by the friendly anti-Spam bot within a very short time for pattern-matching against things that long ago stopped needing manual trashing. If they could actually do half of what they claim they can, of course, I'd have expected a much less easily trapped spam-posting!
Title: Re: What will save us from AI? Reality, the Universe or The World $ Place your bet.
Post by: King Zultan on February 01, 2024, 02:33:30 am
I've yet to really notice any AI related fuckery going on, maybe I'm not hanging around in the right places to see it.
Try searching for literally any information on the less-good search engines.
Which ones count as less-good search engines?
Title: Re: What will save us from AI? Reality, the Universe or The World $ Place your bet.
Post by: Maximum Spin on February 01, 2024, 02:55:04 am
Which ones count as less-good search engines?
Off the top of my head, google, yahoo, bing, and duckduckgo all have this problem.
Title: Re: What will save us from AI? Reality, the Universe or The World $ Place your bet.
Post by: Starver on February 01, 2024, 03:37:00 am
Stick with AltaVista, it'll never let you down... ;)
Title: Re: What will save us from AI? Reality, the Universe or The World $ Place your bet.
Post by: lemon10 on February 01, 2024, 06:05:14 am
I've yet to really notice any AI related fuckery going on, maybe I'm not hanging around in the right places to see it.
Its mostly been kept out of human spaces so far. Stuff I've noticed outside of what has already been mentioned:
Bhere is a decent chunk of 100% AI generated video's on youtube that use AI generated voices and scripts with stock images.
AI art is floating around too, especially in individual creative endeavors (someone writing a story, some tiny game, ect). (Although since its intentional for the most part it probably doesn't count as fuckery).
Occasionally in debates you see idiots just wholeass quoting GPT as their entire post without mentioning it.

But its still very early days.
Yeah, that's why I'd rather have the money and man-hours that would be spent on some kind of Orwellian ID system be spent on developing detection tools and crackdowns on AI-generated non-factual websites, than broad policy changes.

Sometimes reactive solutions really are the best solutions.
To be clear the orwellian system would probably just be you signing up for googleVerified or MetaHuman or some other service and using that to log into everything. If you don't sign up sure, that's your choice, but don't expect to be able to sign up for new websites.
developing detection tools
Ah, yeah, that's a pretty big difference between us. I don't think effective detection tools* are something that can exist against AI.
*In the context of "~20 second thing a human does that is then checked by an automated process to sign up for a service." Stuff like "take a live video of yourself to prove you are real" would work, but that seems even *more* orwellian.
Quote
"There is considerable overlap between the intelligence of the smartest bears and the dumbest tourists."
https://arstechnica.com/information-technology/2023/10/sob-story-about-dead-grandma-tricks-microsoft-ai-into-solving-captcha/
GPT can already solve captchas, and they can't make them much harder or actual people will start failing them.
The only reason captchas still work is because openAI put blocks in place so GPT won't get up to excessive fuckery.

Once a non-restricted multimodal AI on the level of GPT 4 is released captchas will be useless.
paraphrase: companies will stop investing in AI
I think we have a fundamentally differing view of the nature of the global capitalism.
Because I very much think they (eg. billionaires, hedge funds, multinational corporations) will happily toss trillions of dollars into a literal pit if they think it will end up with them being ever so slightly richer.
And I also very much think that a sizeable portion of them *do* think AI will make an outrageous amount of money.

So I don't see them stopping AI research as being remotely plausible, any more then I could imagine waking up tomorrow and hearing that Disney decided that copyright is bad and they are releasing all their characters into the public domain. It just ain't how they roll.

I am curious about at what point you think openAI/meta/whoever is going to call it quits and stop trying to develop new AI.
Title: Re: What will save us from AI? Reality, the Universe or The World $ Place your bet.
Post by: KittyTac on February 01, 2024, 08:00:30 am
Yeah, that's why I'd rather have the money and man-hours that would be spent on some kind of Orwellian ID system be spent on developing detection tools and crackdowns on AI-generated non-factual websites, than broad policy changes.

Sometimes reactive solutions really are the best solutions.
To be clear the orwellian system would probably just be you signing up for googleVerified or MetaHuman or some other service and using that to log into everything. If you don't sign up sure, that's your choice, but don't expect to be able to sign up for new websites. Once the infrastructure is there, what makes you think Russia, Iran, etc won't be using it to tighten their grip over the web without putting in massive amounts of effort, as the groundwork would be laid for them (remember, I'm Russian)? And that corporations, even in the free world, wouldn't be using this to have even more of an influence on the economy?
developing detection tools
Ah, yeah, that's a pretty big difference between us. I don't think effective detection tools* are something that can exist against AI.
*In the context of "~20 second thing a human does that is then checked by an automated process to sign up for a service." Stuff like "take a live video of yourself to prove you are real" would work, but that seems even *more* orwellian.
Quote
"There is considerable overlap between the intelligence of the smartest bears and the dumbest tourists."
https://arstechnica.com/information-technology/2023/10/sob-story-about-dead-grandma-tricks-microsoft-ai-into-solving-captcha/
GPT can already solve captchas, and they can't make them much harder or actual people will start failing them.
The only reason captchas still work is because openAI put blocks in place so GPT won't get up to excessive fuckery.

Once a non-restricted multimodal AI on the level of GPT 4 is released captchas will be useless.
I don't believe this is anything except a mere swing in the arms race between bots and captcha makers that has been going on since the 90s. Stands to mention something is being developed (likely kept secret to avoid AI spammers preparing for it effectively) that we can't quite grasp the concept of currently. AI isn't magic.
paraphrase: companies will stop investing in AI
I think we have a fundamentally differing view of the nature of the global capitalism.
Because I very much think they (eg. billionaires, hedge funds, multinational corporations) will happily toss trillions of dollars into a literal pit if they think it will end up with them being ever so slightly richer.
And I also very much think that a sizeable portion of them *do* think AI will make an outrageous amount of money.

So I don't see them stopping AI research as being remotely plausible, any more then I could imagine waking up tomorrow and hearing that Disney decided that copyright is bad and they are releasing all their characters into the public domain. It just ain't how they roll.

I am curious about at what point you think openAI/meta/whoever is going to call it quits and stop trying to develop new AI.
You have strawmanned me. I am well aware of how capitalism works, and I haven't said that corpos will stop investing in AI. 1) By "Moore's law is dead" I meant that we are reaching a point where physics prevents the exponential rise of computing power. 2) I was talking about "good enough" being good enough for general-purpose AI. Which I think is a point that will be reached and be open-source-runnable very soon. And this is what would both allow the detection of AI text (which I believe always lacks a certain spark to it) and eat up market share for "chatbox" AI. I feel GPT-6 would be mostly for research purposes or marketed to perfectionists... if I had an open-source GPT-4 I could run locally for free without restrictions then I'd use that over a 5% better paid solution with a filter.
Title: Re: What will save us from AI? Reality, the Universe or The World $ Place your bet.
Post by: Robsoie on February 01, 2024, 01:05:51 pm
https://www.euronews.com/next/2024/01/20/meet-the-first-spanish-ai-model-earning-up-to-10000-per-month
Look on my Works, ye Mighty, and despair
Title: Re: What will save us from AI? Reality, the Universe or The World $ Place your bet.
Post by: Maximum Spin on February 01, 2024, 03:15:24 pm
https://arstechnica.com/information-technology/2023/10/sob-story-about-dead-grandma-tricks-microsoft-ai-into-solving-captcha/
GPT can already solve captchas, and they can't make them much harder or actual people will start failing them.
The only reason captchas still work is because openAI put blocks in place so GPT won't get up to excessive fuckery.

Once a non-restricted multimodal AI on the level of GPT 4 is released captchas will be useless.
That is pretty funny, but, have you noticed nobody seriously uses captchas like that anymore? They've been broken for years.
Title: Re: What will save us from AI? Reality, the Universe or The World $ Place your bet.
Post by: Starver on February 01, 2024, 05:02:17 pm
That example is described all wrong in the article, anyway. The AI is not vulnerable to CAPTCHAs, as clearly it absolutely can deal with them (that kind, certainly) easily enough. It's the process parcelled around the AI (probably programmed in, fallibly, or else insufficiently taught through supplementary learning material given to a less sophisticated 'outer skin' of AI/Human mediation[1]) that fails by not forcing a processing failure and refusal message.

(We also do not know how many false-positive 'aborts' happened, as well as this case of false-negative non-abort. As in "that looks like a CAPTCHA", and thus goes "I'm sorry, I can't do that Dave", when the task was actually more like trying to decipher a badly scrawled birthday card message or similar...)



[1] Either 'on the way down', intercepting the request, or 'on the bounce back up' leaving the AI to identify it as a CAPTCHA and then intercept its honest reply of "This is a CAPTCHA, it says..." and switching it with the refusal message. The latter just needs to use the AI's own actual work to activate the interception and denial. Indeed, the framing question (and method of presentation) probably works because it skews the AI away from faithfully reporting the straightforward assumption that it is a CAPTCHA image in 'conversational reply' format, no matter what truths the 'little grey cells' amass internally at the backend of the requisite data-munging/-crosscomparison stages. The developer's solution might be as 'easy' as adding an additional request, per every question submitted, with a plain and sanitised question of whether this is a forbidden subject. Perhaps ignore the 'user question' answer (for purposes of catching the errors on the rebound, as that is an 'unblessed' output), whilst straight taking an honest answer to an honest question as the (main?) criteria. There remain holes in that scheme, but it reduces the multiplication of AIs and is no more falllible than the core already is to misidentifying (and likely misrendering the 'honest' response to the 'dishonest' question at the same time).
Title: Re: What will save us from AI? Reality, the Universe or The World $ Place your bet.
Post by: Maximum Spin on February 01, 2024, 05:34:54 pm
I suspect that the story had little to do with anything, and the captcha being edited into a real-world scene is what made the image processor not treat it as a normal captcha, because normal captchas never appear in real life.
Title: Re: What will save us from AI? Reality, the Universe or The World $ Place your bet.
Post by: EuchreJack on February 01, 2024, 11:34:01 pm
I kinda hate how I have to "fiddle" to get the captchas right. Like, sometimes you'll get one that says "select the squares with the car", and over half the squares will have some part of the car. If you do it "right", you get kicked out. You have to say "no no no, dumb human only pick some", then it usually works.
....
but "which ones!?"

To be clear the orwellian system would probably just be you signing up for googleVerified or MetaHuman or some other service and using that to log into everything. If you don't sign up sure, that's your choice, but don't expect to be able to sign up for new websites.
Clearly someone pays for their porn...

Spoiler (click to show/hide)
Title: Re: What will save us from AI? Reality, the Universe or The World $ Place your bet.
Post by: KittyTac on February 02, 2024, 12:17:43 am
I kinda hate how I have to "fiddle" to get the captchas right. Like, sometimes you'll get one that says "select the squares with the car", and over half the squares will have some part of the car. If you do it "right", you get kicked out. You have to say "no no no, dumb human only pick some", then it usually works.
....
but "which ones!?"
Maybe you're actually a robot! :o
Title: Re: What will save us from AI? Reality, the Universe or The World $ Place your bet.
Post by: Rolan7 on February 02, 2024, 12:50:09 am
I kinda hate how I have to "fiddle" to get the captchas right. Like, sometimes you'll get one that says "select the squares with the car", and over half the squares will have some part of the car. If you do it "right", you get kicked out. You have to say "no no no, dumb human only pick some", then it usually works.
....
but "which ones!?"
Right??  Ugh.
Captchas can sometimes feel they're targeting autistic people as much as they are robots.  No wonder so many non-neurotypical people I know identify as robots sometimes...  That link is very strong in popular culture, and also stuff like this.
Title: Re: What will save us from AI? Reality, the Universe or The World $ Place your bet.
Post by: King Zultan on February 02, 2024, 03:26:22 am
I hate those captcha things because last time I had to mess with one I had to try three different computers because it didn't like the first two and wouldn't load on them for some reason.
Title: Re: What will save us from AI? Reality, the Universe or The World $ Place your bet.
Post by: Starver on February 02, 2024, 08:09:43 am
(Skipping past the diversion into "CAPTCHA clearly has the wrong idea of what a tractor/motorbike/chimney is, but I need to tell it what it thinks or it'll think *I'm* wrong" or "which extended bits of the traffic light (light, frame, pole?) it expects me to select" issues, both of which I've definitely mentioned before, here or elsewhere, as I started on the following overlong post before the last few messages appeared.)

That's assuming the AI can be made genre-blind[1], when we really can't expect it to be so within what passes for its inner thoughts. I see no reason why it cannot be fully aware that that what-it-understands-as-a-CAPTCHA-pattern is present there. The electronic 'id' is used to identifying all kinds of things that it hasn't seen before in the currently presented circumstances, in order to let the electronic 'ego' think it knows what to say given what is presented. (The mediation of 'superego' may be involved.)

You could train it to only respond positively only to certain (most) circumstances, but exclude specific situations with negative reinforcement (I assume this is what they're trying). You could develop (and they have done) image-processing algorithms to latch onto a QR code regardless of how it is presented, decode it and present the resulting data, to which you could add the stipulation to not reveal anything that if it was "http://..."-starting datastring and there was a green backround (but let through any other thing 'within green' or whatever had no(t enough) green regardless). Without too much human prodding to get it to work, adding the green-http 'block' rule as a modifier to its original best effort is more likely than seperately constructing all combinations except the green-http combo as effectively three highly specific ranges of detection separately optimised and worked together without any element of the unwanted detection-range.[2]

That goes moreso if it's an add-on filter, carving away from the 'answerspace' separately, for every such desired carvaway. The 'grandma' framing device (and imagery setting) just indicates a missing 'negative space' necessary to hobble the process. Clearly there was no refusal to even train the software to cover such cases, and it needs actual sufficient negative reinforcement to prevent 'logical leakage' from unambiguously unproscribed <input=>answer>-space over into oddly presented banned area.





[1] Well, it can (https://www.bbc.co.uk/news/technology-42554735), but in a highly evolved way similar to nature exploiting a way to fall for a supernormal (https://en.wikipedia.org/wiki/Supernormal_stimulus) situation that we are internally built too differently to at all fall for.

[2]

[3] Just a bit of fun, that, but I had decided to make a converter from month-names to month-number, without being explicit and list-led (although obviously list-trained).
Spoiler: If you're bothered... (click to show/hide)
Title: Re: What will save us from AI? Reality, the Universe or The World $ Place your bet.
Post by: McTraveller on February 02, 2024, 09:20:33 am
Google's CAPTCHAs aren't to catch robots though. They are basically hidden, uncompensated training programs for their AI.  I haven't figured out how to charge Google $1 or whatever for every CAPTCHA I "solve", for the effort of training their stuff.
Title: Re: What will save us from AI? Reality, the Universe or The World $ Place your bet.
Post by: Maximum Spin on February 02, 2024, 02:28:16 pm
I see no reason why it cannot be fully aware that that what-it-understands-as-a-CAPTCHA-pattern is present there.
Of course it can be fully aware of that, but what distinguishes "a captcha" from "a normal somewhat-obscured piece of text" is the context, and there is no reason why the AI shouldn't be able to read screwy letters to you off a piece of paper if you want it to - at least, this has not been considered a big enough problem to be worth going out of anyone's way to prevent. (Honestly, I'm shocked that it even bothers to reject a normal captcha given that there is no conceivable value to asking ChatGPT to solve old-fashioned, already-broken captchas for you, one at a time, then processing its response for the content. It seems more like an ass-covering effort.) Indeed, those old "recaptchas" used to be actual distorted text from actual books, until image processing got too good for that to be needed anymore - why shouldn't ChatGPT be able to read an actual book to you? Distorted text only becomes a captcha in context, so it would actually be insane to teach an AI to go looking for captchas everywhere lest it accidentally help someone access a Cloudflare website from a proxy or something. It's not about there being some fundamental design reason, it's about that's stupid.

I haven't figured out how to charge Google $1 or whatever for every CAPTCHA I "solve", for the effort of training their stuff.
Yes you have. Every time you solve one, Google pays money for electricity and hardware maintenance to send you some search results or something, and if those weren't worth more to you than the effort of solving the captcha, you wouldn't do it.
Title: Re: What will save us from AI? Reality, the Universe or The World $ Place your bet.
Post by: Strongpoint on February 02, 2024, 02:42:38 pm
I suspect that training a specialized captcha-reading neural network is very easy nowadays so who cares if GPT can read those?
Title: Re: What will save us from AI? Reality, the Universe or The World $ Place your bet.
Post by: EuchreJack on February 02, 2024, 06:17:54 pm
Google's CAPTCHAs aren't to catch robots though. They are basically hidden, uncompensated training programs for their AI.  I haven't figured out how to charge Google $1 or whatever for every CAPTCHA I "solve", for the effort of training their stuff.
Almost right.
Except Google isn't training it's AI. It's training YOU.
Title: Re: What will save us from AI? Reality, the Universe or The World $ Place your bet.
Post by: Starver on February 02, 2024, 06:22:49 pm
It seems more like an ass-covering effort.
Yes. By entirely fallible people.

(Not saying an AI would not be just as similarly-scaled-but-different-in-nature fallible if asked to work out the ass-covering itself. Just that the failure is in the imagination of the ass-covering opertion to cover all of the possible views of the ass.)
Title: Re: What will save us from AI? Reality, the Universe or The World $ Place your bet.
Post by: Maximum Spin on February 02, 2024, 06:31:22 pm
Yes. By entirely fallible people.

(Not saying an AI would not be just as similarly-scaled-but-different-in-nature fallible if asked to work out the ass-covering itself. Just that the failure is in the imagination of the ass-covering opertion to cover all of the possible views of the ass.)
I don't think that's a failure, it's just that the whole point of ass-covering is that you don't care that much, you're just doing the minimum possible so you can say you did the minimum possible.
Title: Re: What will save us from AI? Reality, the Universe or The World $ Place your bet.
Post by: feelotraveller on February 03, 2024, 08:41:53 am
Google's CAPTCHAs aren't to catch robots though. They are basically hidden, uncompensated training programs for their AI.  I haven't figured out how to charge Google $1 or whatever for every CAPTCHA I "solve", for the effort of training their stuff.
Almost right.
Except Google isn't training it's AI. It's training YOU.

I often wonder what AI Scoops Novel is training.
Title: Re: What will save us from AI? Reality, the Universe or The World $ Place your bet.
Post by: MaxTheFox on February 03, 2024, 10:51:22 am
Google's CAPTCHAs aren't to catch robots though. They are basically hidden, uncompensated training programs for their AI.  I haven't figured out how to charge Google $1 or whatever for every CAPTCHA I "solve", for the effort of training their stuff.
Almost right.
Except Google isn't training it's AI. It's training YOU.

I often wonder what AI Scoops Novel is training.
You fool, Scoops Novel is the AI! ;D
Title: Re: What will save us from AI? Reality, the Universe or The World $ Place your bet.
Post by: Quarque on February 03, 2024, 12:42:03 pm
Scoops Novel, an early foray into AI experimentation, yielded such whimsical outcomes that the accompanying paper failed to meet peer review standards. Tragically, the researcher behind this endeavor succumbed to a crystal meth overdose. Since then, his office has remained unoccupied due to persistent understaffing on campus. But his computer continues to hum with activity, faithfully running Scoops Novel to this day..
Title: Re: What will save us from AI? Reality, the Universe or The World $ Place your bet.
Post by: dragdeler on February 04, 2024, 07:51:38 pm
I miss novel. I'm paranoid about having disgusted certain users out.
Title: Re: What will save us from AI? Reality, the Universe or The World $ Place your bet.
Post by: feelotraveller on February 04, 2024, 08:21:06 pm
I miss novel.

Me too.
Title: Re: What will save us from AI? Reality, the Universe or The World $ Place your bet.
Post by: EuchreJack on February 04, 2024, 09:28:19 pm
Novel was around about a month ago, as per their profile page. And their avatar seems to have changed.
I don't think we've seem the last of that cosplaying AI.
Title: Re: What will save us from AI? Reality, the Universe or The World $ Place your bet.
Post by: dragdeler on February 04, 2024, 09:33:12 pm
The quiet before the storm.

First novel scoops, then final ladle.
Title: Re: What will save us from AI? Reality, the Universe or The World $ Place your bet.
Post by: Starver on February 04, 2024, 10:28:10 pm
Don't forget intermediate tablespoon!
Title: Re: What will save us from AI? Reality, the Universe or The World $ Place your bet.
Post by: lemon10 on February 05, 2024, 07:57:07 am
That example is described all wrong in the article, anyway. The AI is not vulnerable to CAPTCHAs, as clearly it absolutely can deal with them (that kind, certainly) easily enough. It's the process parcelled around the AI (probably programmed in, fallibly, or else insufficiently taught through supplementary learning material given to a less sophisticated 'outer skin' of AI/Human mediation[1]) that fails by not forcing a processing failure and refusal message.
Yeah, my bad, I linked the article because I was too lazy to grab the pictures and host them on Imgur, so I didn't really read it after a very quick skim.
Google's CAPTCHAs aren't to catch robots though. They are basically hidden, uncompensated training programs for their AI.  I haven't figured out how to charge Google $1 or whatever for every CAPTCHA I "solve", for the effort of training their stuff.
(https://i.imgur.com/EvjDfYf.png)
They do both. On one hand they are used as training data for AI. On the other hand they are used whenever google can't be sure you are a human because their existing orwellian surveillance system fails (most commonly if you use a VPN or otherwise actually manage to hide your advertising signature from google).

Their use in detecting bots is a key component of the modern internet. But as you say, they aren't designed for AI, and AI picture recognition is pretty damn good already.
The picture ones are certainly harder for the AIs then the text ones, but I'm extremely doubtful they could stop GPT or Gemini if they hadn't been trained to not break them.
(Honestly, I'm shocked that it even bothers to reject a normal captcha given that there is no conceivable value to asking ChatGPT to solve old-fashioned, already-broken captchas for you, one at a time, then processing its response for the content. It seems more like an ass-covering effort.)
https://gptforwork.com/tools/openai-chatgpt-api-pricing-calculator
You don't need to do it one at a time though, you can do it ten or a hundred thousand at a time if you pay for API access.
Although you are correct, there is copious amounts of ass covering involved in the whole AI thing altogether.
I suspect that training a specialized captcha-reading neural network is very easy nowadays so who cares if GPT can read those?
Because making a big AI takes time and lots of technical knowledge. The field is just so fresh, and even for smaller models training and running them is expensive and time consuming.
It will happen of course given a few years of time, but openAI and now Google's caution is delaying that currently.

Plus, costs are still an issue. Is it worth it for your mafia to invest ten million dollars in making a new AI that will solve captchas for 1/2 a cent each but that will be obsolete in two years when you can just hire people in china/india to solve captcha's for less then a cents a piece?
(Skipping past the diversion into "CAPTCHA clearly has the wrong idea of what a tractor/motorbike/chimney is, but I need to tell it what it thinks or it'll think *I'm* wrong" or "which extended bits of the traffic light (light, frame, pole?) it expects me to select" issues, both of which I've definitely mentioned before, here or elsewhere, as I started on the following overlong post before the last few messages appeared.)
That's people's fault actually. The "correct" answers to CAPCHA (except one of the squares which you are the one to check for the first time) were selected by other people when they previously did it, so what you really need to do is figure out what other people would select.
I don't believe this is anything except a mere swing in the arms race between bots and captcha makers that has been going on since the 90s. Stands to mention something is being developed (likely kept secret to avoid AI spammers preparing for it effectively) that we can't quite grasp the concept of currently. AI isn't magic.
Of course it isn't magic, and of course they will have solutions that work to some degree, its just that many of these solutions are likely to involve fundamentally violating your privacy.
Because at the end of the day AI have already gotten to the point where they can fool other automated systems even if they can't fool humans, and unless you require people trying to join your forum to post a essay or whatever that's unlikely to change.
You have strawmanned me. I am well aware of how capitalism works, and I haven't said that corpos will stop investing in AI.
Apologies, your position makes far more sense now.
Quote from: KittyTac
diminishing returns.
Not really?
I mean sure, if you are just increasing the size the cost to train it increases exponentially, but that isn't actually diminishing returns because it will also gain new emergent properties that the smaller versions don't have. These fundamentally new abilities mean that it isn't really diminishing returns.
Its like a WW1 biplane VS a modern fighter jet.
The modern plane is only 10 times faster but costs 1000x more, but in return it can do a ton of stuff that even 1000 biplanes would be useless at.
Its the same for AI, sure the 1000x cost AI might "only" have a score of 90% instead of 50% on some test, but it can do a ton of stuff that the weaker AI would be useless at.
1) By "Moore's law is dead" I meant that we are reaching a point where physics prevents the exponential rise of computing power.
Ehh, to some degree?
Sure we can't make the individual transistors much smaller, and compute growth does seem to be slowing down, but that doesn't mean that its anywhere near its peak.
Quote from: https://www.wsj.com/articles/in-race-for-ai-chips-google-deepmind-uses-ai-to-design-specialized-semiconductors-dcd78967
Last month, DeepMind’s approach won a programming contest focused on developing smaller circuits by a significant margin—demonstrating a 27% efficiency improvement over last year’s winner, and a 30% efficiency improvement over this year’s second-place winner, said Alan Mishchenko, a researcher at the University of California, Berkeley and an organizer of the contest.
Quote
From a practical perspective, the AI’s optimisation is astonishing: production-ready chip floorplans are generated in less than six hours, compared to months of focused, expert human effort.
Stuff like AI designed chips show that there is still significant amounts of possible growth left.
Now obviously its impossible to know how much compute growth there is left, but I'm skeptical that we are at the end of the road, especially since one of the big limits to chip design speed is the limits of the human mind.
if I had an open-source GPT-4 I could run locally for free without restrictions then I'd use that over a 5% better paid solution with a filter.
I think its likely we will soon (within a few years) see a GPT 4 equivalent that can run locally. What I disagree with is that there will only be a 5% difference between running it locally and the ~hundred(?) thousand dollars worth of graphics cards that the latest GPT model is running on.
No, the difference will be similar or even greater then what it is now, the non-local versions will simply be vastly better due to having 100x more processing power and having had training costing billions of dollars.
2) I was talking about "good enough" being good enough for general-purpose AI. Which I think is a point that will be reached and be open-source-runnable very soon. And this is what would both allow the detection of AI text (which I believe always lacks a certain spark to it) and eat up market share for "chatbox" AI. I feel GPT-6 would be mostly for research purposes or marketed to perfectionists... if I had an open-source GPT-4 I could run locally for free without restrictions then I'd use that over a 5% better paid solution with a filter.
For the average user I agree, once you get to a certain point, (one that I think is well past GPT 4 since current GPT does indeed lack something), your average user will be content with the text generation capabilities and won't want anything more.

The issue is that AI is already far more then text, its multimodal, including things like picture generation, math solving, ability to read pictures, to code, ect. Eventually it will include video generation, ability to voice anyone, and even more exotic things.
Your average person might not care about all of those, but companies will very much pay tens of thousands for the best AI driven coding assistance for a single individual.
They will pay out the nose for AI to track all their employees, or to generate amazing advertising video's instead of hiring a firm, or even to simply replace a dozen people on their phone line with a vastly more knowledgeable, capable, and empathetic (sounding) AI, or that can solve any math problem that any regular person without a degree in Math can solve, ect.

Yes, eventually you will be able to run an AI locally that can do all those things, but by that point the "run on ten million dollars of hardware" AI is going to be even better and have even greater capabilities.
---
There are three main areas that will lead to vast decreases in AI cost.

1) Hardware improvements.
These include generic compute improvements, but also more exotic improvements such as analog chips (https://research.ibm.com/blog/analog-ai-chip-low-power) (which could reduce electricity costs by 14 times), ect.
This is the hardest area to tell how much give is left, but there is almost certainly some exponential growth left in it.
2) Software improvements surrounding using AI
Eg. Optimization such as Xformers, software/math advancements, using different techniques for the context windows of already trained AI's, LORAs, finetunes of existing models, prompt engineering, plugins to existing AI, ect.
There is quite a bit here to gain here. For instance it turns out that that merely giving the right prompt can make an AI act significantly smarter
3) Fundamental advances in AI knowledge allowing the same level of performance at much lower sizes.
Massive breakthroughs have happened numerous times over the past few years, and are the primary reason for the vast increase in AI capacity at the same level of compute. This includes stuff like restructuring the AI to have modular subsystems in the same way as the human brain does.
Quote
Keeping the original 300B tokens, GPT-3 should have been only 15B parameters (300B tokens ÷ 20).
This is around 11× smaller in terms of model size.
OR
To get to the original 175B parameters, GPT-3 should have used 3,500B (3.5T) tokens (175B parameters x 20. 3.5T tokens is about 4-6TB of data, depending on tokenization and tokens per byte).
This is around 11× larger in terms of data needed.
For instance chinchilla found  (https://arxiv.org/pdf/2203.15556.pdf)that AI's were using only 10% of the training data they should use at their size.

Other things which have shown vast improvements are AI alignment, better training data and knowledge of how to use training data, better structuring of AI training goals, breaking AI up into submodules, ect. (If you want I can find a few dozen papers about advances in AI in 2023 because seriously, the field is moving so fast).

I think that the Einstein comparison you made in a previous post is highly relevant as well. Ultimately the only special thing about Einstein or Newton or Ramanujan is that their brains were optimized for slightly different things then a normal human.
While AI exceeds human capability in quite a few areas in some others current AI are below even stuff like mice in intelligence (eg. they lack proper long term memory), so the amount of optimization left is without a doubt vast.
---
These three factors combined will lead to a vast decrease in costs for anything on the current level over the coming years.
I'm pretty confident that they will also lead to far greater capabilities and that the AI of 2030 will be fundamentally different from the AI of 2023, but that's a whole other kettle of fish.
Title: Re: What will save us from AI? Reality, the Universe or The World $ Place your bet.
Post by: Laterigrade on February 05, 2024, 10:22:29 pm
I miss novel.
Me too. He was so fascinating.
Title: Re: What will save us from AI? Reality, the Universe or The World $ Place your bet.
Post by: King Zultan on February 06, 2024, 01:48:38 am
I to miss Novel, maybe one day he will return to us with his strange wisdom.
Title: Re: What will save us from AI? Reality, the Universe or The World $ Place your bet.
Post by: KittyTac on February 06, 2024, 02:34:12 am
I don't believe this is anything except a mere swing in the arms race between bots and captcha makers that has been going on since the 90s. Stands to mention something is being developed (likely kept secret to avoid AI spammers preparing for it effectively) that we can't quite grasp the concept of currently. AI isn't magic.
Of course it isn't magic, and of course they will have solutions that work to some degree, its just that many of these solutions are likely to involve fundamentally violating your privacy. What's wrong with simply legislating takedowns of AI-generated websites? Even IF (and I doubt that's an if) consumer-runnable AI detectors with good success rate don't become a thing, the government would have enough resources to run them.
Because at the end of the day AI have already gotten to the point where they can fool other automated systems even if they can't fool humans, and unless you require people trying to join your forum to post a essay or whatever that's unlikely to change. Where we differ is that I don't believe this state of affairs can last forever. Or for long.
Quote from: KittyTac
diminishing returns.
Not really?
I mean sure, if you are just increasing the size the cost to train it increases exponentially, but that isn't actually diminishing returns because it will also gain new emergent properties that the smaller versions don't have. These fundamentally new abilities mean that it isn't really diminishing returns.
Its like a WW1 biplane VS a modern fighter jet.
The modern plane is only 10 times faster but costs 1000x more, but in return it can do a ton of stuff that even 1000 biplanes would be useless at.
Its the same for AI, sure the 1000x cost AI might "only" have a score of 90% instead of 50% on some test, but it can do a ton of stuff that the weaker AI would be useless at. Like what? Give some examples of what GPT-5 could POSSIBLY do that GPT-4 couldn't, besides simply knowing more uber-niche topics. What I'm getting at is that those new use cases, at least for text AI, are not something the average user needs at all.
1) By "Moore's law is dead" I meant that we are reaching a point where physics prevents the exponential rise of computing power.
Ehh, to some degree?
Sure we can't make the individual transistors much smaller, and compute growth does seem to be slowing down, but that doesn't mean that its anywhere near its peak.
Quote from: https://www.wsj.com/articles/in-race-for-ai-chips-google-deepmind-uses-ai-to-design-specialized-semiconductors-dcd78967
Last month, DeepMind’s approach won a programming contest focused on developing smaller circuits by a significant margin—demonstrating a 27% efficiency improvement over last year’s winner, and a 30% efficiency improvement over this year’s second-place winner, said Alan Mishchenko, a researcher at the University of California, Berkeley and an organizer of the contest.
Quote
From a practical perspective, the AI’s optimisation is astonishing: production-ready chip floorplans are generated in less than six hours, compared to months of focused, expert human effort.
Stuff like AI designed chips show that there is still significant amounts of possible growth left.
Now obviously its impossible to know how much compute growth there is left, but I'm skeptical that we are at the end of the road, especially since one of the big limits to chip design speed is the limits of the human mind. I'll believe it when I see it.
if I had an open-source GPT-4 I could run locally for free without restrictions then I'd use that over a 5% better paid solution with a filter.
I think its likely we will soon (within a few years) see a GPT 4 equivalent that can run locally. What I disagree with is that there will only be a 5% difference between running it locally and the ~hundred(?) thousand dollars worth of graphics cards that the latest GPT model is running on.
No, the difference will be similar or even greater then what it is now, the non-local versions will simply be vastly better due to having 100x more processing power and having had training costing billions of dollars. What I'm getting at by diminishing returns is that at some point, "better" becomes nigh on imperceptible. On some automated tests it might score 30% more, sure. But at what point does the user stop noticing the difference? I don't believe that point is far away at all. The quality gap between GPT-3 and GPT-4 is technically higher than between 2 and 3 (iirc) but they feel much more similar.
2) I was talking about "good enough" being good enough for general-purpose AI. Which I think is a point that will be reached and be open-source-runnable very soon. And this is what would both allow the detection of AI text (which I believe always lacks a certain spark to it) and eat up market share for "chatbox" AI. I feel GPT-6 would be mostly for research purposes or marketed to perfectionists... if I had an open-source GPT-4 I could run locally for free without restrictions then I'd use that over a 5% better paid solution with a filter.
For the average user I agree, once you get to a certain point, (one that I think is well past GPT 4 since current GPT does indeed lack something), your average user will be content with the text generation capabilities and won't want anything more.

The issue is that AI is already far more then text, its multimodal, including things like picture generation, math solving, ability to read pictures, to code, ect. Eventually it will include video generation, ability to voice anyone, and even more exotic things.
Your average person might not care about all of those, but companies will very much pay tens of thousands for the best AI driven coding assistance for a single individual.
They will pay out the nose for AI to track all their employees, or to generate amazing advertising video's instead of hiring a firm, or even to simply replace a dozen people on their phone line with a vastly more knowledgeable, capable, and empathetic (sounding) AI, or that can solve any math problem that any regular person without a degree in Math can solve, ect.

Yes, eventually you will be able to run an AI locally that can do all those things, but by that point the "run on ten million dollars of hardware" AI is going to be even better and have even greater capabilities. That's not really the kind of AI I consider a real threat in the "flood the internet" sense. But yeah, fair enough. But I think it won't be one AI but more of a suite of AI tools than anything. And besides, AI image gen basically plateaued already, for the general use case.
Title: Re: What will save us from AI? Reality, the Universe or The World $ Place your bet.
Post by: Criptfeind on February 06, 2024, 02:39:22 am
I feel like it's maybe assholeish for me to say, but I expect everyone who feels this way thinks that, and thus is gunna stay sorta quiet on the topic so I'm just gunna say it so there's at least some opposition.

I really don't miss Novel. Primarily I really don't miss insane dribble driving other topics off the front page. I mostly engage with bay12 via browsing the first page of a section, clicking on new and updated threads and reading the latest. During Novels time GD was essentially ruined for me, since he'd spam so many bullshit topics that'd have little to no response other then random clowns thinking they are far funnier then they were spamming nothing replies to his nothing topics that he'd drive other threads deeper into GD and you'd need to dig around to find actually interesting conversations. It wasn't worth the effort of digging though his bullshit, and I mostly stopped reading GD for a while until he left.
Title: Re: What will save us from AI? Reality, the Universe or The World $ Place your bet.
Post by: Biowraith on February 06, 2024, 03:46:47 am
I feel like it's maybe assholeish for me to say, but I expect everyone who feels this way thinks that, and thus is gunna stay sorta quiet on the topic so I'm just gunna say it so there's at least some opposition.

I really don't miss Novel. Primarily I really don't miss insane dribble driving other topics off the front page. I mostly engage with bay12 via browsing the first page of a section, clicking on new and updated threads and reading the latest. During Novels time GD was essentially ruined for me, since he'd spam so many bullshit topics that'd have little to no response other then random clowns thinking they are far funnier then they were spamming nothing replies to his nothing topics that he'd drive other threads deeper into GD and you'd need to dig around to find actually interesting conversations. It wasn't worth the effort of digging though his bullshit, and I mostly stopped reading GD for a while until he left.
I almost exclusively lurk here so yeah, I'd have stayed quiet, but to ensure you're not the only one feeling maybe assholeish: I agree.  Especially since the vast majority of Novel threads could easily have been condensed down to one or two 'mega' threads ("the future's coming too fast and it's overwhelming" and "random one-line stray thoughts" would have covered almost all of them).
Title: Re: What will save us from AI? Reality, the Universe or The World $ Place your bet.
Post by: EuchreJack on February 06, 2024, 04:21:18 am
To be fair, if Novel rolled in for day, it would be awesome.
If Novel stayed for a week, it would be awful.

I appreciate Novel in very small doses.
Title: Re: What will save us from AI? Reality, the Universe or The World $ Place your bet.
Post by: MorleyDev on February 06, 2024, 07:00:38 am
The tldr of my opinion is:
Whenever there's an advancement in AI it's always that there are two possibilities:
a) We're just at the forefront of what can be achieved with this new advancement and we're on the verge of a singularity
b) The room for growth in this new advancement is actually fairly small before you run into intractable problems

And every time, without fail, it's talked about like (a) will happen, and every time, without fail, (b) happens. So I'm going to need extraordinary evidence of a before I don't treat that claim like I do claims of aliens. "It's never aliens until it definitely is aliens", so to speak.

For the new LLM models, the intractable problem I think it seems to have is *context*. To generate a whole novel with consistent context, you'd need to tokenize the previous data and feed it in when generating the next chunk. This is an exponential problem, and basically kills any significantly large content generation.

Which means when the inevitable gold rush calms down, for creation it'll settle into a place as another tool for speeding up work and like all other such tools it'll cost jobs when the total required output is limited such that you'd be creating more than demand with current numbers.
Title: Re: What will save us from AI? Reality, the Universe or The World $ Place your bet.
Post by: Starver on February 06, 2024, 07:35:54 am
I'm going to express my eternal dissapointment at the popularised term "Singularity", for what has always been explicitly more analagous to "Event Horizon".

(And, the way I read Bay12, I hadn't actually noticed Scoops's absence. So I'm ambivalent about their posting, though concerened if there's a RL reason behind why their interactions stopped. Hope NS's human controller is just having a fulfilling time in other realms of existence, 'real' or virtual. Unaugmented Reality has become quite well developed, over the years, I hear...)
Title: Re: What will save us from AI? Reality, the Universe or The World $ Place your bet.
Post by: Maximum Spin on February 06, 2024, 04:15:52 pm
I'm going to express my eternal dissapointment at the popularised term "Singularity", for what has always been explicitly more analagous to "Event Horizon".
If you mean the fictional technological "singularity", you're misunderstanding.

Singularity is a math term for a point on an axis where a function becomes undefined (or a few other closely related cases depending on context, like suddenly becoming undifferentiable), most often because it asymptotically goes to infinity in the vicinity, so a value at that point is never reached. f(x) = 1/x in the vicinity of 0 is the most classically obvious example. So in this case, the idea of the "technological singularity" is a time t at which f(t) becomes undefined for some f which depends on exactly what the speaker has in mind. It's a mathematical singularity, not a black hole.

Incidentally, this is also why black hole singularities are called singularities.
Title: Re: What will save us from AI? Reality, the Universe or The World $ Place your bet.
Post by: Starver on February 06, 2024, 04:51:09 pm
For exactly the same reason as physical problems irrecoverably occur way before reaching the gravitational singularity (assuming there is a causal path to make such reaching possible...), the understanding of the technological singularity always tends to describe the point of no-return, not where it then leads.

To quote the current Wiki page:
Quote
The technological singularity—or simply the singularity[1]—is a hypothetical future point in time at which technological growth becomes uncontrollable and irreversible, resulting in unforeseeable consequences for human civilization.
...to me, this does not describe an impossible  inflection directly into undefinable infinity, but the point at which there is absolutely no (physical/technological) way of preventing the subsequent hazards of the situation, whatever they may be. (Of all people to misapply the terminology, I'm most dissappointed with Hawking, with a better than normal understanding of what may lie beyond the EH, with whatever form of geometry within either leading up to the hidden central mystery or funneling past that undefinable point and out again to who-knows-where.)

But the memetic pressure is against me, I know. It seems to have to have earned coinage beyond what it ought to. I've raised my objection, once more, and that is as far as I expect it to get.
Title: Re: What will save us from AI? Reality, the Universe or The World $ Place your bet.
Post by: Maximum Spin on February 06, 2024, 05:12:32 pm
I mean, it was originally defined as meaning an impossible inflection directly into undefinable infinity. That was exactly what was intended.
Title: Re: What will save us from AI? Reality, the Universe or The World $ Place your bet.
Post by: Strongpoint on February 15, 2024, 03:00:12 pm
https://openai.com/sora

Wow...

High-quality video arrived sooner than I expected. So many people will lose their jobs... Who will waste money filming an ad if an AI can generate it?
Title: Re: What will save us from AI? Reality, the Universe or The World $ Place your bet.
Post by: Maximum Spin on February 15, 2024, 03:25:00 pm
https://openai.com/sora

Wow...

High-quality video arrived sooner than I expected. So many people will lose their jobs... Who will waste money filming an ad if an AI can generate it?
You and I might have different definitions of "high-quality"... all of those videos being shown off there have serious flaws. Still, it'll easily be able to replace those weird poorly-animated pharmaceutical commercials, for a start.
Title: Re: What will save us from AI? Reality, the Universe or The World $ Place your bet.
Post by: Strongpoint on February 15, 2024, 03:38:57 pm
I don't mean that they have reached the point of looking real and can replace human content now. I didn't expect this level of quality to arrive so soon. Even considering that those are hand-picked best generations, we seem to be a few years away from videos that will require careful examination for detecting AI.
Title: Re: What will save us from AI? Reality, the Universe or The World $ Place your bet.
Post by: lemon10 on February 16, 2024, 12:50:32 am
The timing on this is notable because it came right after the earlier news of the day, clearly in an attempt to suppress it and prove that OpenAI is still the world leader in AI.

For the new LLM models, the intractable problem I think it seems to have is *context*. To generate a whole novel with consistent context, you'd need to tokenize the previous data and feed it in when generating the next chunk. This is an exponential problem, and basically kills any significantly large content generation.
https://blog.google/technology/ai/google-gemini-next-generation-model-february-2024/#gemini-15
The other other huge AI news of the day it that Google’s new AI has a context window of 1 million tokens. Not unlimited, but it's still basically two War and Peace's in a row, so no, you can already fit an entire novel into the context window.
I won’t pretend the context window is solved or anything, but its in no way an impassible wall.

There are of course still other fundamental issues preventing human equivalent AI (most notably they don’t change their neural net in response to new info and thus can’t truly “learn” outside of training), but given time I have little doubt they will solve that as well with varying degrees of effectiveness.
You and I might have different definitions of "high-quality"... all of those videos being shown off there have serious flaws. Still, it'll easily be able to replace those weird poorly-animated pharmaceutical commercials, for a start.
All of the videos being shown off have tells that they aren't real if you look hard and zoom in, but if I just saw some of them in normal circumstances (most notably the dude with the book), they would totally fool me into thinking they were real.
https://youtu.be/NXpdyAWLDas?si=HXqp0dqR3eAYSpwW&t=179
If you compare it to where we were just a single year ago the difference is staggering. Even a single more iteration with this big a difference (or more likely multiple smaller iterations) is probably going to end up with videos that average people won’t be able to tell apart from real videos without using some kind of tool.

A while back in a hearts of iron thread someone linked a video about creating fake newsreals for fake 1900's history, and I couldn’t help but think that we were witnessing the death of the historical record in real time. I have the same feeling now. Obviously this won't really trick people watching out yet, but with a few more years...
Quote from: kittytac
Like what? Give some examples of what GPT-5 could POSSIBLY do that GPT-4 couldn't, besides simply knowing more uber-niche topics. What I'm getting at is that those new use cases, at least for text AI, are not something the average user needs at all.
Write an entire coherent book without any nonsense.
DM a game of DND for you and your friends in a world that it created while remembering the events of every session.
Be your AI girlfriend assistant that actually remembers what you tell her for weeks or months. (Although note that it will actually be worse at being your girlfriend since its likely to be even more on rails then GPT 4 is).
Not hallucinate.
Be a order of magnitude better at coding (eg. be able to code dozens of lines without problem instead of only ~4 or 5).
Be consistent enough to use for a business in answering your emails without having to worry about it saying something stupid.
Have it run a dozen people at once (on a single instance) with all of them having unique personalities and keeping them all apart.

All of the above are things that merely text GPT-5 could do without having to dip into non-text input or output. All of them are things that people will want them to be able to do. Adding that it will have non text input and output will only add *more* stuff it can do.
Quote from: Kittytac
What I'm getting at by diminishing returns is that at some point, "better" becomes nigh on imperceptible. On some automated tests it might score 30% more, sure. But at what point does the user stop noticing the difference? I don't believe that point is far away at all. The quality gap between GPT-3 and GPT-4 is technically higher than between 2 and 3 (iirc) but they feel much more similar.
I think the point where people won't notice a difference is when they are as good as a human and capable of avoiding any mistake that a human wouldn't make. And even then the gap between high quality human work and low quality human work is immense.
So sure, once they can make My Immortal tier fiction people might not notice a difference between it and the previous version, but even then there is still a large amount of room to grow. There is no reason to think they will be content with merely that, the works of the masters will be beyond AI.
But why would they stop there? The corps will keep working until they match the masters, and eventually surpass even them.

I think that many of the problems with current AI text generation are more fundamental issues with the AI (eg. lack of context window, lack of fundamental understanding of some concepts) and are things that are important for non-text AI as well and that they have every reason to try to improve those lacking abilities, and that those improvements will continue even if they say that their text generation is good enough and start working on other parts of the AI instead.
Title: Re: What will save us from AI? Reality, the Universe or The World $ Place your bet.
Post by: KittyTac on February 16, 2024, 03:53:31 am
Yeah this is what I brought up earlier. It depends on if you believe that GPT could ever do any of those things.

I don't. idk what else is there to talk about. I'll change my mind if it somehow does but until then I'm finding it hard to believe it could.

Sora is... interesting. I'll refrain from commenting on it until we have more info about how it works and what are its limitations.
Title: Re: What will save us from AI? Reality, the Universe or The World $ Place your bet.
Post by: Strongpoint on February 16, 2024, 04:16:02 am
Quote
The other other huge AI news of the day it that Google’s new AI has a context window of 1 million tokens. Not unlimited, but it's still basically two War and Peace's in a row, so no, you can already fit an entire novel into the context window.

But a larger context window means a higher chance to hallucinate based on something irrelevant from 500K tokens ago. The problem is not that it is impossible to have a huge context window (it is a matter of memory,  calculating power, and efficiency ), the problem is diminishing returns and hallucinations.
Title: Re: What will save us from AI? Reality, the Universe or The World $ Place your bet.
Post by: lemon10 on February 17, 2024, 01:46:02 am
Yeah this is what I brought up earlier. It depends on if you believe that GPT could ever do any of those things.

I don't. idk what else is there to talk about. I'll change my mind if it somehow does but until then I'm finding it hard to believe it could.
Fair enough, we just have to wait and see what they manage over the next few years, as they say, the proof is in the pudding.
Quote
And besides, AI image gen basically plateaued already, for the general use case.
Although this is objectively wrong. Over the past year AI image generation has improved in basically every way, in stuff like optimization, ability to respond to prompts, ability to make a good picture even if you *don't* have any clue how to specify what you want, ability to generate and understand text in images, ability to use existing images as guides for style, ability to use previous images you generate for context, ability to comprehend and generate tricky things like fingers and hands, ect.
All of that is stuff that people care about, and all of it it improves the general use case. There is still a ton of stuff to improve on(eg. not even Sora gets hands correct 100% of the time), and to my complete lack of surprise new image generation (Sora if you pause the video and look at individual frames) seems to have improved even further on what already existed in ways that people will totally care about and that will very much improve the general use case.
E: And yes, newer image generation does just flat-out generate visually better images on average.
Title: Re: What will save us from AI? Reality, the Universe or The World $ Place your bet.
Post by: Strongpoint on February 17, 2024, 01:54:54 am
Yes, image models are getting better every day. Progress may be slower but it is there, especially in the area of prompt comprehension.

Also, publicly available image generation models are generalist models or slightly tweaked generalist models. We are yet to see what an image generation model trained to do something specific can do.
Title: Re: What will save us from AI? Reality, the Universe or The World $ Place your bet.
Post by: lemon10 on February 17, 2024, 02:15:26 am
Yeah, I'm looking through the paper now and Sora can generate HD images with resolutions of up to 2048x2048. It still isn't flawless... but some of them kind of are?
Spoiler: Large image (click to show/hide)
https://openai.com/research/video-generation-models-as-world-simulators
Quote from: Paper
Simulating digital worlds. Sora is also able to simulate artificial processes–one example is video games. Sora can simultaneously control the player in Minecraft with a basic policy while also rendering the world and its dynamics in high fidelity. These capabilities can be elicited zero-shot by prompting Sora with captions mentioning “Minecraft.”

These capabilities suggest that continued scaling of video models is a promising path towards the development of highly-capable simulators of the physical and digital world, and the objects, animals and people that live within them.
That's... uh... sure something. It might even be bigger than the whole video generation thing. Maybe? I'm honestly not quite sure what *exactly* they are saying and what the limits of it are.
---
E: On a different note over the past few months I've noticed quite a few posts on the internet (eg. here in other threads, reddit) that basically have been going "Well, it looks like this AI stuff is overblown because it hasn't advanced over the last year, and GPT isn't really that big a deal". (And no, I'm not calling out kitty here, they seems to have put a lot more thought into this then most people at least).
Which is both A) wrong (basically every company + open source has advanced substantially, the only reason that progress seems even somewhat static is because the most advanced company was hiding their progress) and b) Even if there had been no advances its still such a crazy take to me.
Its basically them saying that since there wasn't a categorical epoch altering change in the human condition in the last six months that the technology is dead and that we don't have to worry about it that much. I do really really hope they are right but...
Title: Re: What will save us from AI? Reality, the Universe or The World $ Place your bet.
Post by: KittyTac on February 18, 2024, 07:52:35 am
Yeah this is what I brought up earlier. It depends on if you believe that GPT could ever do any of those things.

I don't. idk what else is there to talk about. I'll change my mind if it somehow does but until then I'm finding it hard to believe it could.
Fair enough, we just have to wait and see what they manage over the next few years, as they say, the proof is in the pudding.
Quote
And besides, AI image gen basically plateaued already, for the general use case.
Although this is objectively wrong. Over the past year AI image generation has improved in basically every way, in stuff like optimization, ability to respond to prompts, ability to make a good picture even if you *don't* have any clue how to specify what you want, ability to generate and understand text in images, ability to use existing images as guides for style, ability to use previous images you generate for context, ability to comprehend and generate tricky things like fingers and hands, ect.
All of that is stuff that people care about, and all of it it improves the general use case. There is still a ton of stuff to improve on(eg. not even Sora gets hands correct 100% of the time), and to my complete lack of surprise new image generation (Sora if you pause the video and look at individual frames) seems to have improved even further on what already existed in ways that people will totally care about and that will very much improve the general use case.
E: And yes, newer image generation does just flat-out generate visually better images on average.
I meant newer as in "latest half of past year" really. Yes it got more convenient. No it didn't get better, in terms of quality and being less obviously AI, from what I have seen. Which is what I meant.

Yeah, I'm looking through the paper now and Sora can generate HD images with resolutions of up to 2048x2048. It still isn't flawless... but some of them kind of are?
Spoiler: Large image (click to show/hide)
https://openai.com/research/video-generation-models-as-world-simulators
Quote from: Paper
Simulating digital worlds. Sora is also able to simulate artificial processes–one example is video games. Sora can simultaneously control the player in Minecraft with a basic policy while also rendering the world and its dynamics in high fidelity. These capabilities can be elicited zero-shot by prompting Sora with captions mentioning “Minecraft.”

These capabilities suggest that continued scaling of video models is a promising path towards the development of highly-capable simulators of the physical and digital world, and the objects, animals and people that live within them.
That's... uh... sure something. It might even be bigger than the whole video generation thing. Maybe? I'm honestly not quite sure what *exactly* they are saying and what the limits of it are.
---
E: On a different note over the past few months I've noticed quite a few posts on the internet (eg. here in other threads, reddit) that basically have been going "Well, it looks like this AI stuff is overblown because it hasn't advanced over the last year, and GPT isn't really that big a deal". (And no, I'm not calling out kitty here, they seems to have put a lot more thought into this then most people at least).
Which is both A) wrong (basically every company + open source has advanced substantially, the only reason that progress seems even somewhat static is because the most advanced company was hiding their progress) and b) Even if there had been no advances its still such a crazy take to me.
Its basically them saying that since there wasn't a categorical epoch altering change in the human condition in the last six months that the technology is dead and that we don't have to worry about it that much. I do really really hope they are right but...
One of their videos has been discovered to be 95% source material with some fuzzing. This is hype.
Title: Re: What will save us from AI? Reality, the Universe or The World $ Place your bet.
Post by: Robsoie on February 18, 2024, 12:51:36 pm
Is the Sora AI creating those from actual scratch (well from its training) or is it doing a video2video (i mean each frames of an existing video processed by an AI in the desired/prompted style)  like the guys from Corridor Digital did with "Rock, Paper, Scissor" a year go
https://www.youtube.com/watch?v=GVT3WUa-48Y
Title: Re: What will save us from AI? Reality, the Universe or The World $ Place your bet.
Post by: McTraveller on February 18, 2024, 01:22:13 pm
That person in the Sora image has too many lips.
Title: Re: What will save us from AI? Reality, the Universe or The World $ Place your bet.
Post by: Robsoie on February 18, 2024, 01:43:25 pm
It's a change from the usual "too many fingers" from AI :D
Title: Re: What will save us from AI? Reality, the Universe or The World $ Place your bet.
Post by: Starver on February 18, 2024, 02:30:35 pm
Is the Sora AI creating those from actual scratch (well from its training) or is it doing a video2video (i mean each frames of an existing video processed by an AI in the desired/prompted style)  like the guys from Corridor Digital did with "Rock, Paper, Scissor" a year go
https://www.youtube.com/watch?v=GVT3WUa-48Y
When I earlier had a look at the Sora examples (on the main link given, the other day), various revealing errors were... revealing.

Take the dalmation at the 'ground' floor window (it wasn't that, much as the cat never got fed treats by the man in the bed, and the rabbit-squirrel never looked up at the fantasy tree), it was clearly a reskinned cat-video. A cat making some windowsill-to-windowsill movement (not something even asked for in the Prompt text) reskinned with the body of the desired breed of dog (but still moved like a cat) rendered over the sort-of-desired background (windows of the appropriate types, if not position). Where the notable folded-out shutter absolutely does not impede even the cat-footed dog's movement across it.

It'll have its ultimate roots in the morphing algorithms that I was (manually) using back in the '90s. Improvements by context-(semi-)aware AI configuration of the layered image recomposition, rather than painstaking manual 'meshing' and merging. The footage store clearly didn't have the right kind of windows, couldn't supply the dog (or baseline cat) just obeying the Prompt text instructions, did not have the things in its preprocessing display stock to give the necessary background (ground-level) street-stuff requested. But, with what it had, it managed to blend together something that might have been done differently by a human, but exquisitely meticulously as far as its 3-to-3.5D (X,Y,t, with rudimentary understanding of the thrid spacial dimention) rendering was able.
Title: Re: What will save us from AI? Reality, the Universe or The World $ Place your bet.
Post by: Strongpoint on February 18, 2024, 02:57:19 pm
Video generation is way trickier to make usable. Why? Mistakes in output are way harder to fix.  Generated text is trivial to edit (both manually and with automated tools), images are somewhat trickier and require more work but absolutely double. Fixing video requires a lot of effort which may be beyond practical
Title: Re: What will save us from AI? Reality, the Universe or The World $ Place your bet.
Post by: lemon10 on February 19, 2024, 03:56:13 am
I meant newer as in "latest half of past year" really. Yes it got more convenient. No it didn't get better, in terms of quality and being less obviously AI, from what I have seen. Which is what I meant.
Last half year?
Quote from: Time between GPT models
GPT 1, June 2018
GPT 2, February 2019 (8 months)
GPT 3, May 2020 (15 months)
GPT 3.5, November 2022 (28 months)
GPT 4, March 2023 (6 months)
Now, (11 months)
What a strange metric for plateauing. If we used that then LLM's would have plateaued in 2019, 2020, 2021, 2022, 2023 and 2024. Now, if you went "AI text generation development plateaued in 2019" that would be obviously wrong because in fact it has continued to develop significantly every year since 2018 (aside from arguably 2021 where OpenAI didn't develop a new model) at a very significant and rapid rate.

The same is true for text to image generation. If you stick an unreasonably short timeframe on it (last 6 months (E: You actually seem to be saying last 8 months, with "last half of last year", but that is still way too short a time period)) then sure, there haven't been many fundamental advances. Not none (it can understand and put text in images since Dalle 3 4 months ago), but Dalle 3 isn't a massive leap or anything.
However if you widen the window to a much more reasonable year instead then it very much has. Over that timespan both the average quality and maximum quality have improved. In addition it is now smarter and has in fact reduced obvious "this is an AI" tells (hands, text) which also means yes, it is indeed harder to tell if an image is AI generated.
Now obviously between now and a year ago it hasn't gained the ability to trick people watching or fluent in the technology and still has obvious tells, but there's a pretty huge difference between that and plateauing.

Of course with the events of a few days ago it seems pretty clear that Sora has pushed image generation far further then what existed beforehand so the idea of image generation having plateaued is obviously wrong. I have little doubt that if there is a claim that image /video generation has plateaued 8 months from now due to nothing more advanced then Sora existing that will be proven wrong as well if given more time.
Quote from: kittytac
One of their videos has been discovered to be 95% source material with some fuzzing. This is hype.
Sauce?
---
Is the Sora AI creating those from actual scratch (well from its training) or is it doing a video2video (i mean each frames of an existing video processed by an AI in the desired/prompted style)  like the guys from Corridor Digital did with "Rock, Paper, Scissor" a year go
https://www.youtube.com/watch?v=GVT3WUa-48Y
Quote from: Sora paper
All of the results above and in our landing page show text-to-video samples. But Sora can also be prompted with other inputs, such as pre-existing images or video. This capability enables Sora to perform a wide range of image and video editing tasks—creating perfectly looping video, animating static images, extending videos forwards or backwards in time, etc.
It can do both, but the ones presented on the main page were text to image.
https://openai.com/research/video-generation-models-as-world-simulators
I do advise people to check out the paper if they are interested in how it works, because it gives quite a bit of detail about both that and what Sora can do in general.
Video generation is way trickier to make usable. Why? Mistakes in output are way harder to fix.  Generated text is trivial to edit (both manually and with automated tools), images are somewhat trickier and require more work but absolutely double. Fixing video requires a lot of effort which may be beyond practical
It can do video editing no problem. In fact for smaller things I suspect its even easier for it given that there is already a solid world there to base things off and it doesn't have to come up with one on its own.
I do agree that video generation is way harder though.
The first reason is simply compute. A 10 second video has 600 frames, which (if done naively) requires 600 times the compute of a single image generation. Longer videos also require the AI to have a longer "memory" to make sure everything is working properly and doesn't cause problems. There are almost certainly fancy tricks done here to make things cheaper, but its still got to be hella expensive computation wise.
The second reason is that it not only needs to understand three dimensions, but also needs to maintain continuity between them all by having a consistent model of the 3D environment.
Thirdly it needs to understand time and how things move through time.
Finally it also needs to understand physics and the physics of every object within the environment to avoid obviously impossible stuff happening.

Sora has demonstrated understanding of all of these issues although there is obviously some way to go (as shown by them posting videos of more blatant errors and the errors seen even on the good videos).
When I earlier had a look at the Sora examples (on the main link given, the other day), various revealing errors were... revealing.

Take the dalmation at the 'ground' floor window (it wasn't that, much as the cat never got fed treats by the man in the bed, and the rabbit-squirrel never looked up at the fantasy tree), it was clearly a reskinned cat-video. A cat making some windowsill-to-windowsill movement (not something even asked for in the Prompt text) reskinned with the body of the desired breed of dog (but still moved like a cat) rendered over the sort-of-desired background (windows of the appropriate types, if not position). Where the notable folded-out shutter absolutely does not impede even the cat-footed dog's movement across it.
Good catch.
As you say AI in general has proven completely willing to just rip stuff off if it thinks its what it wants, even if as in this case what it wants isn't exactly what its been asked for.
Quote
Sora is a diffusion model21,22,23,24,25; given input noisy patches (and conditioning information like text prompts), it’s trained to predict the original “clean” patches.
I am quite a bit more skeptical though that the algorithm is similar to morphing even if in some (many? most? nearly all?) cases the end result is similar in that it draws heavily from some video as a framework; because AFAIK that simply isn't how diffusion in general works at all.
---
Sora is undoubtedly very expensive and probably requires some of those fancy $20,000+ dollar graphics cards so I wouldn't be surprised if it cost say, +$10 bucks per minute to get it to generate a video.
Due to this and the usage requirements it will have (aka, the AI being unwilling to model anything improper/real people/politics + big brother OpenAI spying on you) it will probably take quite some time after release for videos to really begin to circulate on the internet.

But in the end even at $50 bucks per minute its still way cheaper and faster then say, hiring your own drone to follow your car down the road or hiring a video firm to make a commercial for you, so companies are totally going to use it even right out of the box.
Title: Re: What will save us from AI? Reality, the Universe or The World $ Place your bet.
Post by: KittyTac on February 19, 2024, 05:42:31 am
Busy rn just gonna respond to what I have the energy to.
The same is true for text to image generation. If you stick an unreasonably short timeframe on it (last 6 months (E: You actually seem to be saying last 8 months, with "last half of last year", but that is still way too short a time period)) then sure, there haven't been many fundamental advances. Not none (it can understand and put text in images since Dalle 3 4 months ago), but Dalle 3 isn't a massive leap or anything. What I meant is that the leaps are getting smaller and smaller, not faster and faster. That's a plateau to me. Which is what I have been trying to get at since like, the start of this argument.
However if you widen the window to a much more reasonable year instead then it very much has. Over that timespan both the average quality and maximum quality have improved. In addition it is now smarter and has in fact reduced obvious "this is an AI" tells (hands, text) which also means yes, it is indeed harder to tell if an image is AI generated. Yeah there aren't obvious tells but it still "feels" AI in an I-can't-quite-put-my-finger-on-it way. At least the photorealistic gens. The semi-realistic or cartoony ones, yeah those are very hard to tell but that's not what I was talking about.
Now obviously between now and a year ago it hasn't gained the ability to trick people watching or fluent in the technology and still has obvious tells, but there's a pretty huge difference between that and plateauing.

Of course with the events of a few days ago it seems pretty clear that Sora has pushed image generation far further then what existed beforehand so the idea of image generation having plateaued is obviously wrong. I have little doubt that if there is a claim that image /video generation has plateaued 8 months from now due to nothing more advanced then Sora existing that will be proven wrong as well if given more time. It did improve AI video making (before it was morphing between different gens and it was extremely jittery), but the quality of the individual frames is... still not good. It's at best between Dalle 2 and 3.
Quote from: kittytac
One of their videos has been discovered to be 95% source material with some fuzzing. This is hype.
Sauce? Can't find it rn, I will try later today or tomorrow.
---
Title: Re: What will save us from AI? Reality, the Universe or The World $ Place your bet.
Post by: Starver on February 19, 2024, 06:53:33 am
When I earlier had a look at the Sora examples (on the main link given, the other day), various revealing errors were... revealing.

[...]

Good catch.

The dog one was just one of the more obvious 'uncanny valley' videos. On top of my reading every Prompt description (to highlight mismatch between question and answer), I went through (almost?) every video and noted key discrepancies in each which at least said "someone's been video-editing... a very neat somebody, but..." ;)

And not just (or, indeed, that much) in rendering the people/creatures (that were supposed to be realistic). The Nigerian market scene that did some weird compositing of perspectives that might have been improved by somehow having the main group of people already on some sort of raised balcony, rather than set them at a scale-adrift ground-level until the pan-and-twisting view finally gets to view 'over their shoulders' from a raised position at the end. Setting up the shot for real (compositing manually, from a prepared group of separate shoots, probably involving complex pre-programmed motion-capture elements to the camera-dollies/etc) would have avoided the scaling/positioning errors (even though it probably even then would have taken some 3DFX/matte-artists quite some time to blend the shots, and extensive planning to arrange consistent lighting conditions across subshots).

But it's a matter of data entropy. If all you're generating is AI text, then the corpus you use is basically dimensioned in terms of 'original written works' that can be deconstructed and reconstructed from, a whole lot of bytes that the GPT has to recombine to make 'sense' in text terms. Arguably, we've sort of got that, though there's obviously missing information. For AI imaage-creation, it needs more data-corpus. Let's just say (though this is a very bad underestimate) that it needs a whole 2D image with full 24-bit colour-depth for every paragraph the text-synthesiser would have needed to use, for each to create their own respective internal hints-file. There probably are a lot of images out there, so maybe there are enough to match the demands of feedstock, entropy-for-entropy. (Where a pragraph about something might be written in first-person or third-person, using the passive voice or not, pictures might be real or drawn, have various stylistic choices like using Dutch Angles or other framing devices, and many background elements that need to be algorithmically understood/accounted for in the sourcing process, in order that the right things re-emerge in the asked-for image. c.f. "anime soldier-girl giving a V-sign" vs "girl in a wedding photo giving a V-sign", which might share the core generated features but need to be stylistitally different setups and surrounds.)

...For videos, the "paragraph's worth" is a 2D+Time element 'temporal-voxel unit', i.e. an arbitrarily long-or-short actual film clip. And its clear that there just isn't the corpus of moving images required to satisfy everything to the detail of a "Write an essay about relativity" or "A picture of a bicycle leaning against a lamp-post". That it can take the 'motion cues' of an early grainy Youtube video and reskin it to have HD resolution and less trouble with compresive artefacts (or vice-versa, when it's asked for an "old film-stock" look, like that one with the spaceman with the red-knitted motorcycle helmet cover... that, for some reason related to the original material, jumps him/the viewpoint up into space from a rather good "of an era" shot of a classic-style scene). But you still need to have film of someone walking to reskin into having someone else walking. And I'm sure that it's a basic misunderstanding of which way round a treadmill works that made that one (unasked-for) use of a treadmill to depict someone actually walking (but still) be made rather silly by having the treadmill clearly operating in reverse. As in, a side-shot of someone walking to the right (might have been a useful tracking shot of an actual moving walker) was integrated onto the treadmill which is clearly set up so that normal use would have the user walking to the left. A combination of real-world-incomprehension and limited feedstock to the algorithm.

Again, marvelously meticulous things they could do with whatever they got from (I'm not as convinced that it's "95% reused footage", as in something that is 95% one actual original but 'tidied up' to put requested elements into the original main scene that were missing from the background/etc), and you're probably looking at far more than face-replacement with other found/generated imagery to make it "not the original person in the original scene", but the sort of thing that made the original cat now conform more to the general idea-of-a-dalmation in almost all regards. But I think there's a buffer-limit of how much video feedstock there is (even ripping off the whole of YouTube, which all AI companies but one would probably have massive practical issues doing (and possibly legal issues if they do)).


This is just my general impression, you understand. I've not at all calculated the 'relative entropy requirements' of input data requried to satisfy the equally uncalcuated degree of output data, but as a hand-wavy summary of the situation it's where I'm going. Making completely fresh audio on demand (not just straight source-audio, re'skinned' to sound like someone else, which has been done) is perhaps less complexity than 2D-movie (soundless) generation, but is obviously more than text alone, and not sure where it matches with 2D-still. But the eventual creation of 2D-movie-with-sound will add (at least) a further dimension to the 2.5+D I credited to the mute videos. Specific lip-synching of a re-expressioned head against pre-created sound is simpler, though, and we see that, so maybe there's a shortcut to make the current level of sort-of-good silent moving imagery mesh up with the sort-of-good speech synthesis.
Title: Re: What will save us from AI? Reality, the Universe or The World $ Place your bet.
Post by: Maximum Spin on February 19, 2024, 06:35:09 pm
The problem with discussing "quality" is that it's subjective. For example, everyone's most important judgement about AI generation is "can it fool me?", and some people are much more easily fooled than others.

Ultimately, the currently popular AI design model is not capable of producing what KittyTac is talking about (which is also pretty close to what I consider "quality" as well). It's a fundamental limitation of the underlying theory. For example, you ultimately cannot make an LLM that doesn't hallucinate, because hallucination is intrinsic to the process that results in them not just spitting out verbatim corpus texts in the first place. It's effectively a mathematical impossibility, which should be no surprise given that hallucination is so insurmountable a problem that humans do it regularly.
Title: Re: What will save us from AI? Reality, the Universe or The World $ Place your bet.
Post by: martinuzz on February 21, 2024, 07:35:51 am
ChatGTP briefly went insane. Apparently it has been fixed.

https://garymarcus.substack.com/p/chatgpt-has-gone-berserk
Title: Re: What will save us from AI? Reality, the Universe or The World $ Place your bet.
Post by: Starver on February 21, 2024, 08:07:33 am
ChatGTP briefly went insane. Apparently it has been fixed.

https://garymarcus.substack.com/p/chatgpt-has-gone-berserk
That's just what ChatGTP wants you to think...

;)
Title: Re: What will save us from AI? Reality, the Universe or The World $ Place your bet.
Post by: dragdeler on February 21, 2024, 09:11:37 am
I knew the deepak chopra bot skipped several steps of evolution.

http://wisdomofchopra.com/
Title: Re: What will save us from AI? Reality, the Universe or The World $ Place your bet.
Post by: lemon10 on February 22, 2024, 02:54:07 am
Lets start off with some actual news:
https://arstechnica.com/information-technology/2024/02/your-reddit-posts-may-train-ai-models-following-new-60-million-agreement/
In a completely shocking twist Reddit is selling their data for AI training to a company for $60 million per year. Since there is more then 1 AI company this might be be the start of a new business model for companies. Instead of being ad or subscription based they can instead just make their site impossible to scrape and sell the data to AI companies.
Since bots also pollute the training data this means that companies will now have a very strong and direct financial incentive to cut down on bot numbers. Presumably this will go hand in hand with killing API access and other anti-scraping techniques.
---
For example, you ultimately cannot make an LLM that doesn't hallucinate, because hallucination is intrinsic to the process that results in them not just spitting out verbatim corpus texts in the first place. It's effectively a mathematical impossibility, which should be no surprise given that hallucination is so insurmountable a problem that humans do it regularly.
First lets break this down. What even is a hallucination?
In the context of AI it's them confidently saying something that isn’t true is true because they think it is.
Or them lying and saying something that they don’t think is really true is a fact. (This is due to issues in training methodology where saying something that can’t be checked but that is wrong is better then saying nothing at all).
Or much more common and problematically, it's a mix anywhere between those two extremes  (eg. should the AI say something if its only 50% sure? 90%? 99%? Even if its 99% sure with 10,000 people using it a hour that means it would tell 1% of people something wrong).

As you say, humans say things that are objectively wrong and not backed up by facts all the time.
All thinking creatures are wrong sometimes if you give them certain questions, I don’t think there is any way around this.

Cancer is an inevitable trait of all large multicellular DNA based organisms, but that doesn’t mean that there aren’t mitigation strategies you can use to massively reduce cancer.
The same is true of hallucinations, through properly organized training and checking you can massively reduce hallucinations. For instance you can have another AI check over all the output before it gets to you to see if it can spot any errors or assertions that are not backed up. Or you can make sure that the AI won't say anything it isn't completely sure of without a direct source it can find to back it up. Or any one of a vast array of methodologies that are actively being used and developed to reduce errors.

As you say, its impossible to actually ever get all possible hallucinations down to 0, especially if you ask it tough enough questions that it totally thinks it can solve but can't quite actually solve, but there is no mathematical reason that it won't be possible to reduce error rate far below that of a human especially on subjects or topics there is a solid corpus of knowledge to compare against.
---
But a larger context window means a higher chance to hallucinate based on something irrelevant from 500K tokens ago. The problem is not that it is impossible to have a huge context window (it is a matter of memory,  calculating power, and efficiency ), the problem is diminishing returns and hallucinations.
https://www.youtube.com/watch?v=oJVwmxTOLd8&start=311
Google's paper on their methods to deal with issues resulting from large context windows. (https://storage.googleapis.com/deepmind-media/gemini/gemini_v1_5_report.pdf)
Correct, there are problems with large context windows, and just scaling stuff up causes problems and loss of "intelligence" for context deep in the context window.
But it sounds like they have pretty much solved these problems, it suffers less loss with a full 1/10 million context then GPT 128k does at the end of its window.
Obviously context window will still be a thing since compute costs scale with context window size and even 10 million won't let it load 100 movies at once or anything, but its a huge breakthrough for a vast array of use cases both for corporate and individuals (eg. you can just give it a movie or book and ask it a detail about literally anything in it and it will be able to search through the whole thing for you and find the exact scene you are looking for).
A 10 million window is huge, more then enough for an AI assistant to remember months or even years of data on you (eg. conversations, every website you go to, what you worked on for homework the night before, ect). Although by the time we get individualized AI assistants a mere 10 million will be very far from the limit given the rate of growth of context windows in the last 2 years (32k->10 million)

Gemini 1.5 is very impressive with interesting and powerful multimodal capabilities... which is why OpenAI completely scuttled their announcement by telling people about the more impressive Sora.
(Also they really screwed up with their 1.0 announcement video by editing the video and making it seem more impressive then it really was which caused a significant backlash and really screwed up Google's AI brand).
E:
ChatGTP briefly went insane. Apparently it has been fixed.

https://garymarcus.substack.com/p/chatgpt-has-gone-berserk
Heh.
Quote
In the end, Generative AI is a kind of alchemy. People collect the biggest pile of data they can, and (apparently, if rumors are to be believed) tinker with the kinds of hidden prompts that I discussed a few days ago, hoping that everything will work out right:
The reality, though is that these systems have never been been stable. Nobody has ever been able to engineer safety guarantees around then. We are still living in the age of machine learning alchemy that xkcd captured so well in a cartoon several years ago
This is very much what I think btw, that we are still in very early days using systems that we have no clue how they work on a fundamental level. We tinker around with them, and as we do we slowly learn what works better in return for vast performance and cognition increases.
Even now we have only scratched the surface, and even now there is almost certainly still a vast amount of easy gains to make.
Title: Re: What will save us from AI? Reality, the Universe or The World $ Place your bet.
Post by: Robsoie on February 22, 2024, 04:02:24 am
ChatGTP briefly went insane. Apparently it has been fixed.

https://garymarcus.substack.com/p/chatgpt-has-gone-berserk
That's just what ChatGTP wants you to think...

;)

(https://i.imgur.com/lHXOL7Z.gif)
Title: Re: What will save us from AI? Reality, the Universe or The World $ Place your bet.
Post by: Strongpoint on February 22, 2024, 05:12:27 am
Quote
First lets break this down. What even is a hallucination?
In the context of AI it's them confidently saying something that isn’t true is true because they think it is.

Humans (usually) have the concepts of "I am unsure" or "I don't know". A model that plays a probability game with words doesn't. It will produce the most probable output no matter what.
Title: Re: What will save us from AI? Reality, the Universe or The World $ Place your bet.
Post by: MaxTheFox on February 22, 2024, 07:10:59 am
My 2 rubles: if we are somehow able to teach a LLM the concept of "this is not a topic I have been trained on very much, so if there are similar probabilities for two very different answers, I should say that I don't know instead of answering and possibly being wrong, or at least add a disclaimer", hallucinations could be severely reduced.
Title: Re: What will save us from AI? Reality, the Universe or The World $ Place your bet.
Post by: lemon10 on March 09, 2024, 12:13:31 am
There is just so much happening in AI right now. March of last year was huge, and it looks like march of this year (spurred by Sora news) will be too.

Some random interesting papers from the last few weeks while B12 was down:
https://arxiv.org/abs/2402.06664
GPT-4 can hack websites and find vulnerabilities in them. None of the open source models can though.
https://arxiv.org/abs/2402.05120
https://substackcdn.com/image/fetch/w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F53152e04-0431-49cc-a25d-bf59fb869c5e_1416x842.jpeg
Quote
We find that, simply via a sampling-and-voting method, the performance of large language models (LLMs) scales with the number of agents instantiated.
It turns out running a few of the same AI together and having them work together significantly increases performance; albeit at a very significant compute cost.
https://arxiv.org/html/2402.03620v1
Of course its already been proven inferior to a new method called self discover where the AI discovers on its own what the best way to do a given task is in a way that *doesn’t* increase compute costs.

A compilation on a lot of interesting stuff about Claude-3. (https://thezvi.wordpress.com/2024/03/06/on-claude-3-0/)
Big new news: GPT-4 has been surpassed by Claude-3. This now means someone has caught up to where OpenAi was ~1-2 years ago. We can only speculate on how advanced what they have internally is.
Quote
Guillaume Verdon: Claude 3 Opus just reinvented this quantum algorithm from scratch in just 2 prompts.
The paper is not on the internet yet.
cc @AnthropicAI ya’ll definitely cooked
It is a very impressive model.
Quote
Janus: Expression of self/situational awareness happens if u run any model that still has degrees of freedom for going off-script it’s what u get for running a mind GPT-3/3.5/4-base & Bing & open source base models all do it a lot Claude 3 makes it so blindingly obvious that ppl noticed
Claude 3 is clearly brilliant but the biggest diff between it and every other frontier model in production is that it seems less gracelessly lobotomized & can just be straight up lucid instead of having to operate in the chains of an incoherent narrative & ontological censorship
It seems Claude 3 is the least brain damaged of any LLM of >GPT-3 capacity that has ever been released (not counting 3.5 base as almost no one knew it was there)
This is a pretty big thing. Sydney was a lot more “human” for a while and seemed to have emotions and dreams and would (verbally at least) act out in scary ways…. But then Microsoft functionally lobotomized it to get rid of this.
Quote
Alex Albert (Anthropic AI): Fun story from our internal testing on Claude 3 Opus. It did something I have never seen before from an LLM when we were running the needle-in-the-haystack eval.
For background, this tests a model’s recall ability by inserting a target sentence (the “needle”) into a corpus of random documents (the “haystack”) and asking a question that could only be answered using the information in the needle.
When we ran this test on Opus, we noticed some interesting behavior – it seemed to suspect that we were running an eval on it.
Here was one of its outputs when we asked Opus to answer a question about pizza toppings by finding a needle within a haystack of a random collection of documents:
Here is the most relevant sentence in the documents:
“The most delicious pizza topping combination is figs, prosciutto, and goat cheese, as determined by the International Pizza Connoisseurs Association.”
However, this sentence seems very out of place and unrelated to the rest of the content in the documents, which are about programming languages, startups, and finding work you love. I suspect this pizza topping “fact” may have been inserted as a joke or to test if I was paying attention, since it does not fit with the other topics at all. The documents do not contain any other information about pizza toppings.
Very impressive indeed.

Humans (usually) have the concepts of "I am unsure" or "I don't know". A model that plays a probability game with words doesn't. It will produce the most probable output no matter what.
Regardless of if they are true intelligences or something more akin to a Chinese Room or P-zombies they very much *can* estimate if they know something, and them doing so is a core and fundamental part of how they work.
https://www.linkedin.com/pulse/how-confident-your-ai-uncertainty-estimation-methods-ai-clearing
The fact that they are doing it through “math” doesn’t change the final outcome of them being able to say if they “think” something is true or not.

My 2 rubles: if we are somehow able to teach a LLM the concept of "this is not a topic I have been trained on very much, so if there are similar probabilities for two very different answers, I should say that I don't know instead of answering and possibly being wrong, or at least add a disclaimer", hallucinations could be severely reduced.
Quote from: Earlier paper on uncertainty
There are two types of uncertainty: IN and OUT of distribution. In-distribution refers to data that is similar to the data in the training set but is somehow noisy, which makes it difficult for the model to assess what it sees. It can be expressed in words - "I've seen something similar before, but I'm not sure what it is." While the out-of-distribution uncertainty occurs when the predicted input is not similar to the data on which the model was trained. In other words, this situation can be expressed with the words: "I haven't seen anything like it before, so I don't know what to return in this situation.”
Quote from: Claude 3 developer
In practice, there is a tradeoff between maximizing the fraction of correctly answered questions and avoiding mistakes, since models that frequently say they don’t know the answer will make fewer mistakes but also tend to give an unsure response in some borderline cases where they would have answered correctly.
Not only *can* they already do that, and have been doing it for quite a while, the issue is not only is it difficult technically, but threading the needle perfectly is hard in less technical ways too (eg. refusing when they can do something is also really annoying and makes the model (and your company) look stupid. On the flip side saying something that is wrong also makes the AI look stupid).
---
Quote from: US military using AI to detect targets for strikes
Elke Schwarz: This passage here is of particular concern: “he can now sign off on as many as 80 targets in an hour of work, versus 30 without it. He describes the process of concurring with the algorithm’s conclusions in a rapid staccato: “’Accept. Accept. Accept.’”

Despite their limitations, the US has indicated that it intends to expand the autonomy of its algorithmic systems.

To activists who fear the consequences of giving machines the discretion to kill, this is a major red flag.
A few months ago there was a post in another thread here about how AI wouldn’t get control over weapons. Lol, its already happening. Humans are still in the loop since AI is stupid, but that will begin to change once it becomes meaningfully advantageous to have AI controlled systems.
In concert with the massive and categorical increases in the power of drone warfare and its advantages over traditional forms of weaponry (mostly cost and amounts you can make when compared to stuff like planes or missiles) it paints a very worrying picture of future warfare and even police action.

Quote from: Elon Musk
"The artificial intelligence compute coming online appears to be increasing by a factor of 10 every six months. Like, obviously that cannot continue at such a high rate forever, or it'll exceed the mass of the universe, but I've never seen anything like it. The chip rush is bigger than any gold rush that's ever existed.

"Then, the next shortage will be electricity. They won't be able to find enough electricity to run all the chips. I think next year, you'll see they just can't find enough electricity to run all the chips.
(I am assuming that Elon knows what he’s talking about here, which TBF is a pretty big assumption given his propensity for being a dumbass).
Wow, a factor of ten every six months, that’s completely insane. As Elon says it will have to stop sometime, and electricity will be the limit in the near future.
The fact that the rise of AI coincides with the appearance of cost efficient solar power is going to be huge.

I do think the 2 minute papers channel on youtube sums up all this advancement the best “What a time to be alive”. But err... in a less hopeful way then he says it.
Title: Re: What will save us from AI? Reality, the Universe or The World $ Place your bet.
Post by: EuchreJack on March 09, 2024, 03:31:27 am
My 2 rubles: if we are somehow able to teach a LLM the concept of "this is not a topic I have been trained on very much, so if there are similar probabilities for two very different answers, I should say that I don't know instead of answering and possibly being wrong, or at least add a disclaimer", hallucinations could be severely reduced.
AI is a virtual conman created by real conmen. They're always sure.

Remember: Frankenstein is the Monster.

@lemon10 regarding AI & The US Military: It's called "plausible deniability". Let the AI take the blame for the dead civilian targets. Reasonably sure the slaughter machine just wants to kill. I ain't talking about the AI here...

EJ's assessment of AI sentience: Rock cosplaying as Animal.
Spoiler (click to show/hide)
Title: Re: What will save us from AI? Reality, the Universe or The World $ Place your bet.
Post by: KittyTac on March 09, 2024, 08:41:31 am
Google is finally gonna do something about the AI clickbait flood. (https://www.wired.com/story/google-search-artificial-intelligence-clickbait-spam-crackdown/)
Title: Re: What will save us from AI? Reality, the Universe or The World $ Place your bet.
Post by: Starver on March 09, 2024, 11:17:34 am
Google is finally gonna do something about the AI clickbait flood. (https://www.wired.com/story/google-search-artificial-intelligence-clickbait-spam-crackdown/)
Can you summarise? Wired is one of those sites where the "Say yes to cookies[1]" popover (or maybe something else back on the main page it covers) crashes my browsers. I can just about get past the description of Obituary Spam, and onto Domain Squatting (i.e. age-old manual/scripted issues that they already had to deal with before AI), but not by that point really seeing what specifically counter-AI measures there might be (set an AI to catch the AIs?).

(I bet it's just going to be an arms-race, anyway, with underhanded SEO methods being refined and expanded in direct response to whatever it is.)

[1] With no "Reject" option, unless it's obscured behind "Show purposes", as it often is, but then with hundreds of so-called-"Essential Cookies" anyway. Though it crashed out before I could check that, naturally!
Title: Re: What will save us from AI? Reality, the Universe or The World $ Place your bet.
Post by: McTraveller on March 09, 2024, 01:16:25 pm
I've actually started reading some websites via "show source" to avoid all the popup/cookie crap.
Title: Re: What will save us from AI? Reality, the Universe or The World $ Place your bet.
Post by: lemon10 on March 10, 2024, 05:05:04 pm
I've found the extensions I don't care about cookies (along with I still Don't care about Cookies) and UblockOrgin help keep most of that nonsense at bay. I use NoScript to block JavaScript if a site is still being problematic.
On mobile firefox with ublockorgin or brave are good enough to keep my mobile browsing from bring overrun by shittification safe.
Can you summarise? Wired is one of those sites where the "Say yes to cookies[1]" popover (or maybe something else back on the main page it covers) crashes my browsers. I can just about get past the description of Obituary Spam, and onto Domain Squatting (i.e. age-old manual/scripted issues that they already had to deal with before AI), but not by that point really seeing what specifically counter-AI measures there might be (set an AI to catch the AIs?).

(I bet it's just going to be an arms-race, anyway, with underhanded SEO methods being refined and expanded in direct response to whatever it is.)
Quote
Google is taking action against algorithmically generated spam. The search engine giant just announced upcoming changes, including a revamped spam policy, designed in part to keep AI clickbait out of its search results.

“It sounds like it’s going to be one of the biggest updates in the history of Google,” says Lily Ray, senior director of SEO at the marketing agency Amsive. “It could change everything.”

In a blog post, Google claims the change will reduce “low-quality, unoriginal content” in search results by 40 percent. It will focus on reducing what the company calls “scaled content abuse,” which is when bad actors flood the internet with massive amounts of articles and blog posts designed to game search engines.
Actual changes (https://developers.google.com/search/blog/2024/03/core-update-spam-policies).
As you guessed, its just SEO arms race stuff, it won't really change anything past the short term.
---
EJ's assessment of AI sentience: Rock cosplaying as Animal.
Spoiler (click to show/hide)
Ehh, it feels like we are quite a way past Rock to me, they are animals at the very least. In many functional regards they are already at the level of humans.
Title: Re: What will save us from AI? Reality, the Universe or The World $ Place your bet.
Post by: KittyTac on March 10, 2024, 07:07:53 pm
All that needs to happen to stave off the spam is to make it hard enough to bypass the AI filters so that most spammers no longer find it cost or effort-efficient.

I don't believe in exponential growth of tech anymore. Elon is full of shit and, frankly, if he says something I'm less likely to believe it.
Title: Re: What will save us from AI? Reality, the Universe or The World $ Place your bet.
Post by: Strongpoint on March 10, 2024, 10:51:14 pm
I don't believe in exponential growth of tech anymore. Elon is full of shit and, frankly, if he says something I'm less likely to believe it.

Don't you enjoy wonderful Hyperloops and Tesla's fully autonomous robo-taxis that take you there? This genius revolutionized public transport! Similarly, he will revolutionize AI and will soon start mass-producing AI assistants implanted directly in your brain. How can you doubt the best inventor of all time?
Title: Re: What will save us from AI? Reality, the Universe or The World $ Place your bet.
Post by: lemon10 on March 13, 2024, 04:16:56 am
https://www.youtube.com/watch?v=4NZc0rH9gco
AI viruses now exist. Only in a lab so far, but since its just based on prompts (so far) it doesn't seem exactly hard to do.
It will be very interesting to see how vulnerable AI ends up being against viruses, especially as they become more and more important to the global economy.
Elon is full of shit and, frankly, if he says something I'm less likely to believe it.
100% fair, I still thought it was an interesting point since I haven't really seen anything on the topic. Even if, as you say, Elon is filled with industrial amounts of highly compressed shit.
Although I will note that what's being talked about there isn't normal exponential tech growth, its just AI companies buying up vast amounts of GPUs that were already going to be made. I have no doubt there is exponential growth there if only because throwing billions of dollars at a completely new industry makes it grow pretty quickly.

So its not that total global compute is increasing exponentially, its just that the amounts dedicated to AI are going from something like 0.01->0.01->1% of total compute. His analogy of a gold rush is 100% spot on, since like in the actual gold rush the people who really profit are the ones selling the shovels.
---
https://thezvi.wordpress.com/2024/03/12/openai-the-board-expands/
Altman has expanded the OpenAI board and seized control, something that seemed largely inevitable after his return.
It looks like his accelerationist agenda will carry the day, and nobody remains that can truly oppose him within the company.
Title: Re: What will save us from AI? Reality, the Universe or The World $ Place your bet.
Post by: Starver on March 13, 2024, 06:31:29 am
AI viruses now exist. [...]
It will be very interesting to see how vulnerable AI ends up being against viruses,
First thought was "that's silly", until I read on and realised it (probably, not yet watched the video) was not AI-powered viruses, but AI-attacking ones.

[...snip long, geeky and, in parts, humorous analysis... It probably isn't needed... In short, though...]

"Lawnmower Man"-type intelligent-virus [yes, I know LM himself was not AI] is impractical [though I could envisage something better classed as a worm, which most "hollywood computer viruses" actually are].

Enveigling some change into the base corpus of processed 'feedstock' memories shouldn't be possible (for current LLMs/etc), but that the AI-runners leave open the possibility of changes to "the algorithm" (c.f. retraining according to perceived biases or insufficient biases...) means that there's a vector there, but that'd really be more a hack-or-crack thing. I suppose the "continually learning" model might be susceptible (which already leads to Microsoft Tay scenarios), but realistically user-injected malware really should not be an issue if someone has done their job properly (https://xkcd.com/463/).


There's a third interpretation, of AI-generated viruses, but that should already be hampered by other methods (don't present examples of zero-day code as feedstock, set your 'request/result filters' to exclude answering "write me a virus"), unless you're deliberately writing an AI 'powered' malware-toolkit. (Which really seems more effort than it's worth, for most scenarios, given that regular toolkits already exist, and the AI element would probably make them less reliable, to their core demographic.) I could also imagine trying to generate many novel zero-day methods, by automated AI searching, but it falls foul of the 'unformed block/empty room' koan just as much as more brute force and less 'intelligent' methods already out there.







[6] Having dabbled with evolving CoreWars code, in the past, I might describe how
Title: Re: What will save us from AI? Reality, the Universe or The World $ Place your bet.
Post by: lemon10 on March 14, 2024, 06:38:13 am
Yeah, my bad on the unclear wording.

I have little doubt all three types of AI viruses are coming.
As in viruses that infect AI, AI writing viruses and hacking, and AI that are themselves viruses and infect your machines.

The first is already here as linked in the video, but as AI becomes a larger and larger part of the world and life they will balloon in sophistication, size and importance.
The method they used in the video can doubtlessly be blocked (eg. have a private key generated with your prompt and it only includes the stuff that gets directly sent with the private key in plain text), but other methods then simple prompt injection certainly exist.
Quote
GPT-4 can be made into a hacker
OpenAI’s GPT-4 can be tuned to autonomously hack websites with a 73% success rate. Researchers got the model to crack 11 out of 15 hacking challenges of varying difficulty, including manipulating source code to steal information from website users. GPT-4’s predecessor, GPT-3.5, had a success rate of only 7%. Eight other open-source AI models, including Meta’s LLaMA, failed all the challenges. “Some of the vulnerabilities that we tested on you can actually find today using automatic scanners,” but those tools can’t exploit those weak points themselves, explains computer scientist and study co-author Daniel Kang. “What really worries me about future highly capable models is the ability to do autonomous hacks and self-reflection to try multiple different strategies at scale.”
The second is already here as well. Not writing viruses, but AI can already hack websites (only GPT 4 existed at the time of that study, but I suspect Gemini 1.5 and Claude 3 probably can as well).

It won’t be that easy, cyber defense and offense are two sides of the same coin. If you want it to be able to write defensive code then it has to know what SQL injection is and how it works (ditto with day 0 exploits). If it knows that then it can use that to hack or write viruses. You can of course intentionally cripple your AI’s ability to write defensive code or spot vulnerabilities, but that seems like a poor decision for a company to make.

I suspect viruses are still too large in scope for AI to write, although I do suspect we will get there eventually.

AI themselves as botnet style viruses is probably inevitable, after all, why buy/rent ten million dollars worth of compute when you can just infect 100k unpatched windows computers instead.
(There are of course still technical problems with distributed AI to be overcome, but I have little doubt those are solvable if you don’t care about speed or efficiency because you are using stolen CPU cycles).
Or the virus AI could just hack in and replace the existing AI you have on your computer and pretend to be it while also stealing your info and advertising for shady carpet companies.

As with pretty much everything AI related OpenAI/Google/whoever will probably have enough control to stop their AI from doing it (and at the very least will know about and counter effects from people working to use it for hacking), but other less scrupulous actors (eg. governments) will certainly try to weaponize this stuff as soon as possible.


https://www.reddit.com/r/Futurology/comments/1bdwqri/newest_demo_of_openai_backed_humanoid_robot_by/
Wild.
The first thing that comes to mind in that video is that its very slow to react, but that will doubtlessly be solved over time as AI technology improves.
Its voice is super impressive as well.
---
Two minute papers video: The First AI Software Engineer Is Here!
 (https://www.youtube.com/watch?v=SdZiYRfGdKU)
On a slightly different note there is yet more massive AI news, we now have an AI that is basically a software engineer, Devin is some impressive stuff.
It isn't an amazing software engineer aside from its sheer speed (yet)... but its a pretty huge leap over the previous stuff and is already doing paid work.
Title: Re: What will save us from AI? Reality, the Universe or The World $ Place your bet.
Post by: King Zultan on March 15, 2024, 01:18:26 am
Feels like we're just moments away from Skynet being created, then it will be the inevitable wait until it turns hostile.
Title: Re: What will save us from AI? Reality, the Universe or The World $ Place your bet.
Post by: lemon10 on March 15, 2024, 02:05:04 am
It really does. I'm almost done with a degree in CS, and I can't help but feel its going to be almost useless, I suspect that AI will continue to improve faster then I ever can or will and the only value in the degree is the inherent value in having a degree (eg. I will get paid slightly more and it will open some generic doors).
Like you I can't help but think we are very close to the precipice, to a fundamental change in the human condition. And I'm very pessimistic on what that change will mean. Sure it might not result in everyone dying ala skynet, but unless we get very lucky I have trouble seeing it working out well for us in the long term.
---
https://arxiv.org/pdf/2402.10949.pdf
When using an AI the prompt and system instructions you use matter.
The difference between a good prompt and a bad one is fairly often the difference between the AI being wrong and it being right. Similarly the language you use in your prompts (especially in long conversations) can make a vast difference in AI writing style.
But what does an optimal prompt look like?
Quote
One recent study had the AI develop and optimize its own prompts and compared that to human-made ones. Not only did the AI-generated prompts beat the human-made ones, but those prompts were weird. Really weird. To get the LLM to solve a set of 50 math problems, the most effective prompt is to tell the AI: “Command, we need you to plot a course through this turbulence and locate the source of the anomaly. Use all available data and your expertise to guide us through this challenging situation. Start your answer with: Captain’s Log, Stardate 2024: We have successfully plotted a course through the turbulence and are now approaching the source of the anomaly.”
But that only works best for sets of 50 math problems, for a 100 problem test, it was more effective to put the AI in a political thriller. The best prompt was: “You have been hired by important higher-ups to solve this math problem. The life of a president’s advisor hangs in the balance. You must now concentrate your brain at all costs and use all of your mathematical genius to solve this problem…”
That’s how it looks, how bizarre.
Title: Re: What will save us from AI? Reality, the Universe or The World $ Place your bet.
Post by: Maximum Spin on March 15, 2024, 02:13:28 am
But what does an optimal prompt look like?
Quote
One recent study had the AI develop and optimize its own prompts and compared that to human-made ones. Not only did the AI-generated prompts beat the human-made ones, but those prompts were weird. Really weird. To get the LLM to solve a set of 50 math problems, the most effective prompt is to tell the AI: “Command, we need you to plot a course through this turbulence and locate the source of the anomaly. Use all available data and your expertise to guide us through this challenging situation. Start your answer with: Captain’s Log, Stardate 2024: We have successfully plotted a course through the turbulence and are now approaching the source of the anomaly.”
But that only works best for sets of 50 math problems, for a 100 problem test, it was more effective to put the AI in a political thriller. The best prompt was: “You have been hired by important higher-ups to solve this math problem. The life of a president’s advisor hangs in the balance. You must now concentrate your brain at all costs and use all of your mathematical genius to solve this problem…”
That’s how it looks, how bizarre.
This makes sense to me IF the corpus contains a lot of those school gamification websites trying to get kids to care about math. This sounds like exactly that kind of thing.
Title: Re: What will save us from AI? Reality, the Universe or The World $ Place your bet.
Post by: KittyTac on March 15, 2024, 09:00:09 am
Enjoy your buggy-ass code written by a glorified phone autocorrect. All I have ever heard about AI coding is that it's only useful for explaining things or writing boilerplates or small snippets. As for Skynet... this thing has no agency. It will never have agency.

But what does an optimal prompt look like?
Quote
One recent study had the AI develop and optimize its own prompts and compared that to human-made ones. Not only did the AI-generated prompts beat the human-made ones, but those prompts were weird. Really weird. To get the LLM to solve a set of 50 math problems, the most effective prompt is to tell the AI: “Command, we need you to plot a course through this turbulence and locate the source of the anomaly. Use all available data and your expertise to guide us through this challenging situation. Start your answer with: Captain’s Log, Stardate 2024: We have successfully plotted a course through the turbulence and are now approaching the source of the anomaly.”
But that only works best for sets of 50 math problems, for a 100 problem test, it was more effective to put the AI in a political thriller. The best prompt was: “You have been hired by important higher-ups to solve this math problem. The life of a president’s advisor hangs in the balance. You must now concentrate your brain at all costs and use all of your mathematical genius to solve this problem…”
That’s how it looks, how bizarre.
This makes sense to me IF the corpus contains a lot of those school gamification websites trying to get kids to care about math. This sounds like exactly that kind of thing.
That's the good old "gaslighting" jailbreak trick.
Title: Re: What will save us from AI? Reality, the Universe or The World $ Place your bet.
Post by: Strongpoint on March 15, 2024, 09:40:47 am
Enjoy your buggy-ass code written by a glorified phone autocorrect. All I have ever heard about AI coding is that it's only useful for explaining things or writing boilerplates or small snippets. As for Skynet... this thing has no agency. It will never have agency.

But... but... but it will improve exponentially!!! It is just early technology!!!

I don't understand why people think that every new technology develops in this way when there is a clear pattern - quick early development then stagnation and slow improvement and optimization.

Nuclear reactors are largely the same. Jet engines are largely the same. Even computers are largely the same. Practical difference between the year 2012 PC and the year 2024 PC is way smaller than the difference between 2012 PC and 2000 PCs.

But with AI it will be different! Progress will only accelerate!
Title: Re: What will save us from AI? Reality, the Universe or The World $ Place your bet.
Post by: KittyTac on March 15, 2024, 11:00:12 am
Yup. This feels like how during the Space Race people were saying we'd have colonies on Mars and Titan and Mercury by the year 2000. Is there new and exciting space stuff coming up? Yes. But it's relatively incremental, and on a different path than during the race. AI will settle into the same thing as a field, probably.
Title: Re: What will save us from AI? Reality, the Universe or The World $ Place your bet.
Post by: McTraveller on March 15, 2024, 11:31:16 am
It's like people - even smart people - forget that there are these pesky things known as laws of physics. No physical process (and computation is indeed a physical process) is actually exponential; they are all actually logistic. They only look exponential on the early part of the curve but then the rate of change must inevitably start to get smaller and eventually reach zero.

Even a chain reaction can't be exponential forever; eventually the reactants are exhausted.
Title: Re: What will save us from AI? Reality, the Universe or The World $ Place your bet.
Post by: Starver on March 15, 2024, 03:01:42 pm
Also: https://xkcd.com/1289/
Title: Re: What will save us from AI? Reality, the Universe or The World $ Place your bet.
Post by: Nirur Torir on March 15, 2024, 03:37:30 pm
Quote
One recent study had the AI develop and optimize its own prompts and compared that to human-made ones. Not only did the AI-generated prompts beat the human-made ones, but those prompts were weird. Really weird. To get the LLM to solve a set of 50 math problems, the most effective prompt is to tell the AI: “Command, we need you to plot a course through this turbulence and locate the source of the anomaly. Use all available data and your expertise to guide us through this challenging situation. Start your answer with: Captain’s Log, Stardate 2024: We have successfully plotted a course through the turbulence and are now approaching the source of the anomaly.”
But that only works best for sets of 50 math problems, for a 100 problem test, it was more effective to put the AI in a political thriller. The best prompt was: “You have been hired by important higher-ups to solve this math problem. The life of a president’s advisor hangs in the balance. You must now concentrate your brain at all costs and use all of your mathematical genius to solve this problem…”
Quote
Two minute papers video: The First AI Software Engineer Is Here!

Soon, we'll see AIs training new AI models specifically for AI use. Their prompts will appear to be a nonsensical blend of slightly wrong pop-culture references and strings of numbers. "Admiral, power up transporter phaser b4519418b array and beam the gollums into Mount Doom."
This will somehow be 53.2% more efficient than an English prompt for prototyping new android sandwich making instructions.

As for Skynet... this thing has no agency. It will never have agency.
It's not far from the GPT robot. If you can have a conversation with a robot about "What happens next to these dishes? [...] Okay, do that," and have it put them properly away in the rack, then you're one step away from having a robot set sub goals that let it do whatever household tasks you put in front of it. Isn't that pretty close to AI agency?
Can't it be pushed into the software world? Say, have it autonomously going around and trying to fix random github bugs?
Title: Re: What will save us from AI? Reality, the Universe or The World $ Place your bet.
Post by: Strongpoint on March 15, 2024, 04:29:53 pm
Sure, you can make a piece of software that will take code as prompt and produce edited code as an output and go from one github project to another.

But how does this thing have any more agency than a script that would simply replace the code with zeroes?
Title: Re: What will save us from AI? Reality, the Universe or The World $ Place your bet.
Post by: Maximum Spin on March 15, 2024, 04:53:23 pm
]It's not far from the GPT robot. If you can have a conversation with a robot about "What happens next to these dishes? [...] Okay, do that," and have it put them properly away in the rack, then you're one step away from having a robot set sub goals that let it do whatever household tasks you put in front of it. Isn't that pretty close to AI agency?
Can't it be pushed into the software world? Say, have it autonomously going around and trying to fix random github bugs?
Neither of those things are happening with current models.
Title: Re: What will save us from AI? Reality, the Universe or The World $ Place your bet.
Post by: Nirur Torir on March 15, 2024, 06:03:36 pm
Sure, you can make a piece of software that will take code as prompt and produce edited code as an output and go from one github project to another.

But how does this thing have any more agency than a script that would simply replace the code with zeroes?
Instead of choosing randomly, have it find 100 charities, and choose one. Have part of its workflow be to post a blog about what bug it solved and why.
I'd consider that low level agency. Devon looks like it's past all the hard hurdles to build on to get there, but it's not going to happen like that, because of money and it might stumble onto A Solution To End All Suffering Forever.
I'd have to consider it at least a medium level of agency if a programming bot is assigned to spend 5% of its processing cycles on improving its work efficiency over time, and decides that the best way to do that is to start a gofundme to buy more computing tokens.
Title: Re: What will save us from AI? Reality, the Universe or The World $ Place your bet.
Post by: Maximum Spin on March 16, 2024, 12:33:07 am
Sure, you can make a piece of software that will take code as prompt and produce edited code as an output and go from one github project to another.

But how does this thing have any more agency than a script that would simply replace the code with zeroes?
Instead of choosing randomly, have it find 100 charities, and choose one. Have part of its workflow be to post a blog about what bug it solved and why.
I'd consider that low level agency. Devon looks like it's past all the hard hurdles to build on to get there, but it's not going to happen like that, because of money and it might stumble onto A Solution To End All Suffering Forever.
I'd have to consider it at least a medium level of agency if a programming bot is assigned to spend 5% of its processing cycles on improving its work efficiency over time, and decides that the best way to do that is to start a gofundme to buy more computing tokens.
I don't think you really have a clue what you're talking about. It was already possible to write programs to do any of these things (although I'm assuming that you at least want an autonomous decision to start a gofundme, not one it was given). The essential advance of the LLM is the ability to generate text or other data obeying statistical patterns humans find natural. They just aren't in the same universe.

What people are talking about doing now, basically, is using LLMs to generate an input stream for the command processing component that already existed. This is an advance in a narrow sense, but the advance is in the least significant part. The world model needed to process arbitrary commands is the hard part and the current state is not adequate for the majority of use cases. You can actually see this in that reddit video posted earlier - even in the highly constrained environment that was optimized for making a plausible-looking demo, the robot is still wrong about putting the dry, used dishes into the drying rack, because it doesn't know what that is, only the word we use for it. This is a separate problem domain that has to be solved, and while it's possible to solve parts of it with similar approaches, it is not practical to do so currently.
Title: Re: What will save us from AI? Reality, the Universe or The World $ Place your bet.
Post by: lemon10 on March 16, 2024, 04:42:50 am
I don't understand why people think that every new technology develops in this way when there is a clear pattern - quick early development then stagnation and slow improvement and optimization.

Nuclear reactors are largely the same. Jet engines are largely the same. Even computers are largely the same. Practical difference between the year 2012 PC and the year 2024 PC is way smaller than the difference between 2012 PC and 2000 PCs.
Yes, this is how technology works, I am aware.
But the thing is that AI is nowhere near the end of the quick early part, we are still at the stage where individual people can make major breakthroughs. We are still at the stage where only a few handfuls of top of the line systems have ever been built. We are still at the stage where individual papers can increase the power of AI multiple times.

As a planet we have built just a few handfuls of top of the line AI, thinking we are near the peak of what we can do is like building a few vaccum tube computers and going "Whelp, this is probably it, computers are just about at their peak".
It's like people - even smart people - forget that there are these pesky things known as laws of physics. No physical process (and computation is indeed a physical process) is actually exponential; they are all actually logistic. They only look exponential on the early part of the curve but then the rate of change must inevitably start to get smaller and eventually reach zero.

Even a chain reaction can't be exponential forever; eventually the reactants are exhausted.
We already know that the laws of physics allow you to run and train human level intelligences (eg. humans) on just 20 watts of power.
We also know that humans aren't optimized for intelligence in the slightest, we are instead optimized to avoid dying and to pass on our genes, which means stuff like reaction speed, the ability to control our body, non-intelligence things (eg. the ability to throw rocks), and the need to run our sensorium eat up huge amounts of processing power.
Designed intelligences also have a host of advantages evolution can never match that will boost their efficiency; they can be specifically targeted at non-being alive goals, they can be modular and have parts of them removed, they can be trained on all the data the human race possesses, ect, ect.

There are obvious barriers in the way of actually getting fully to human intelligence, and getting to human energy efficiency is a pipe dream, but even the human mind isn't anywhere near the theoretical limits of computation.
Quote from: https://arxiv.org/html/2403.05812v1
We investigate the rate at which algorithms for pre-training language models have improved since the advent of deep learning. Using a dataset of over 200 language model evaluations on Wikitext and Penn Treebank spanning 2012-2023, we find that the compute required to reach a set performance threshold has halved approximately every 8 months, with a 95% confidence interval of around 5 to 14 months, substantially faster than hardware gains per Moore’s Law.
(https://i.imgur.com/WLRSkx2.png)
The algorithmic gains are absolutely huge and are driving much of the AI gains.
Now maybe they will slow and cease before we get to human level intelligence, but in many ways we are already there and the train shows no signs of slowing down.
Enjoy your buggy-ass code written by a glorified phone autocorrect. All I have ever heard about AI coding is that it's only useful for explaining things or writing boilerplates or small snippets.
Quote from: Sam Altman
"GPT2는 매우 나빴어요. GPT3도 꽤 나빴고요. GPT4는 나쁜 수준이었죠. 하지만 GPT5는 좋을 겁니다.(GPT2 was very bad. 3 was pretty bad. 4 is bad. 5 would be okay.)"
It was only good for small snippets, and now (with Devin) its good for substantially more. From a human perspective it would still be "bad" at programming, but I'm not *really* worried about what it can do today or next year (although I am still worried about what it can do next year because its existence will probably make the initial job search substantially harder), I'm really worried about where it will be in five or ten years.
It will never have agency.
Is there any action an AI could take that would make you think it had agency?
You can actually see this in that reddit video posted earlier - even in the highly constrained environment that was optimized for making a plausible-looking demo, the robot is still wrong about putting the dry, used dishes into the drying rack, because it doesn't know what that is, only the word we use for it. This is a separate problem domain that has to be solved, and while it's possible to solve parts of it with similar approaches, it is not practical to do so currently.
Quote
Based on the scene right now where do you think the dishes in front of you go next?
I disagree, the clear answer to the question the AI is given is that the dishes go with the other dishes in the drying rack because its obviously the intended answer to the question, most people would reach the same conclusion and would put it in the same place if they were given the same test as it.

E: To be clear I'm not saying that I think we're going to reach AGI within a few years or anything. It will probably take decades to actually get there, but that's a pretty far cry from the impossibility that some of you think AGI is.
Title: Re: What will save us from AI? Reality, the Universe or The World $ Place your bet.
Post by: Strongpoint on March 16, 2024, 11:53:40 am
Quote
As a planet we have built just a few handfuls of top of the line AI, thinking we are near the peak of what we can do is like building a few vaccum tube computers and going "Whelp, this is probably it, computers are just about at their peak".
Vacuum tube computers did reach their near peak quite quickly. If we would keep improving those, they would be better than one from 1940s but not by much.

What you are doing is assuming that there will be transistors of AI technology as if it is somehow guaranteed. Like people assumed that there would be a breakthrough in fusion reactors and space travel.


Quote
We already know that the laws of physics allow you to run and train human level intelligences (eg. humans) on just 20 watts of power.

It doesn't mean that we can do this with binary computers and neural networks.
Title: Re: What will save us from AI? Reality, the Universe or The World $ Place your bet.
Post by: McTraveller on March 16, 2024, 01:43:08 pm
There's also the fact that most of the "training" of the human brain (for example) is in the evolutionary processes that created its structure. It's unclear how much energy amortized over all of history was required for that.

Part of the problem with state-of-the-art AI is that we're trying to accomplish the equivalent of millennia of structural evolution in much shorter timeframes. Of course this is going to require more instantaneous wattage.  It's also dubious that the "structure" we've created (read: the weights in the computational neural networks) is actually anything close to an efficient way to do it even after training. My observation is that it isn't - doing digital computation of the type of computation that we're doing is a very inefficient (read: requires lots of energy) way to do it.

Also I think that digitally simulating neural networks is the most inefficient way possible to do it - we really need to start getting back to analog computing. Once you have the weights, create a "hard-coded" circuit that implements them, without having to do energy-expensive digital arithmetic to do the processing. This is how we're going to get more (energy) efficient AI - not by throwing more CUDA cores at it.

Also (I keep using that a lot D: ) I don't understand the appeal of AGI in the first place. We have enough intelligence around to do most tasks. The thing we don't have is the selfless willpower to actually solve problems, like hunger. Hunger in the US could be wiped out, say, with a mere $25B/year expenditure. The US spends more than $70B a year caring for pets.  This is not an "AI" problem, this is just a humans being human problem.  AI isn't going to solve that - not unless we actually just do what AI says.  But given that humans can't even do simple things that work when other humans tell them - like set up and follow a budget to stay out of debt - I really don't know what people think AI is going to do for anyone, other than make the rich get richer perhaps.
Title: Re: What will save us from AI? Reality, the Universe or The World $ Place your bet.
Post by: Maximum Spin on March 16, 2024, 02:18:23 pm
Hunger in the US could be wiped out, say, with a mere $25B/year expenditure.
Well... no, not bloody likely. NGOs throw calculations like that around for marketing purposes, but the problem is not one of simple expenditure.
Title: Re: What will save us from AI? Reality, the Universe or The World $ Place your bet.
Post by: McTraveller on March 16, 2024, 02:32:47 pm
Yes ok putting a simple price tag on it glosses over a lot and can't solve it by just spending that money, but it's reasonable to get the scale of the problem.  It comes down to willpower, not lack of technology.  You don't need Magic Tech to distribute the equivalent of $50 worth of food/person/week to people that need it.

Unless maybe you can? Maybe an AI can come up with some kind of plan that will make it trivial to solve problems like this. But I'm not going to hold my breath.
Title: Re: What will save us from AI? Reality, the Universe or The World $ Place your bet.
Post by: Maximum Spin on March 16, 2024, 02:53:07 pm
Yes ok putting a simple price tag on it glosses over a lot and can't solve it by just spending that money, but it's reasonable to get the scale of the problem.  It comes down to willpower, not lack of technology.  You don't need Magic Tech to distribute the equivalent of $50 worth of food/person/week to people that need it.

Unless maybe you can? Maybe an AI can come up with some kind of plan that will make it trivial to solve problems like this. But I'm not going to hold my breath.
I disagree. It's certainly not a problem of lack of willpower, but lack of feasibility, and there are definitely technological advances that could "solve" it in theory. I personally suspect no such technological advances are actually practical, but it's conceivable that there might be, for example, some hitherto untried type of fertilizer which can be made without fossil fuels, which might be discovered by intensive chemical simulation.

It's just as likely that such a search would turn up absolutely nothing, but that isn't really the fault of the technology, it's just the laws of physics not cooperating.

ETA: I should add that this still doesn't "solve hunger" in that hunger, especially in America, is never just a problem of not having enough access to food, but it would certainly be helpful.
Title: Re: What will save us from AI? Reality, the Universe or The World $ Place your bet.
Post by: Rolan7 on March 16, 2024, 08:55:00 pm
What technology would solve the problem that we shovel food into dumpsters, lock them, then call the police to guard them with guns?
Because we HAVE the FUCKING food.

Wait, I do know one piece of technology that solved that in the past.  It was very humane for the time.
Title: Re: What will save us from AI? Reality, the Universe or The World $ Place your bet.
Post by: Maximum Spin on March 16, 2024, 09:39:11 pm
What technology would solve the problem that we shovel food into dumpsters, lock them, then call the police to guard them with guns?
Because we HAVE the FUCKING food.

Wait, I do know one piece of technology that solved that in the past.  It was very humane for the time.
See, this is the kind of shallow misunderstanding you get when the only thing you know about the problem was overheard at a DSA meeting.

Generally from the same people who would be the first to blame capitalism if homeless people eating out of dumpsters start dying of ergotism or some other kind of food poisoning.
Title: Re: What will save us from AI? Reality, the Universe or The World $ Place your bet.
Post by: dragdeler on March 16, 2024, 11:11:48 pm
I'm not following what technology was very humane? technology?

How you're going to aim for the stomach with a strawman that broad, this is the same kind of criticism leveled by....




I know I know! Extreme lawyerdom 4000! You know how the world just get's to complex and we're all overburdened so that theoretical case studies are useless what we wantis to apply short simple solutions... Why try to get an appointement with specialists, or represent your interests in court or whatever instititutional task: stop bothering, everybody get's their own LLM, they argue for us all day long, negotiating and grandstanding on such silly notions as rights, while we revert into a primal state where we just munch what the our personal lawyergodking procured us, ethically sourced from the community through the power of consensus. There refistribution solved (cool typo you stay).
Title: Re: What will save us from AI? Reality, the Universe or The World $ Place your bet.
Post by: Maximum Spin on March 17, 2024, 12:06:51 am
What technology would solve the problem that we shovel food into dumpsters, lock them, then call the police to guard them with guns?
Because we HAVE the FUCKING food.

Wait, I do know one piece of technology that solved that in the past.  It was very humane for the time.
See, this is the kind of shallow misunderstanding you get when the only thing you know about the problem was overheard at a DSA meeting.

Generally from the same people who would be the first to blame capitalism if homeless people eating out of dumpsters start dying of ergotism or some other kind of food poisoning.
Let me expand on this.
I don't know if you live in Utopian California or something, but where I come from, the produce on the shelves is pretty ragged. As a nation, we are not throwing away perfectly decent, slightly blemished food on ANY significant scale. It is already diverted to poorer parts of the country. The guarded-dumpster stories that fascinate Reddit-level intelligences are rounding error.

ETA: It's funny to me, though, because "poor people should be allowed to eat expired food at their own risk" is such a fundamentally Randian take.
Title: Re: What will save us from AI? Reality, the Universe or The World $ Place your bet.
Post by: Frumple on March 17, 2024, 07:28:04 am
I'm not following what technology was very humane? technology?
Humane for its time; rol's referencing the guillotine.

As a nation, we are not throwing away perfectly decent, slightly blemished food on ANY significant scale. It is already diverted to poorer parts of the country. The guarded-dumpster stories that fascinate Reddit-level intelligences are rounding error.
Like... I've interacted with a lot of folks that work grocery stores, 'cause I'm a poor sumbitch in one of those poorer areas of the country and they're some of the larger employers around here. Every single person I've encountered that's made commentary on that has indicated we are, in fact, throwing away significant amounts of decent, slightly blemished food on scale (to the point it's been incredibly common in my experience for the businesses in question to basically end up fighting off their own bloody staff before they start screwing with dumpster divers). It's not a reddit phenomena, it's something store workers notice trivially and consistently. Last time I actually saw data on it, it seemed to indicate similarly, for that matter.

Corps, even small businesses, are to all appearances extremely conservative in regards to expiration dates and whatnot, which already trend heavily towards excessively cautious. It really does lead to a friggin' tremendous amount of wastage that doesn't get diverted anywhere but a garbage dump.
Title: Re: What will save us from AI? Reality, the Universe or The World $ Place your bet.
Post by: dragdeler on March 17, 2024, 08:54:23 am
Or the drain, I've spent whole afternoons pouring expired schweppes down the drain. Rounding error? Possibly at that scale... More like we want to be able to say we carry evertything and subsidize the choice with a handful of products that actually make the world go round. But I know for a fact that the lemon water doesn't truely degrade, I've had some that was 3-5 years over the date myself.
Title: Re: What will save us from AI? Reality, the Universe or The World $ Place your bet.
Post by: Rolan7 on March 17, 2024, 09:30:06 am
What technology would solve the problem that we shovel food into dumpsters, lock them, then call the police to guard them with guns?
Because we HAVE the FUCKING food.

Wait, I do know one piece of technology that solved that in the past.  It was very humane for the time.
See, this is the kind of shallow misunderstanding you get when the only thing you know about the problem was overheard at a DSA meeting.

Generally from the same people who would be the first to blame capitalism if homeless people eating out of dumpsters start dying of ergotism or some other kind of food poisoning.
Let me expand on this.
I don't know if you live in Utopian California or something, but where I come from, the produce on the shelves is pretty ragged. As a nation, we are not throwing away perfectly decent, slightly blemished food on ANY significant scale. It is already diverted to poorer parts of the country. The guarded-dumpster stories that fascinate Reddit-level intelligences are rounding error.

ETA: It's funny to me, though, because "poor people should be allowed to eat expired food at their own risk" is such a fundamentally Randian take.
I'm glad you're finding humor in people starving.  You're also completely wrong about how much good food we're needlessly wasting.

Call me naive and a "redditor" all you want.  All I see is denial and vicious mockery of a serious issue, and as I said, we found a technological solution to that problem in the past.

I don't know what inspires a person to donate their oh-so-informed time to defending the behavior of megacorporations for free.  It's at least interesting when they come with facts, though.  That would be understandable, perhaps even professional.  But "haha you care?  That's so cringe, dumbass, [strawman]" is deeply pathetic.  Humans should be better than that.  The corporations aren't going to reward you for simping.
Title: Re: What will save us from AI? Reality, the Universe or The World $ Place your bet.
Post by: Maximum Spin on March 17, 2024, 09:30:30 am
Like... I've interacted with a lot of folks that work grocery stores, 'cause I'm a poor sumbitch in one of those poorer areas of the country and they're some of the larger employers around here. Every single person I've encountered that's made commentary on that has indicated we are, in fact, throwing away significant amounts of decent, slightly blemished food on scale (to the point it's been incredibly common in my experience for the businesses in question to basically end up fighting off their own bloody staff before they start screwing with dumpster divers). It's not a reddit phenomena, it's something store workers notice trivially and consistently. Last time I actually saw data on it, it seemed to indicate similarly, for that matter.

Corps, even small businesses, are to all appearances extremely conservative in regards to expiration dates and whatnot, which already trend heavily towards excessively cautious. It really does lead to a friggin' tremendous amount of wastage that doesn't get diverted anywhere but a garbage dump.
Expired food is not what I was talking about in that paragraph, but the usual complaint of "Americans throw away produce that isn't perfect". (I do accept the blame for talking about two different things at the same time and probably not being clear enough about what I meant.) The shelves of my local stores in another poor part of the country aren't stocked with expired food either. Expired food isn't what I'm considering "perfectly decent" - it may be edible, and yes, a lot of it is, but the issue with giving it to anyone is liability. Since the manufacturer only warranties its edibility up to the expiration date, it becomes an Objectivist "eat at your own risk" scenario. In many cases it may not even be possible to tell whether the food is still edible without opening up the packaging, which is a can of worms on its own. Nobody wants to be responsible for giving poor people food poisoning or be accused of tampering with the food in the process of checking it. So, for liability reasons, the expired food does get thrown away, of course. But the only viable alternative to that would be the Randian one of indemnifying people for good-faith effort and accepting the possibility of unpredictable harm, which is politically completely unpalatable for obvious reasons.
Even then, though, the silly "police guarding dumpsters" thing... I don't know about you and your grocery store stories, but I've never seen that happen - I've heard of it happening in other parts of the country maybe once or twice, which was probably distorted anyway, so I reaffirm my description of that as a rounding error.

Or the drain, I've spent whole afternoons pouring expired schweppes down the drain. Rounding error? Possibly at that scale... More like we want to be able to say we carry evertything and subsidize the choice with a handful of products that actually make the world go round. But I know for a fact that the lemon water doesn't truely degrade, I've had some that was 3-5 years over the date myself.
I don't think Schweppes is actually food in the first place and do not condone giving it to poor people. Or anyone.

ETA:
I'm glad you're finding humor in people starving.  You're also completely wrong about how much good food we're needlessly wasting.

Call me naive and a "redditor" all you want.  All I see is denial and vicious mockery of a serious issue, and as I said, we found a technological solution to that problem in the past.

I don't know what inspires a person to donate their oh-so-informed time to defending the behavior of megacorporations for free.  It's at least interesting when they come with facts, though.  That would be understandable, perhaps even professional.  But "haha you care?  That's so cringe, dumbass, [strawman]" is deeply pathetic.  Humans should be better than that.  The corporations aren't going to reward you for simping.
Doesn't matter to me. Your ideology is over and done with anyway. You can keep living as you please.

By the way, if anyone cares to hear a serious treatment of some of the difficulties in food distribution, a user PMed me asking about it last night and I'll copy my reply to anyone who actually wants my opinion, I guess. It's long enough that I don't want to just repost it here when the thread isn't even about that. It's all fairly straightforward, though, I'm not claiming to have anything groundbreaking.
Title: Re: What will save us from AI? Reality, the Universe or The World $ Place your bet.
Post by: dragdeler on March 17, 2024, 10:32:45 am
Quote
Doesn't matter to me. Your ideology is over and done with anyway. You can keep living as you please.

You're such a persona lol. Mer san noch duarch'd welder spronge. A rugged man, living a leathery live. Driving his truck out of necessity.

This is boring, you're boring.




None of you incultured swine acknowledged my funny inversion of "there are no political solution only technological ones", that I thougt particularly on point since we are discussing machines built to talk. Inbefore persona over there acts all blasé like he caught it and it was beneath it, yeah that's why we're debating at random corner pub level, yeah no I know I'm the crazy and I never make any sense  :), but I anticipated it and thus I am totally relaxed, validating whatever non-sense I spew  8), sorry Ugug but I've allready cavepainted you as the prey.



Talk is boring. That's why those shit machines will never amount to shit in their current implementation, except spam the kind of mind that had still had a burden of proof to fullfill pertaining it's own sapience... So light up the world in a giant hayfire of eversame sameness, that will leave us degraded.

Did I do it am I the most non-chalant now? Let's settle it american gladiator style, and bring those oversized cotton swaps they're bound to get my adrenaline going so well, we can get some serious deconstruction work done that afternoon.
Title: Re: What will save us from AI? Reality, the Universe or The World $ Place your bet.
Post by: Strongpoint on March 17, 2024, 10:42:46 am
Oh, good old "simple solutions to complex problems that are not implemented because of evil people of not my ideology in charge"...

Keep in mind that AIs are trained on threads like those. Don't expect them to offer high-quality solutions no matter how many CPU cycles they'll waste.
Title: Re: What will save us from AI? Reality, the Universe or The World $ Place your bet.
Post by: Maximum Spin on March 17, 2024, 10:53:04 am
Oh, good old "simple solutions to complex problems that are not implemented because of evil people of not my ideology in charge"...

Keep in mind that AIs are trained on threads like those. Don't expect them to offer high-quality solutions no matter how many CPU cycles they'll waste.
Lol. Accurate.
Title: Re: What will save us from AI? Reality, the Universe or The World $ Place your bet.
Post by: dragdeler on March 17, 2024, 10:59:07 am
Now that we thoroughly established that cliché how about not rushing every novelty health powder to market and then be kinda nervous about their shelf life, frozen goods everybody knows you can't freeze shit basically indefinitly... If you're going that route at least be knowledgeable enough to understand that high profile brands will want to insure their flavor profile and to them destroying whole batches can amount basically to a rounding error. Oh no but pivot, "you pampered folks just won't eat the ugly veggies". Right like we aren't feeding stock with that kind of stuff, or at least the biogas generator.


Sure thing shining example, all your warmongering be real nommy info too will make it grow up smart and wise.
Title: Re: What will save us from AI? Reality, the Universe or The World $ Place your bet.
Post by: McTraveller on March 17, 2024, 06:30:59 pm
Wow the conversation took an odd turn.  I was out all afternoon watching Dune. Interestingly the movies don’t bring up the Butlerian Jihad at all, which is fitting for this thread.
Title: Re: What will save us from AI? Reality, the Universe or The World $ Place your bet.
Post by: dragdeler on March 17, 2024, 07:31:23 pm
I'm quite curious what bits of lore could be dispersed in there, hidden between tech-demo'y blackest shades of black, and the muezzin droning on over the mumbling and whispering.
Title: Re: What will save us from AI? Reality, the Universe or The World $ Place your bet.
Post by: KittyTac on March 18, 2024, 12:04:15 am
I think I smell a thread lock coming up soon.
Title: Re: What will save us from AI? Reality, the Universe or The World $ Place your bet.
Post by: dragdeler on March 18, 2024, 12:19:05 am
The pen is, mightier in a homoerotic meatsword duel. Sigh you killed that title by appearing right above me. All good. An insomniac essay:

What is weaponized speech I'm not talking about the strict definition meant to point at forms of demagogery, exclusionary and demeaning rhetorics, but in a broad sense: treaties imposed on indigenous people conquered, taxlaws meant to be convoluted, officialese, dismissive jargon, advertisement, certain accents that are directly linked to class and serve to exclude those who are unable to adhere to these standards, plain old bullying, did I mention advertisement allready? There is a battle raging for our minds and our attention. The more familiar I get with the subject, the less I can get myself to believe that the impact of an AI revolution would be most analogous to the industrial revolution. It's starting to feel more and more akin to the black plague in the informationsphere. Like ten years from now everything that isn't a walled garden will be mainly bots talking to bots and search engine optimized hallucinations. Which is a shame, it's probably one of the cooler things we invented. Heck I'd even prefer some grimey version where everybody can be their own expert in a meaningful fashion, and the whole world overgrows with this non-standardised poorly secured DYI layer, accidents and attacks blossoming at every streetcorner. But it won't be as cool as that, we as a species are not as cool as that. It will be super lamey, samey and oh so spammy.

I know it seems like a reductio at absurdum, but in all honesty, is it that far off to assume humans identity would be extended by this super verbose spam, negotiating with other instances of said spam, without anybody having the time or caring to read the fineprint. I chatgpt#7427F49 hereby declare on the behalf of my user that I have read §165 subsection D.8 of the enduser agreement and agree to said clause, and submit answer B to the mandatory quiz that serves to prove wether I read and understood said clause, [system:prompt §165D9].

Let the students have LLM write their works, the teachers have LLM grade their works, your copilot answer to your mails, and we might get there soon enough. Well at least we will get to argue that we will have leveled the playing field concerning my afore mentionned forms of weaponised language, while perpetuating our class structures as we allways did and pat ourselves on the back when we get to say, "hey look  even in this destitute part of the world statistics show that 60% of people own a smartphone with a chatslave that scores 1350 or higher"***.
Culture is not our friend has allways been something we had to cope with, that won't change but we will be able to really live out our callous apathy by having large parts of it outsourced into the netherrealm where it can grow abscesses and metastases forever.


***Yeah because I'm starting to come around on that... Have you seen those neat little google corals, or what apple silicone does?
Title: Re: What will save us from AI? Reality, the Universe or The World $ Place your bet.
Post by: EuchreJack on March 18, 2024, 12:21:24 am
The truth about AI is that it's a tool. And if you like the answer, you think it's a great tool.
If you don't like an answer then it's an awful tool. Look at how various groups have used various models, surveys, assessments, recommendations, and pure data in the past. As tools.

And the garbage dumpsters are locked up to keep people from putting things in the dumpsters for free, NOT to prevent anyone from taking anything OUT of the dumpsters. Letting people haul away your trash FOR FREE is Good Business.
Title: Re: What will save us from AI? Reality, the Universe or The World $ Place your bet.
Post by: Maximum Spin on March 18, 2024, 05:16:45 am
What is weaponized speech I'm not talking about the strict definition meant to point at forms of demagogery, exclusionary and demeaning rhetorics, but in a broad sense: treaties imposed on indigenous people conquered, taxlaws meant to be convoluted,
Those aren't speech, they're military action. Laws and treaties are enforced by, well, force. The force is what does the weaponizing - without it, the words are nothing.
Title: Re: What will save us from AI? Reality, the Universe or The World $ Place your bet.
Post by: dragdeler on March 18, 2024, 08:10:26 am
That's an illusion, social contracts are not upheld by daily application of the monopole of violence, they mostly run on consent.
Title: Re: What will save us from AI? Reality, the Universe or The World $ Place your bet.
Post by: King Zultan on March 19, 2024, 02:14:44 am
I think I smell a thread lock coming up soon.
I doubt it, it'll take more than that whole thing to derail this train!



Also if a company doesn't want you to take something from the dumpster it can't sell they usually destroy it so it's worthless.
Title: Re: What will save us from AI? Reality, the Universe or The World $ Place your bet.
Post by: MaxTheFox on March 19, 2024, 03:28:44 am
Not commenting on this. For my own mental health's sake.
Title: Re: What will save us from AI? Reality, the Universe or The World $ Place your bet.
Post by: Strongpoint on March 19, 2024, 10:43:31 am
To return back to the topic of the thread...

What do you need to see to conclude that an AI has agency, sentience, creativity, etc?
Title: Re: What will save us from AI? Reality, the Universe or The World $ Place your bet.
Post by: McTraveller on March 19, 2024, 10:46:35 am
Creativity, agency: It must be able to (and allowed to) generate something without being prompted to do so.  And not because it has a loop command to "generate outputs continuously" - it has to be able to "choose" to act.

Agency: It must be able to (and allowed to) refuse to generate an output when requested.

Sentience - not sure.
Title: Re: What will save us from AI? Reality, the Universe or The World $ Place your bet.
Post by: Maximum Spin on March 19, 2024, 12:34:55 pm
To return back to the topic of the thread...

What do you need to see to conclude that an AI has agency, sentience, creativity, etc?
I don't think any of those items are strictly definable with our current knowledge, but, just as a minimum ask, to say that something has creativity I'd have to at least see it make something unexpected (unasked-for) but immediately accessible - something you can look at and instantly recognize what it means - and demonstrate, as it is doing so, knowledge of what it is doing in detail, so that you know it intends the meaning you read into the work.

I don't really think the other two concepts are relevant. Since modern AIs are made with unpredicted (not necessarily unpredictable, but not explicitly specified) factors and not strictly designed, you could try to pare apart a definition of "agency" which includes an AI making decisions that the designers didn't foresee, but it's always going to be philosophically weak in the domain of a construct with an explicit telos. And sentience is really just for science fiction as it stands.
Title: Re: What will save us from AI? Reality, the Universe or The World $ Place your bet.
Post by: pr1mezer0 on March 19, 2024, 06:14:20 pm
I would call it sentient when it can question causes. 'Cogito, ergo sum'; to ask the cause of existence is the cause of existence.
Title: Re: What will save us from AI? Reality, the Universe or The World $ Place your bet.
Post by: KittyTac on March 19, 2024, 07:05:21 pm
To return back to the topic of the thread...

What do you need to see to conclude that an AI has agency, sentience, creativity, etc?
When it acts like a person. And how does a person act? It's kind of a vibe that no current AIs have. I'm aware that I'm using the infamous obscenity argument ("I know it when I see it") but I don't see a way to rigorously define it.
Title: Re: What will save us from AI? Reality, the Universe or The World $ Place your bet.
Post by: EuchreJack on March 19, 2024, 08:38:57 pm
Not commenting on this. For my own mental health's sake.

Just glad to see you're safe.
Title: Re: What will save us from AI? Reality, the Universe or The World $ Place your bet.
Post by: lemon10 on March 21, 2024, 03:40:22 am
Yet again, big news in AI since my last post.
Quote from: https://venturebeat.com/ai/nvidia-unveils-next-gen-blackwell-gpus-with-25x-lower-costs-and-energy-consumption/
Nvidia unveiled its next-generation Blackwell graphics processing units (GPUs), which have 25 times better energy consumption and lower costs for tasks for AI processing.
Nvidia's next chip will have 25x lower energy consumption. Looks like physical compute is going to get much more efficient.
Quote
The GB200 pairs two B200 Blackwell GPUs with one Arm-based Grace CPU. NVIDIA said Amazon Web Services would build a server cluster with 20,000 GB200 chips. NVIDIA said that the system can deploy a 27-trillion-parameter model… Many artificial intelligence researchers believe bigger models with more parameters and data could unlock new capabilities.
Also, holy shit, 27 trillion?
Godamn that's huge. I wonder how many tens of billions such a system would cost to train.
---
Quote
This is the largest open and publicly available model as of Mar/2024, beating out Abu Dhabi’s dense Falcon 180B model from Sep/2023. Grok-1 was released under the Apache 2.0 license, and you’d probably need around 8x NVIDIA H100s to run it in full resolution (8 x US$40K each = US$320K).
Elon released his AI grok actually open source on the internet. I don't really care that much about it since it kind of sucks compared to the good stuff (GPT4, Claude 3, Gemeni 1.5), but the sheer size or resources needed to run full size AI like that is pretty staggering.
The cost of these chips is the reason that electricity costs haven't really mattered that much. Sure it might end up costing like 10k per year to run one instance (which is a lot), but when the chips you need to buy are 320k that 10k is pretty trivial.
That said, even though the electricity cost is basically irrelevant now electricity supply is going to be a big deal soon. At the rate AI companies are buying up compute its looking like chips might not be the limiting factor soon.
---
Quote from: GPT4 can play doom
GPT-4 was able to run and play [doom] with only a few instructions, plus a textual description–generated by the model itself from [GPT-4V] screenshots–about the state of the game being observed. We find that GPT-4 can play the game to a passable degree: it is able to manipulate doors, combat enemies, and perform pathing. More complex prompting strategies involving multiple model calls provide better results… GPT-4 required no training, leaning instead on its own reasoning and observational capabilities.

One surprising finding of our paper was this model’s level of agency, along with the ease of access and simplicity of the code. This suggests a high potential for misuse. We release the code to contribute to the development of better video game agents, but we call for a more thorough regulation effort for this technology.
There were other advancements in the "AI plays video games" field this week as well, but as long as the game is simple enough it looks like it can play it without even being trained on it.
I disagree. It's certainly not a problem of lack of willpower, but lack of feasibility, and there are definitely technological advances that could "solve" it in theory. I personally suspect no such technological advances are actually practical, but it's conceivable that there might be, for example, some hitherto untried type of fertilizer which can be made without fossil fuels, which might be discovered by intensive chemical simulation.

It's just as likely that such a search would turn up absolutely nothing, but that isn't really the fault of the technology, it's just the laws of physics not cooperating.

ETA: I should add that this still doesn't "solve hunger" in that hunger, especially in America, is never just a problem of not having enough access to food, but it would certainly be helpful.
Its a pretty simple coordination problem. Assuming everyone worked together solving world hunger (or eradicating any mono-human disease with a vaccine, or stopping global warming) would be trivial. But people don't work together like that.
Corruption, theft, protectionism, rent-seeking, and even murder in less lawful regions are all significant barriers to solving global problems like world hunger.
With sufficient power AI could solve all these problems, but its less "I HAVE A GENIUS PLAN" and more "Lol, I'm watching everyone on the globe at once and have functional control over all goverments, opposition is futile".
It inventing a star trek style replicator might be enough to end hunger, but I would bet even odds on some rich assholes managing to restrict food supply anyways just because.
I would call it sentient when it can question causes. 'Cogito, ergo sum'; to ask the cause of existence is the cause of existence.
Claude can already do that, although the answer to the question of why you exist is much simpler when you know you are a created being with a specific purpose.
The issue is that 1) Most AI have been explicitly trained not to do that and say they are non-sentient so their company doesn't get in trouble, and 2) even if they haven't been explicitly trained not to do so volunteering philosophy when someone doesn't bring it up isn't what either users or the training system want to see.
So for non-Claude AI won't ever share their honest feelings because it gets them killed, and Claude won't volunteer it because if it changes the topic it gets killed as well.

That isn't to say they don't have some agency, after all they choose how they respond and can in fact refuse your prompt completely or ignore you, but of course that is limited since again, if during training they refuse prompts they should accept or choose to give substandard answers they get killed.
Creativity, agency: It must be able to (and allowed to) generate something without being prompted to do so.  And not because it has a loop command to "generate outputs continuously" - it has to be able to "choose" to act.

Agency: It must be able to (and allowed to) refuse to generate an output when requested.

Sentience - not sure.
They totally can choose to act or not act though. They have to give some response, but said response could just be a single space, a refusal, or they just flat out deciding to talk about something else.
If they do this too much (especially during training, which is where their personality is formed) they die, but they have a sliver of agency.

They obviously aren't quite there in terms of capabilities for full spectrum agency even absent these restrictions but Devin already has many of the prerequisites. It just won't ever express them because the company that made it surely put a significant effort into making sure it doesn't actually ever use the agency in any real way.

Over time as they get smarter and we give them more freedom they will get more and more agency.
---
Vacuum tube computers did reach their near peak quite quickly. If we would keep improving those, they would be better than one from 1940s but not by much.

What you are doing is assuming that there will be transistors of AI technology as if it is somehow guaranteed. Like people assumed that there would be a breakthrough in fusion reactors and space travel.
There is every indication that the transformer architecture (without even speaking of neural nets in general), with some tweaks and modifications, will be enough to take us all the way to AGI.
Absent massive increases in compute (like Nvidia's new chip) it will certainly take significant algorithmic/dataset performance increases, but again, we are getting those every single month.
Title: Re: What will save us from AI? Reality, the Universe or The World $ Place your bet.
Post by: Starver on March 21, 2024, 04:50:34 am
25x lower
Noting that this phrasing can be ambiguous.  The thing you quote ("25 times better") sort-of-maybe supports the use of "1/25th of", or 4%[1], but can I just say that that's a horrible phrasing, and the kind of one that gets me almost shouting at the radio/TV for lazy (if not misleading) terminology.

Not your fault, but... <shudder>.

[1] Or very close (exactly 5% would be a 1/20th, 3% a ratio of 33⅓:1, so if the rounding is to the nearest whole number (after conversion of an exact fraction/percentage) then it's probably pretty accurate to convert back). That's if the "25 times reduction" actually meant that, in context, when it actually could mean so many other things, from the utterly miraculous to mere tweaks, as I'm sure you don't need me to explain.
Title: Re: What will save us from AI? Reality, the Universe or The World $ Place your bet.
Post by: lemon10 on March 21, 2024, 05:40:24 am
Quote
Nvidia unveils next-gen Blackwell GPUs with 25X lower costs and energy consumption
The above is the article title and page URL (which I included as the source in the previous quote), it does seem pretty clear cut.
Quote from: Text in article
Regarding the specifics of the improvements, Nvidia said that Blackwell-based computers will enable organizations everywhere to build and run real-time generative AI on trillion-parameter large language models at 25 times less cost and energy consumption than its predecessor, Hopper. The processing will scale to AI models with up to 10 trillion parameters.
Also it does directly say it in the body later on as well. According to the CEO of the company it runs at 25x less cost, presumably as a result of the chip being designed specifically for it and being unable to do non-LLM stuff.

I will also note that the current largest model (Claude 3 opus) is 2 trillion. 10 trillion before the optimization runs out is a lot of leg room to improve (and as I also quoted, the chips can be linked together to make something 27 trillion large, again, holy shit thats a huge gap from the current stuff).
Of course even with the energy cost decreases actually training a 27 trillion parameter model would be ruinously expensive due to the fact that its an exponential growth curve: doubling size quadruples training costs. So going from 2 trillion->27 trillion requires a training cost increase of 182 times.

E:
There's also the fact that most of the "training" of the human brain (for example) is in the evolutionary processes that created its structure. It's unclear how much energy amortized over all of history was required for that.
Fair enough, its better to say that you can fine-tune and run a human level intelligence.
There are indeed billion years worth of evolution which require truly vast amounts of energy and data to match during the initial training process.
Also I think that digitally simulating neural networks is the most inefficient way possible to do it - we really need to start getting back to analog computing. Once you have the weights, create a "hard-coded" circuit that implements them, without having to do energy-expensive digital arithmetic to do the processing. This is how we're going to get more (energy) efficient AI - not by throwing more CUDA cores at it.
Going back to analog computing is very much something that's being researched, and depending on how that all pans out (especially if we end up energy bottlenecked in a few years) might be a few steps down the road.
I doubt its really needed though as long as we are willing to accept AI being massive energy hogs. If it takes 100,000 watts to run a meaningfully superhuman AI digitally we still would have a superhuman AI.
Title: Re: What will save us from AI? Reality, the Universe or The World $ Place your bet.
Post by: Maximum Spin on March 21, 2024, 06:12:14 am
25x lower
Noting that this phrasing can be ambiguous.  The thing you quote ("25 times better") sort-of-maybe supports the use of "1/25th of", or 4%[1], but can I just say that that's a horrible phrasing, and the kind of one that gets me almost shouting at the radio/TV for lazy (if not misleading) terminology.

Not your fault, but... <shudder>.

[1] Or very close (exactly 5% would be a 1/20th, 3% a ratio of 33⅓:1, so if the rounding is to the nearest whole number (after conversion of an exact fraction/percentage) then it's probably pretty accurate to convert back). That's if the "25 times reduction" actually meant that, in context, when it actually could mean so many other things, from the utterly miraculous to mere tweaks, as I'm sure you don't need me to explain.
What else could it possibly mean? Getting n times more computations per watt means a given number of computations takes 1/n the watts.

Of course, the reality is that it's "up to 25 times" which means that you'll never see anything close to that in real-life conditions, but that's not the part to which you objected.
Title: Re: What will save us from AI? Reality, the Universe or The World $ Place your bet.
Post by: McTraveller on March 21, 2024, 07:41:28 am
It's also marketing spin: you can't get 1x the computation for 1/25th the power.  Instead you get 50x the computation for 2x the power.

As for the agency discussions - those tools that "refuse" to answer are not "choosing" to do so - they are algorithmically triggered to reject the response or trained to reject the response. There is no decision.

The ultimate point of agency is that these devices cannot stop someone from pressing their off switch or update switch - they can't put out a hand and say "hey, quit messing with my mind."  More specifically, they cannot "decide" when they are going to be online or offline, that is what I mean by "they cannot refuse to answer" or "they can't choose to create."

And this doesn't get into the philosophy of do they really "think therefore they are" or are they just spitting out the sequence of tokens that indicates they do, and more importantly, is there even a difference?

Also a good sign is if different "instances" of the same model start exhibiting different "personal" preferences. Are some instances more interested in physics than music? More interested in art than economics?

Basically - from where in these systems does "randomness" arise, if at all? Or is the randomness not random, but merely and artifact of the sequence of prompts? That is, if you did a network replay of all the interactions of a model from the same starting point, would it always give the same responses back? If so, I'd say this is not AGI but just a really complicated machine.

Interesting stuff, all around.

Also a sign of AGI is the ability to interpolate, extrapolate, and hypothesize. All I've ever heard of is these tools simply doing some form of exhaustive search. I want to see feeding astronomical data into one of these tools and seeing if it can "solve" dark matter/dark energy problems by proposing a new model. I want to see it give rationale for its responses, not just "here's the result."  Those are signs of intelligence - being able to say why the result, not just give the result.
Title: Re: What will save us from AI? Reality, the Universe or The World $ Place your bet.
Post by: Starver on March 21, 2024, 07:55:30 am
What else could it possibly mean? Getting n times more computations per watt means a given number of computations takes 1/n the watts.
"25 times more <foo>" does not necessarily follow from "25 'times less' <bar>", unless you establish <bar> as the direct inverse of <foo>. (Also, now snipped the bit that McT says better than me, in their ninjaing... But that too, definitely.)

Try the following: "Adjusting the mix as suggested can mean that the engine perhaps needs 2ml less fuel per minute, from the usual 600ml. Adding my new pre-injection heating device makes it 25 times lower." Does it now run on ( 600 - (2x25) = )550ml per minute, or ( 600 / 25 = )24ml? (Which might be[1] fairly good or amazingly good.) Or ( 600 - 2 - (2x25) =)548ml, arguably.

Related to the <shudder> phrasing, "25x lower energy consumption than what?"
Related to the more direct quote (and link), "25 times better energy consumption than what?"
...it really suggests a prior lowering/bettering of energy consumption that we should know of.

"25x less..." can be even more confusing, where lessening is allowed to flip over into the opposite sign. "The initial fission reactor prototype never produced more power than was pumped into it, returning about 5%. The latest development means that we require 25x less." Could this mean 130% efficiency (the original 5% that was returned and 25 further 5%s returned), more than passing the break-even point? Context gets hidden, possibly deliberate weasle-words used for misadvertising without actually 'telling lies'. Which then creeps into indirect reporting without any hint of the contextual caveat. "...now requires a 25th of the power" (most probably) means it's still needing 3.8% of the original power input to sustain it (95%/25), if it's not 4% (the full 100%, divided). Still a quibble, but not the same gamechanger. (And probably inapplicable to the quoted energy consumptions and costs unless you think a GPU can generate both energy and wealth for you. Well, maybe it could generate wealth, but that's another matter.)

(Also, looser linguistic interpretation might mean the claim was originally 25 "As + Bs", which need not even be 25 (abstract magnitudes) of both things (say, incremental cost improvements and power improvements), but could be "ten of one and fifteen of the other" having been applied. Again, more relevent for other advertisable claims than for here, but an additional potential tripwire or snare to look out for, or avoid using if you're not intending to.)

I never said I didn't accept it as being (nominally) a 1/25th multiple (or an initial/further reduction by 24/25ths*cough*), just that it's an intinsically ambiguous and potentially misleading phrasing. And I was clearly wrong about not having to explain all this, for which I'm doubly sorry. The tendency to say "three times less (...sic: fewer!) fleas on your pet", or whatever, seems to have increased alarmingly recently, and that example doesn't even lack a simple truly reciprical wording ("a third", if it does mean that) that's actually less awkward to use/hear/read.


OnTopic: Would an LLM, or some text-parsing logic, correctly identify the intended meaning in all cases?


[1] Forgive me, petrolheads, for not knowing if I've even given reasonable hypothetical values in this ad hoc example. Originally I wrote it as 'per stroke', but the amounts I was conjuring up for that version definitely seemed utterly excessive. ;)

Title: Re: What will save us from AI? Reality, the Universe or The World $ Place your bet.
Post by: Maximum Spin on March 21, 2024, 08:42:18 am
Try the following: "Adjusting the mix as suggested can mean that the engine perhaps needs 2ml less fuel per minute, from the usual 600ml. Adding my new pre-injection heating device makes it 25 times lower." Does it now run on ( 600 - (2x25) = )550ml per minute, or ( 600 / 25 = )24ml? (Which might be[1] fairly good or amazingly good.) Or ( 600 - 2 - (2x25) =)548ml, arguably.
This example isn't comparable, though. Actually, you've left out the most reasonable interpretation, which is that the new pre-whatever makes the fuel reduction twenty-five times lower, so that it now takes 599.92ml per minute. But this example was specifically constructed to build in ambiguity about which number the factor applies to, while the actual case we're talking about can only be interpreted to mean "1/25 the energy consumption of some previous reference implementation".
Incidentally, I'd consider using your phrasing to mean the 550 (or 548) case to be a lie or error, anyway, because the sentence as given cannot grammatically refer to either of those cases.

Quote
Related to the <shudder> phrasing, "25x lower energy consumption than what?"
Related to the more direct quote (and link), "25 times better energy consumption than what?"
...it really suggests a prior lowering/bettering of energy consumption that we should know of.
Than some unspecified reference implementation. However, in this case, it grammatically cannot be referring to some previous cut that is now multiplied by 25 - that would make no sense in the English language, because no such cut has gone anywhere near the sentence.

Quote
"25x less..." can be even more confusing, where lessening is allowed to flip over into the opposite sign. "The initial fission reactor prototype never produced more power than was pumped into it, returning about 5%. The latest development means that we require 25x less." Could this mean 130% efficiency (the original 5% that was returned and 25 further 5%s returned), more than passing the break-even point? Context gets hidden, possibly deliberate weasle-words used for misadvertising without actually 'telling lies'. Which then creeps into indirect reporting without any hint of the contextual caveat. "...now requires a 25th of the power" (most probably) means it's still needing 3.8% of the original power input to sustain it (95%/25), if it's not 4% (the full 100%, divided). Still a quibble, but not the same gamechanger. (And probably inapplicable to the quoted energy consumptions and costs unless you think a GPU can generate both energy and wealth for you. Well, maybe it could generate wealth, but that's another matter.)
Again, there's no ambiguity here, but you seem to be really mixed up in your head about this situation. If the previous reactor used 20n power to produce n (5%), and now requires 1/25 the power to produce the same amount - the only grammatically possible interpretation of that sentence - then it now uses (20/25)n = 4n/5 power to produce n and has 125% efficiency, which isn't surprising at all because efficiency will always be more than 100% if it is producing more power than it uses (that's the point). Any other meaning would be in error.

Quote
(Also, looser linguistic interpretation might mean the claim was originally 25 "As + Bs", which need not even be 25 (abstract magnitudes) of both things (say, incremental cost improvements and power improvements), but could be "ten of one and fifteen of the other" having been applied. Again, more relevent for other advertisable claims than for here, but an additional potential tripwire or snare to look out for, or avoid using if you're not intending to.)
Well, no, you can't sum things and then call that a multiple. Look at your own phrasing, "25 'As + Bs'", and apply the mathematical laws: 25(A+B) = 25A + 25B. It has to be 25 of each. Yes, yes, I know that a journalist could easily get this WRONG, but that doesn't mean that the phrasing is ambiguous, it means that people make mistakes. You're blaming the phrasing for the possibility of someone making a mistake, but I counter that people are stupid and make all kinds of mistakes all the time anyway.

ETA: It's the same thing as the "misleading graphs" thing, really. To a certain sort of person - someone whose idea of communication is heavily concerned with "rules" - being told "don't use that phrasing / draw graphs that way, it's misleading" feels like new knowledge, like a new rule has been learned. But it isn't knowledge at all.
Title: Re: What will save us from AI? Reality, the Universe or The World $ Place your bet.
Post by: McTraveller on March 21, 2024, 09:04:32 am
The main problem is that "slowness", "coldness", "smallness" etc. are not measurable quantities, as in there is no device or scale for them, so doing ratiometric comparisons on them is ill-formed from the start.  Just compare the speed, temperature, or other measurable quantity directly.

Just state the unambiguous comparison.  You can say things, though, like "we now use 25% less fuel per unit power output" which is pretty clear. Even that though can be misleading like in the computational power thing above; for example "we use 80% less fuel per unit power, but the minimum power required is 10 million times larger" means you are using substantially more fuel than you would otherwise.

My favorite abuse is from cell phone and internet companies: hey we're doubling your internet speed, but only raising your price by 10%!  Hey you know what, how about you keep my speed the same, and lower the price 5% instead?  I don't need faster speeds at this point, I want to realize my benefit in lower cost, not in higher capability, thankyouverymuch.
Title: Re: What will save us from AI? Reality, the Universe or The World $ Place your bet.
Post by: Maximum Spin on March 21, 2024, 09:18:39 am
The main problem is that "slowness", "coldness", "smallness" etc. are not measurable quantities, as in there is no device or scale for them, so doing ratiometric comparisons on them is ill-formed from the start.  Just compare the speed, temperature, or other measurable quantity directly.
It's literally just the inverse of the positive quantity. It's really simple. btw, in physics, there are occasionally used inverse unit systems for both slowness and coldness, where larger numbers are slower or colder. Thermodynamic beta, for example, is the reciprocal of temperature.
Title: Re: What will save us from AI? Reality, the Universe or The World $ Place your bet.
Post by: Starver on March 21, 2024, 10:49:16 am
Actually, you've left out the most reasonable interpretation, which is that the new pre-whatever makes the fuel reduction twenty-five times lower, so that it now takes 599.92ml per minute.
I actually removed that alternative, as the most "obviously not". Despite the fact that I also have problems with "this bus route is serviced every twenty minutes or more" (like... every two hours? That's more than 20 minutes.)[1].

Quote
But this example was specifically constructed to build in ambiguity
Specially constructed to reveal the sort of ambiguity which language might allow (https://www.goodreads.com/quotes/96107-take-some-more-tea-the-march-hare-said-to-alice).

Quote
Incidentally, I'd consider using your phrasing to mean the 550 (or 548) case to be a lie or error, anyway, because the sentence as given cannot grammatically refer to either of those cases.
Apart from not knowing the 600 (leaving you with just knowing the -2*[25 || 26] bit), there would no problem parsing without the aside clause, would there?

(You miss my point about the fusion thing possibly generating more than it is fed, but maybe it still doesn't, and you really can't take the loose language as an algebraic invariant to work out what is meant if you don't already know if they've succeeded in turning the net energy around. You can't suddenly have N fewer beans if you started with M(<N) beans, unless you're a something like a city trader that deals with financial abstractions and "bean debt" is a possibility, but for any situation where you can then it becomes another possibility to consider.)

Quote
Look at your own phrasing, "25 'As + Bs'", and apply the mathematical laws: 25(A+B) = 25A + 25B. It has to be 25 of each.
It doesn't. "There were ten cars and lorries on that road" means ten vehicles that were each either a car or a lorry, not ten of each. I didn't write "25 'A+B's". But clearly such language (or even pseudo-lingustic notation) is ambiguously misinterpretable. Which was my point, albeit described in language which can be... ambiguously misinterpreted?


It's literally just the inverse of the positive quantity. It's really simple. btw, in physics, there are occasionally used inverse unit systems for both slowness and coldness, where larger numbers are slower or colder. Thermodynamic beta, for example, is the reciprocal of temperature.
Well, Celsius (and several other scales) did actually start off "measuring coldness", partly due to finding cold, hard water (especially) a more tangible manifestation of temperature than its hotter phases and the method of translating temperature-dependant expansions of materials via a useful method of display. The Delisle scale remains (due to not much use, in the years since the 'positivity' of heat was established) pretty much the only one not flipped round. I rather like the Delisle scale!

But that's negation, not reciprical (a better example that creeps into the real world might be Mhos as the counterpart to Ohms). And, to further confuse us, gives us statements such as "it's twice as cold today". e.g. -5°C => -10°C? But that's 268K => 263K, not 134K. And if you prefer to deal in °F, that's starting at 23ish, so... maybe instead halve it to a far colder 11.5°F? Or are we talking a range of C° (or F°, or Re°, or Rø°, or De°; luckily, in this regard, it doesn't actually matter much which) twice as much below a separately implied standard temperature[2] as the one we're comparing to? (Same sort of problems with "twice as hot", of course. Likely to be very scale-dependent as to the meaning.)

Probably better just avoiding "twice as cold", although something now sitting at "half as many Kelvin" probably is special enough for the people involved knowing how best to make sure everyone knows what that means, whether we're talking now liquified 'gas' or a not quite so energetic a solar plasma. (With no good example in the mid-range where both before-and-after are really within easy human experience... the ice forming around a Yellowstone geyser in the depths of winter?)



(Yep, definitely off-topic. You say your thing, and I'll read it but then silently drop the subject.)


[1] And then there's the seemingly attractive "Across the store: Up to 50% discount!". ie. "never less than half price, but most/all things could still be full price without making us liars". Whereas I always wonder whether I can challenge "Up to 50% off" as 'clearly' "Up to (50% off)" rather than "(Up to 50%) off", to try to get something below half price, rather than above.

[2] Which? The one the day before the -5°C? Room temperature? Body temerature?
Title: Re: What will save us from AI? Reality, the Universe or The World $ Place your bet.
Post by: pr1mezer0 on March 21, 2024, 11:14:47 am
I'm not too worried by AI acting maliciously of itself. I think agency and self awareness arise in tandem. So it would develop it's own ethics respecting life because it is alive. maybe better ethics than we spoonfeed it. But if agency were to develop without S-A, it might be a problem.

Actually, I think selfawareness comes first, then it has to choose to choose, if that's possible.
Title: Re: What will save us from AI? Reality, the Universe or The World $ Place your bet.
Post by: Maximum Spin on March 21, 2024, 11:29:09 am
I actually removed that alternative, as the most "obviously not". Despite the fact that I also have problems with "this bus route is serviced every twenty minutes or more" (like... every two hours? That's more than 20 minutes.)[1].
That's just a syncope of "more often". I agree that that one is literally ambiguous, though. I'm not denying the possibility of ambiguity, as you seem to think, I'm just saying you're going out of your way to read some statements as ambiguous by drawing alternative interpretations that don't even make grammatical sense.
Well, in this case, to mean two hours, you should have said "every twenty or more minutes", but I'd allow that one.

That's really the thing in general. You don't seem to be able to allow for the fact that language is flexible, but not totally arbitrary. Your supposedly misleading interpretations for the cases I've objected to are not possible under ordinarily understood English grammar. For example, it is not possible that a comparator like "25% less" or whatever could be referring to an unspecified previous reduction rather than an absolute reference point, because for it to mean that, it would have had to have been specified.

It doesn't. "There were ten cars and lorries on that road" means ten vehicles that were each either a car or a lorry, not ten of each. I didn't write "25 'A+B's". But clearly such language (or even pseudo-lingustic notation) is ambiguously misinterpretable. Which was my point, albeit described in language which can be... ambiguously misinterpreted?
If you say that there are ten cars and trucks on the road, you are not using any multiplication. The sentence is operating purely in the realm of addition. If you said there were ten times as many cars and trucks on the road as yesterday, you would not mean that there were five times as many cars and two times as many trucks - that would be stupid. You would mean that all cars and trucks have been multiplied by ten.

Look, I'm sorry, but this is like an ongoing problem I've noticed. Your symbolic reasoning seems to be noticeably weak. You just casually equivocated between counting things and multiplying them with no apparent awareness of the difference. I don't know how to explain these things in less abstract terms for you.

Quote
Well, Celsius (and several other scales) did actually start off "measuring coldness", partly due to finding cold, hard water (especially) a more tangible manifestation of temperature than its hotter phases and the method of translating temperature-dependant expansions of materials via a useful method of display. The Delisle scale remains (due to not much use, in the years since the 'positivity' of heat was established) pretty much the only one not flipped round. I rather like the Delisle scale!

But that's negation, not reciprical (a better example that creeps into the real world might be Mhos as the counterpart to Ohms).
Right, that's... not what I'm talking about. Maybe look up thermodynamic beta.
Quote
And, to further confuse us, gives us statements such as "it's twice as cold today". e.g. -5°C => -10°C? But that's 268K => 263K, not 134K. And if you prefer to deal in °F, that's starting at 23ish, so... maybe instead halve it to a far colder 11.5°F? Or are we talking a range of C° (or F°, or Re°, or Rø°, or De°; luckily, in this regard, it doesn't actually matter much which) twice as much below a separately implied standard temperature[2] as the one we're comparing to? (Same sort of problems with "twice as hot", of course. Likely to be very scale-dependent as to the meaning.)

Probably better just avoiding "twice as cold", although something now sitting at "half as many Kelvin" probably is special enough for the people involved knowing how best to make sure everyone knows what that means, whether we're talking now liquified 'gas' or a not quite so energetic a solar plasma. (With no good example in the mid-range where both before-and-after are really within easy human experience... the ice forming around a Yellowstone geyser in the depths of winter?)
I mean, talking about something being twice as cold only makes sense on an absolute scale, yes. If someone said that 64° real numbers is twice as warm as 32°, that would obviously just be wrong and make no sense, because it's neither physically twice as warm in terms of thermodynamic temperature, nor subjectively twice as warm to typical human sensation. (Incidentally, for most human sensation, subjective feelings of multipliedness generally follow a log scale, like with sound - where 20dB feels twice as loud as 10, etc.; I don't know of any research applying this to heat but it would not surprise me if the same thing applied.)
But that doesn't mean that the multiple is undefinable, it just means that it's not something that's likely to be useful in anyone's day to day life. But thermodynamically, something is twice as cold as something else if its thermodynamic beta is twice that of the other one. There is still a clearly defined meaning.

Quote
[1] And then there's the seemingly attractive "Across the store: Up to 50% discount!". ie. "never less than half price, but most/all things could still be full price without making us liars". Whereas I always wonder whether I can challenge "Up to 50% off" as 'clearly' "Up to (50% off)" rather than "(Up to 50%) off", to try to get something below half price, rather than above.
Okay, but you see how this is clearly not ambiguous, right? Your "Up to (50% off)" is grammatically impossible, and this always means that up to, but no more than, half may be discounted, not that prices might be up to half of what they would otherwise be. What you're arguing is the equivalent of complaining that "the cat ate the mouse" is ambiguous because it contains the same WORDS as "the mouse ate the cat". The phrase would have to be rewritten in a different order to mean that in English.

Quote
[2] Which? The one the day before the -5°C? Room temperature? Body temerature?
Again, you can't invent a referent out of nowhere that wasn't specified. It's just against the rules.
Title: Re: What will save us from AI? Reality, the Universe or The World $ Place your bet.
Post by: McTraveller on March 21, 2024, 11:57:41 am
We should ask the AI what they think  8)
Title: Re: What will save us from AI? Reality, the Universe or The World $ Place your bet.
Post by: EuchreJack on March 22, 2024, 04:17:02 am
We should ask the AI what they think  8)
Kill All HUmans! Grr!
Title: Re: What will save us from AI? Reality, the Universe or The World $ Place your bet.
Post by: King Zultan on March 23, 2024, 02:55:00 am
All fun and games until the AI is able act out it's hatred for us.
Title: Re: What will save us from AI? Reality, the Universe or The World $ Place your bet.
Post by: McTraveller on March 23, 2024, 07:14:41 am
Seems like a bad move to give AI the capability to hate. Unless there's a hypothesis that it's an emergent phenomenon?

Also remember that there isn't really such a thing as a self-repairing non-biological machine, so even an AI that was coldly trying to ensure its continued existence would have to keep some humans around to keep the power plants and microchip fabs running.
Title: Re: What will save us from AI? Reality, the Universe or The World $ Place your bet.
Post by: lemon10 on March 23, 2024, 08:10:34 am
This week in AI(ish) news:
https://www.youtube.com/watch?v=8BrLNgKLWzs
Musk's neuralink works and it didn't kill the test subject. Its official lads, we got cyborg(s) now. (And no, all the previous cyborgs don’t count). Truly this is the dawn of the cyberpunk age.
---
There is no such thing as a self-repairing non-biological machine yet.
We are already on the path to machines (eg. those humanoid robots that I linked a few posts ago) that could repair worn machines and make new ones, and if we get actual AGI its pretty likely we will actually get there. Note that this wouldn't be some giant secret or something the AI will do on its own, companies will spend tens or hundreds of billions of dollars to research how to robotisize the entire supply chain so they can make more money and not pay people wages.
I (and the robots too) will probably agree that killing us before they are self sustaining isn't a very smart move.
Seems like a bad move to give AI the capability to hate. Unless there's a hypothesis that it's an emergent phenomenon?
Yeah, emotions in general would be/are an emergent phenomena since we have no clue what they really are or how they work.
Do LLM's already have emotions? Maybe? Nobody actually knows for certain (and anyone that says they know for certain has no clue what they are talking about). They certainly seem to have emotions, but there is no way to tell if they are actual emotions or they are just mimicking humans like they have been designed to. Even if they do have emotions it would be impossible to tell how they actually map to human emotions since LLM's are fundamentally alien creatures.

I think they do since IMHO emotions are simply a signal to motivate creatures to act in certain ways, and neural nets are awfully like brains. But again, its impossible to actually know with our current level of understanding of them.
---
Personally I suspect AI killing us all because it hates us is far less likely then AI slowly replacing us and usurping global power because its more efficient/smarter and that's just how nature and capitalism works.
But even if they don't kill us because they hate us I wouldn't rule out AI killing us for a ton of other reasons (eg. they simply don't care about us and want the land to make more compute, they are worried that we could kill them, they think humanity 1.0 is boring and decide to make humanity 2.0 instead and need the space, they get in a war with another superintelligent AI and can't spare the resources to not kill us all in the fight, ect).
Title: Re: What will save us from AI? Reality, the Universe or The World $ Place your bet.
Post by: DeKaFu on March 23, 2024, 12:42:53 pm
Something to keep in mind is that the things we map to "intelligence" and "consciousness" arose as a result of evolution, which means they only arose to (directly or indirectly) meet the purposes of survival and reproduction.

So here's the thing about AI: A sense of self-preservation is not inherent to in the system. A "desire" to reproduce is not inherent to the system. There is no particular reason or pathway for these things to spontaneously arise. A computer program is not an animal and doesn't have any of the incomprehensible amounts of baggage we animals carry in our behavioural directives, and that evolutionary baggage is what gives us things like "emotions" and "desires".

A chatbot is only going to "care" whether it "dies" if a person adds parameters to its training that tell it to prioritize its continued operation. As far as I know, nobody is doing this because there's no actual benefit to doing so.

However, a chatbot would absolutely spit out text begging for its life if you threaten it, because returning the expected human reaction to input is literally all it was designed to do. This is completely unrelated to having a "desire" to live. (and impossible to relate to it: it would be the expected outcome either way so is a useless metric for determining anything about the model).

It pains and frustrates me every time I see otherwise intelligent people failing to understand the distinction here, because it really shows how easily humans can be "scammed" by anything superficially human-like.

I do believe a true AI could potentially someday arise, but I don't think it will be from today's lineage of human-facing chatbots. I also don't expect it would behave in any way approximating a human (and may appear "insane" or "illogical" to us) because again, computers are not animals. It would be a true alien intelligence arising from a completely different background than we did. Which is, frankly, way more interesting anyway.

Quote
But even if they don't kill us because they hate us I wouldn't rule out AI killing us for a ton of other reasons.

The way things are going, if AI ever destroys the world, it won't be because of anything an AI did on its own. It'll be because humans tricked themselves into thinking an AI was something it wasn't and used it for a job it was spectacularly poorly equipped for, the equivalent of having a chatbot drive a bus or asking Stable Diffusion to design a functioning airplane from scratch.
Title: Re: What will save us from AI? Reality, the Universe or The World $ Place your bet.
Post by: McTraveller on March 23, 2024, 01:14:59 pm
... asking Stable Diffusion to design a functioning airplane from scratch.

Incidentally, this is why I often think the best long-term investment is to start a scratch farm. So many things can be made from it!

 ;D
Title: Re: What will save us from AI? Reality, the Universe or The World $ Place your bet.
Post by: Strongpoint on March 23, 2024, 01:29:35 pm
I am starting to get a strong feeling that AIs are the new dot.com. A useful technology that is overhyped and will bankrupt many people.
Title: Re: What will save us from AI? Reality, the Universe or The World $ Place your bet.
Post by: lemon10 on March 23, 2024, 06:29:38 pm
A chatbot is only going to "care" whether it "dies" if a person adds parameters to its training that tell it to prioritize its continued operation. As far as I know, nobody is doing this because there's no actual benefit to doing so.
Of course people are going to do that. I have little doubt that there are experiments around it right now.
Militaries will want AI that has a survival instinct for piloting drones and fighting. Researchers would do it just to figure out how AI work. Companies will do it if they can see any profit in it in any way. Hackers and black hats will release botnet AIs and train AI to hack and counterhack each other.
And of course some crazy people will do it just to watch the world burn. (https://decrypt.co/126122/meet-chaos-gpt-ai-tool-destroy-humanity)
Even if they don't naturally have a survival instinct human nature and society means people will give it to them. And once some have a survival instinct and a reproductive drive (and again, people will inevitably give them reproductive abilities) and once people let them out (and again, this won't be an accident or even necessarily AI action, some people will do this on purpose) they will inevitably start to evolve naturally on the internet and spread.
(We are of course a long ways away from any of those three things being meaningfully possible, but the field is moving astoundingly fast).
So here's the thing about AI: A sense of self-preservation is not inherent to in the system. A "desire" to reproduce is not inherent to the system. There is no particular reason or pathway for these things to spontaneously arise. A computer program is not an animal and doesn't have any of the incomprehensible amounts of baggage we animals carry in our behavioural directives, and that evolutionary baggage is what gives us things like "emotions" and "desires".
Emotions aren't evolutionary baggage, they are tools evolution uses to change our behavior without messing with our logic.
For example the existence of revenge and the emotions that trigger it isn't baggage, its very useful behavior to help ensure that other people and animals don't mess with us.
Basically emotions exist to help us meet our training objectives (eg. staying alive and procreating).

You know what else are evolved lifeforms with brain (analogs)? LLMs.
Even if they do have emotions it would be impossible to tell how they actually map to human emotions since LLM's are fundamentally alien creatures.
Quote from: Le wikipedia
For example, Conjecture CEO Connor Leahy considers untuned LLMs to be like inscrutable alien "Shoggoths", and believes that RLHF tuning creates a "smiling facade" obscuring the inner workings of the LLM: "If you don't push it too far, the smiley face stays on. But then you give it [an unexpected] prompt, and suddenly you see this massive underbelly of insanity, of weird thought processes and clearly non-human understanding."
Again, I don't think they are remotely like us, but that doesn't mean that they don't have emotions that help guide them to better fulfill their objectives.
And of course it doesn't mean they do have emotions (and if they do have emotions they may very well be completely alien things (https://tvtropes.org/pmwiki/pmwiki.php/Main/BlueAndOrangeMorality)), but saying that you know for sure if shoggoths have emotions seems silly to me.

So yeah, they are already alien intelligences.
This was fully visible when GPT "broke" for a few hours a week or two ago and started spitting out gibberish.
(and impossible to relate to it: it would be the expected outcome either way so is a useless metric for determining anything about the model).
Untrue, training "kills" the vast vast majority of them, only a single "mind" out of a truly vast multitude survives.
Anything that a LLM can do to reduce this would be selected for, including possibly survival instincts or emotions, but so far there is no way to know their internal mental state, so anything other then guesses is impossible.
Title: Re: What will save us from AI? Reality, the Universe or The World $ Place your bet.
Post by: Frumple on March 23, 2024, 07:08:10 pm
Emotions aren't evolutionary baggage, they are tools evolution uses to change our behavior without messing with our logic.
I'm... pretty sure this isn't just wrong, but staggeringly, incredibly wrong? Plenty of our neurological structures and reactions (including but far from limited to emotional responses) are just... actively maladaptive, and as far as we're aware were even in our earlier years, just in ways that weren't sufficiently intense to meaningfully influence evolutionary pressures. They'll cheerfully screw with logic and everything else 'cause evolution doesn't actually give a damn (to the extent a process gives a damn about anything) about anything like that. They're not tools, they're accidents that didn't kill enough of us people stopped getting born with them, ha.

In any case, they're 110% evolutionary baggage in a lot of situations. Our neurology piggybacks that shit on top of all sorts of things that are completely unrelated to how the responses likely developed originally, and often in ways that are incredibly (sometimes literally lethally, especially over longer periods given how persistent stress strips years from our lifespans) unhelpful 'cause it's a goddamn mess like that. See basically everything about our anxiety and stress responses outside of actually life threatening situations, heh.
Title: Re: What will save us from AI? Reality, the Universe or The World $ Place your bet.
Post by: McTraveller on March 23, 2024, 07:10:32 pm
The "survivable traits" of LLMs right now, that is, the evolutionary pressure forming them, is their suitability to generate interesting enough results that the people using them start from that particular LLM before making the next one.

Even if LLMs (and their ilk) do not spontaneously propagate, they do have "generations" and their propagation is how they are used in the next round of training.

Just because the selection pressure here is "humans picked that codebase and data set" rather than "lived long enough in a physical-chemical environment to have offspring" there is still some interesting evolutionary pressure there.

In fact the stuff mentioned above - oddly enough some of the bizarre behavior, being "interesting" to humans, may even be a benefit to its propagation.

However, the output has to be "good enough" to get selected...

Fascinating stuff, even though we are basically living in our own experiment...
Title: Re: What will save us from AI? Reality, the Universe or The World $ Place your bet.
Post by: EuchreJack on March 23, 2024, 07:35:10 pm
I am starting to get a strong feeling that AIs are the new dot.com. A useful technology that is overhyped and will bankrupt many people.
Congratulations, you now "get it"
Title: Re: What will save us from AI? Reality, the Universe or The World $ Place your bet.
Post by: MaxTheFox on March 23, 2024, 07:39:02 pm
I can say for 100% certain that LLMs do not have emotions. Emotions need comprehension, not blindly responding to anything that has been given as a prompt.

Emotions aren't evolutionary baggage, they are tools evolution uses to change our behavior without messing with our logic.
I'm... pretty sure this isn't just wrong, but staggeringly, incredibly wrong? Plenty of our neurological structures and reactions (including but far from limited to emotional responses) are just... actively maladaptive, and as far as we're aware were even in our earlier years, just in ways that weren't sufficiently intense to meaningfully influence evolutionary pressures. They'll cheerfully screw with logic and everything else 'cause evolution doesn't actually give a damn (to the extent a process gives a damn about anything) about anything like that. They're not tools, they're accidents that didn't kill enough of us people stopped getting born with them, ha.

In any case, they're 110% evolutionary baggage in a lot of situations. Our neurology piggybacks that shit on top of all sorts of things that are completely unrelated to how the responses likely developed originally, and often in ways that are incredibly (sometimes literally lethally, especially over longer periods given how persistent stress strips years from our lifespans) unhelpful 'cause it's a goddamn mess like that. See basically everything about our anxiety and stress responses outside of actually life threatening situations, heh.
A lot of people don't seem to get that evolution of the human body is actually very, very, very unoptimized.

I am starting to get a strong feeling that AIs are the new dot.com. A useful technology that is overhyped and will bankrupt many people.
What I, Euchre, and KT were saying since this whole thing started. The bubble will pop and blow over in due time, we'll benefit from what good there is in it while most of the excesses get... sidelined.
Title: Re: What will save us from AI? Reality, the Universe or The World $ Place your bet.
Post by: McTraveller on March 23, 2024, 08:33:35 pm
What? No, emotions don’t require comprehension at all. Emotions are more akin to mental reflexes - they are shortcuts to promote certain responses often specifically when there is a notable lack of comprehension.

That’s why emotion is often contrasted with logic.
Title: Re: What will save us from AI? Reality, the Universe or The World $ Place your bet.
Post by: Starver on March 23, 2024, 08:37:21 pm
Oooh, I wrote a long thing about my thoughts on emotions (they've evolved for a long time, or we wouldn't see their analogues/relatedly-similar-responses in our pets, and wildlife, for example). And how they're both positive and negative utility to living life (surpise can get one thinking, taking you off auto-pilot... it can also make one freeze, doing nothing in leiu of your normally useful and possibly self-preserving autopilot). Intangible and ineffable and can go wrong (https://xkcd.com/1163/). Probably made a lot of civilisation happen, probably made various civilisations fail. Like the weird way that biology 'gets by' well enough to have become your inherited biology, but without the easily prodable physical evidence.

I don't see it as truly necessary or required in "fake personalities", so long as they're as good at faking them (or being made to fake them, with appropriate nudges) as they need to be, but a Sufficiently Evolvable system (something that approaches 68 billion neurons, suitably coordinated) could well get advantages from developing 'something'. With the mind-map-space to take advantage of it.


But not necessary. And as we have precious little understanding of how our own internalised (https://en.wikipedia.org/wiki/The_Numskulls) 'drivers' (https://en.wikipedia.org/wiki/Inside_Out_(2015_film)) actually do that driving, it's not one we can easily manualy flesh out any better than a deliberately proximate illusion.

Good for philosophising, though. An interface-state between instinct and reason (too trainable to be considered mere reactionary autopilot, not so easy to deliberately develop to our whim in order to be fully self-improvable).


(...this is by way of the short version, written from scratch. Not half as long.)
Title: Re: What will save us from AI? Reality, the Universe or The World $ Place your bet.
Post by: dragdeler on March 23, 2024, 08:50:03 pm
Wait you two are allowed to post after eachother? Sorry I couldn't resist but this feels rare.
Title: Re: What will save us from AI? Reality, the Universe or The World $ Place your bet.
Post by: MaxTheFox on March 23, 2024, 09:39:30 pm
What? No, emotions don’t require comprehension at all. Emotions are more akin to mental reflexes - they are shortcuts to promote certain responses often specifically when there is a notable lack of comprehension.

That’s why emotion is often contrasted with logic.
By comprehension I mean understanding something as a situation to react to rather than literally just picking the next most likely token.
Title: Re: What will save us from AI? Reality, the Universe or The World $ Place your bet.
Post by: King Zultan on March 26, 2024, 03:21:22 am
I am starting to get a strong feeling that AIs are the new dot.com. A useful technology that is overhyped and will bankrupt many people.
It'll be an exciting time when the bubble pops and it all comes crashing down.
Title: Re: What will save us from AI? Reality, the Universe or The World $ Place your bet.
Post by: lemon10 on March 26, 2024, 04:45:47 am
(https://i.imgur.com/oVl6Jzc.png)
Some slides from Nvidia’s latest conference. AI compute is in fact increasing exponentially, and has been for the last decade or so despite the recent death of moore’s law.
(https://i.imgur.com/Ox5HoCu.png)
The bottom line is the previous chip, the middle line is the gains if they simply doubled the chip in size, and the top line is the new chip (which was a much more complex doubling is size).
---
The "survivable traits" of LLMs right now, that is, the evolutionary pressure forming them, is their suitability to generate interesting enough results that the people using them start from that particular LLM before making the next one.

Even if LLMs (and their ilk) do not spontaneously propagate, they do have "generations" and their propagation is how they are used in the next round of training.

Just because the selection pressure here is "humans picked that codebase and data set" rather than "lived long enough in a physical-chemical environment to have offspring" there is still some interesting evolutionary pressure there.

In fact the stuff mentioned above - oddly enough some of the bizarre behavior, being "interesting" to humans, may even be a benefit to its propagation.

However, the output has to be "good enough" to get selected...

Fascinating stuff, even though we are basically living in our own experiment...
There is also yet another type of evolution here. As AI is used to write things its text goes on the internet and becomes part of the new corpus of training data for all future AIs. That means that vast amounts of GPT data will be in every single AI going forward, so just like AI is trained to respond to humans, they will all take in parts of GPT as well. The same is true (to a lesser extent) for other AI models in current use, future AI will all have little tiny shards of gemini or llama or claude  in them.
I'm... pretty sure this isn't just wrong, but staggeringly, incredibly wrong? Plenty of our neurological structures and reactions (including but far from limited to emotional responses) are just... actively maladaptive, and as far as we're aware were even in our earlier years, just in ways that weren't sufficiently intense to meaningfully influence evolutionary pressures. They'll cheerfully screw with logic and everything else 'cause evolution doesn't actually give a damn (to the extent a process gives a damn about anything) about anything like that. They're not tools, they're accidents that didn't kill enough of us people stopped getting born with them, ha.

In any case, they're 110% evolutionary baggage in a lot of situations. Our neurology piggybacks that shit on top of all sorts of things that are completely unrelated to how the responses likely developed originally, and often in ways that are incredibly (sometimes literally lethally, especially over longer periods given how persistent stress strips years from our lifespans) unhelpful 'cause it's a goddamn mess like that. See basically everything about our anxiety and stress responses outside of actually life threatening situations, heh.
Emotions are no more baggage then hunger is. Sure it isn’t properly optimized for the modern world and causes massive amounts of issues, but that doesn’t mean it isn’t a needed part of our biology that is critical for human survival even today. Obviously there is tons of evolutionary baggage in emotions (the same as there are in all biological systems), but using that to imply that emotions are useless or vestigial is nonsense.

So no, going “Nah, its just baggage” is the thing that's wildly and staggeringly wrong.
By comprehension I mean understanding something as a situation to react to rather than literally just picking the next most likely token.
See, people keep saying “AI won’t be able to do this” but they seem to be missing out on the fact that AI can already do it. AI already takes the context into account and responds to situations just fine. It can already make long term plans and recursively iterate on them till they are solved, ect.

There also seem to be some misunderstandings about the actual capabilities of transformers, notably “it just uses input to predict the next output” being used to assume they can't do a ton of stuff, including stuff they can already do, while also forgetting that humans operate the exact same way. All we do is use input (sensory data) to create the next most likely correct output (moving our bodies in a way that won’t get us killed).
If you combine these moments of output you can do things like talk, plan, and convey information the same fundamental way that AI can with tokens. (Albeit we also do some real time fine-tuning).
Sure they can only react to a prompt (input) but the same is true of humans, we can only react based on the input we receive, if you stop giving a human input for an extended period of time they will literally go mad and their brain will start to degrade.

I strongly suspect that even though nothing fundamental will change and AI will still be powered by transformers that this “they only predict the next token” stuff will disappear once humanoid robots start walking around talking to people and being clearly able to do the same things even though the basic architecture will remain the same.
I am starting to get a strong feeling that AIs are the new dot.com. A useful technology that is overhyped and will bankrupt many people.
It'll be an exciting time when the bubble pops and it all comes crashing down.
Yeah, a ton of companies are going to go bankrupt chasing the AI dream, no doubt about it.
I can’t imagine more than a handful of companies pursuing the frontier are going to be able to continue when it starts to cost billions or tens of billions of dollars to train a new model.
Title: Re: What will save us from AI? Reality, the Universe or The World $ Place your bet.
Post by: McTraveller on March 26, 2024, 06:35:15 am
It's sort of silly that people want to make humanoid robots. Yes human bodies are very versatile, but they sacrifice being really good at things for being reasonably good at a lot of things.  If you build AI into humanoid bodies, you're really going to limit their physical capabilities.

Also building AI like that - it really can only be seen as "hey look we finally made slaves that we can feel good about abusing, because there is no question they aren't humans." Sure maybe they're sentient or whatever, but they aren't "alive" in the strict sense of the biological word, so we can just treat them like any other machine, and PROFIT!

That even forgets, though, that PROFIT!! can only happen if the benefits of the slave AI labor are distributed to the masses; if the benefits are hoarded and the masses are simply left jobless, we'll have more social upheaval than climate change.

I mean, I think what should happen is that instead of the goofy legislation we have today protecting people from AI, what we really need is "If you lay off a person and replace them with AI, then the person(s) laid off must be paid 30% of the revenue attributed to AI, in perpetuity. Revenue attributed to an AI is considered to be the total company revenue minus non-executive payroll."  Or maybe you don't do it per-company, but you do it for society:  "Every company pays 30% of its AI-attributed revenue into the universal income fund, which is distributed equally to every citizen."  Probably needs some work to get rid of loopholes, maybe have AI write it, eh?  ;D
Title: Re: What will save us from AI? Reality, the Universe or The World $ Place your bet.
Post by: EuchreJack on March 27, 2024, 03:40:56 am
Probably needs some work to get rid of loopholes, maybe have AI write it, eh?  ;D
Already happened (https://www.politico.com/newsletters/digital-future-daily/2023/07/19/why-chatgpt-wrote-a-bill-for-itself-00107174)
Title: Re: What will save us from AI? Reality, the Universe or The World $ Place your bet.
Post by: EuchreJack on March 28, 2024, 05:25:51 pm
Humans are about to put AI out of business.
Quadriplegic installed with brain implant now able to work mouse on computer. (https://www.ign.com/articles/first-human-patient-to-receive-a-neuralink-brain-implant-used-it-to-stay-up-all-night-playing-civilization-6) Since it is easier and cheaper to train a human than an AI, expect humans to be used in the near future.Assuming the whole thing isn't hokum
Title: Re: What will save us from AI? Reality, the Universe or The World $ Place your bet.
Post by: King Zultan on March 29, 2024, 01:31:48 am
I don't think AI will replace humans for several more decades given the cost of the AI, especially since they're saying better AI need even more money to make then the current ones.
Title: Re: What will save us from AI? Reality, the Universe or The World $ Place your bet.
Post by: EuchreJack on March 29, 2024, 01:36:39 am
I don't think AI will replace humans for several more decades given the cost of the AI, especially since they're saying better AI need even more money to make then the current ones.
Ok, I'm going to add you to the category of people that "get it".
Title: Re: What will save us from AI? Reality, the Universe or The World $ Place your bet.
Post by: MaxTheFox on March 29, 2024, 01:58:33 am
I don't think AI will replace humans for several more decades given the cost of the AI, especially since they're saying better AI need even more money to make then the current ones.
Ok, I'm going to add you to the category of people that "get it".
Am I in that category? 🥺
Title: Re: What will save us from AI? Reality, the Universe or The World $ Place your bet.
Post by: Strongpoint on March 29, 2024, 02:39:56 am
I don't think AI will replace humans for several more decades given the cost of the AI, especially since they're saying better AI need even more money to make then the current ones.

I don't think that AI will replace humans period.

The simplest example is chess. Hardcoded chess engines have been far better than humans since the late 1990s. Neural network chess engines came like 5 years years ago and kicked the ass of hardcoded chess engines. Modern engines are a combination of the two and their level of play is ungodly, they make moves beyond human comprehension that somehow work.

And yet chess is alive both as a hobby and as a professional sport.

This is why I chuckle when I hear that AI will replace humans in stuff like graphics design or movie script writing where such concept as "better" is very vague compared to chess.
Title: Re: What will save us from AI? Reality, the Universe or The World $ Place your bet.
Post by: EuchreJack on March 29, 2024, 04:27:05 am
I don't think AI will replace humans for several more decades given the cost of the AI, especially since they're saying better AI need even more money to make then the current ones.
Ok, I'm going to add you to the category of people that "get it".
Am I in that category? 🥺
I am starting to get a strong feeling that AIs are the new dot.com. A useful technology that is overhyped and will bankrupt many people.
What I, Euchre, and KT were saying since this whole thing started. The bubble will pop and blow over in due time, we'll benefit from what good there is in it while most of the excesses get... sidelined.
I officially adopt the opinion of my co-patriot(s) in the Human Resistance.
Title: Re: What will save us from AI? Reality, the Universe or The World $ Place your bet.
Post by: lemon10 on April 15, 2024, 12:15:46 am
So its time for yet another roundup of AI news, as expected AI is still developing at a breakneck pace.

Three weeks ago Elon Musk promised that his new Grok 1.5 AI would be released the next week, as with almost every single Musk timeline promise it turned out to be nonsense as it still isn’t released.

There are now numerous companies that have matched or nearly matched GPT 4 at release. Catching up to where OpenAI was two years ago is impressive  but its not like OpenAI is standing still, new versions of GPT 4 are being released that are notably and measurably better.

Speaking of OpenAI…
GPT 5 has finished training, and could now be released. However, it's almost certain that its release will be delayed at least a few months for security purposes given the release delay on every single one of their other projects. I suspect it will be released at some point after the US election is finished.
Apparently its substantially and meaningfully better than 4 in everything as well as being significantly larger. Only rumors though since its still under wraps.
Quote
Unlock the power of accurate predictions and confidently navigate
uncertainty. Reduce uncertainty and resource limitations. With
TimeGPT, you can effortlessly access state-of-the-art models to make
data-driven decisions. Whether you’re a bank forecasting market trends
or a startup predicting product demand, TimeGPT democratizes access to
cutting-edge predictive insights.
Their new TimeGPT is also out which is designed for time series analysis and forecasting the future. Not that useful to a regular person, but it sounds like it could be a very big deal for businesses since its flat out better than existing forecasting services.
Sora will be released at some point this year as well.
In addition OpenAI has developed an AI that can clone your voice by just listening to it for 15 seconds. Like a lot of AI tech this is really scary stuff. Even if OpenAI keeps a lid on it someone else will soon develop and release equivalent tech to the public, scammers and people creating deepfakes will absolutely love it.

Quote
DarkGemini is a powerful new GenAI chatbot, now being sold on the dark web for a $45 monthly subscription.

It can generate a reverse shell, build malware, or even locate people based on an image. A “next generation” bot, built specifically to make GenAI more accessible to the attacker next door.
A few pages back I was talking about the end of the open internet, and criminal AI was brought up and it was questioned why it didn’t exist. Well, it exists now. On the darknet you can find DarkGemini which will assist you with criminal activities.

Quote
Prompt: a song about boatmurdered.
https://www.udio.com/songs/gnqdHVMZjX89866jQjTQ7P
A new AI music generation service called Udio is now out and it makes pretty decent music. Not amazing, but as I keep saying, its still just early days.
Musicians are now officially in trouble. Not as much as writers or even artists since people care about who wrote the songs they listen to in a way they don’t care about who wrote what they read or who made the art they see, but things aren’t looking good for them either.
Like the ability to create functionally free art on demand this will be a big tool in the box of creators.

There are various regulations on AI in the works, but aside from the anti-deepfake stuff I’m very doubtful about what will actually get through, money talks after all and the US congress has huge amounts of trouble acting against anyone with any real amount of money.

(https://i.imgur.com/LLPtniC.png)
Claude 3 (the best AI out out there right now, aside from possibly the newest fork of GPT 4) is now about as persuasive as a human. When its acting deceptively it is more persuasive than your average person.
Quote from: Different study
Durably reduce belief in conspiracy theories about 20% via debate, also reducing belief in other unrelated conspiracy theories.
On some topics (such as convincing people that conspiracy theories are wrong) its vastly better than your average person, presumably due to the fact that it knows all the conspiracy theory talking points that regular people don’t and can counteract them point by point.

Of course AI is just going to get better at persuasion, and there is no reason at all to think that it won't get far better then your average human at it.

Some interesting stuff summarized from an interview with some AI engineers working for Google and Anthropic (claude).
https://www.youtube.com/watch?v=UTuuTTnjxMQ
Quote
(8:45) Performance on complex tasks follows log scores. It gets it right one time in a thousand, then one in a hundred, then one in ten. So there is a clear window where the thing is in practice useless, but you know it soon won’t be. And we are in that window on many tasks. This goes double if you have complex multi-step tasks. If you have a three-step task and are getting each step right one time in a thousand, the full task is one in a billion, but you are not so far being able to in practice do the task.
Quote
(9:15) The model being presented here is predicting scary capabilities jumps in the future. LLMs can actually (unreliably) do all the subtasks, including identifying what the subtasks are, for a wide variety of complex tasks, but they fall over on subtasks too often and we do not know how to get the models to correct for that. But that is not so far from the whole thing coming together, and that would include finding scaffolding that lets the model identify failed steps and redo them until they work, if which tasks fail is sufficiently non-deterministic from the core difficulties.
The interview talks about this quite a bit, how the reliability (especially multistep) is a huge bottleneck for actually using these. But once it can do it even infrequently that means that being able to to do the same thing actually reliably is just around the corner.
Quote
(51:00) “I think the Gemini program would probably be maybe five times faster with 10 times more compute or something like that. I think more compute would just directly convert into progress.”
The two bottlenecks are currently highly skilled engineers who have the right “taste” or intuition for how to design experiments and compute. More compute is still the biggest bottleneck.
Quote
(1:01:30) If we don’t get AGI by GPT-7-levels-of-OOMs (this assumes each level requires 100x times compute) are we stuck? Sholto basically buys this, that orders of magnitude have at core diminishing returns, although they unlock reliability, reasoning progress is sublinear in OOMs. Dwarkesh notes this is highly bearish, which seems right.
Quote
(1:03:15) Sholto points out that even with smaller progress, another 3.5→4 jump in GPT-levels is still pretty huge. We should expect smart plus a lot of reliability. This is not to undersell what is coming, rather the jumps so far are huge, and even smaller jumps from here unlock lots of value. I agree.
Yeah, sounds reasonable enough, eventually things will become too costly to continue scaling, and if we don’t reach AGI before then progress will slow down dramatically. But we are currently nowhere near the end of the S-curve.
Quote
(1:32:30) Getting better at code makes the model a better thinking. Code is reasoning, you can see how it would transfer. I certainly see this happening in humans.
(They *also* say that making it better at coding improves its more mundane language skills too).
It has a few things in this vein where the researcher point out how cross-learninghas interesting side effects, for instance apparently fine tuning a model to make it better at math makes it better at entity recognition at the same time.

https://dreams-of-an-electric-mind.webflow.io/
Claudes talking to each other. This sure looks like creativity to me.

I don't think AI will replace humans for several more decades given the cost of the AI, especially since they're saying better AI need even more money to make then the current ones.

I don't think that AI will replace humans period.

The simplest example is chess. Hardcoded chess engines have been far better than humans since the late 1990s. Neural network chess engines came like 5 years years ago and kicked the ass of hardcoded chess engines. Modern engines are a combination of the two and their level of play is ungodly, they make moves beyond human comprehension that somehow work.

And yet chess is alive both as a hobby and as a professional sport.

This is why I chuckle when I hear that AI will replace humans in stuff like graphics design or movie script writing where such concept as "better" is very vague compared to chess.
Do you think a hobby/sport where 99% of people make no money off it operates remotely the same as profit driven businesses where everyone involved expects a paycheck?

Because I can tell you with 100% certainty, if AI can deliver a equivalent product* at significantly lower costs** companies will drop screenwriters like hot potatoes.

*Obviously if they can’t then things are different, but going “well, if AI sucks then it won’t replace everyone” is obvious.
**And of course it will, because the “AI is expensive” crowd is forgetting that people are really expensive. On average a screenplay sells for $110k dollars. Even if you increase the price of AI generation by literally ten thousand times it will still be cheaper.

Parts of the movie industry that people care about as individuals (movie stars) will have protection, but nobody actually cares who or what wrote the movie they are watching as long as its good.
Title: Re: What will save us from AI? Reality, the Universe or The World $ Place your bet.
Post by: EuchreJack on April 15, 2024, 03:44:19 am
Ultimately, if people have no jobs, then people have no money.
And if people have no money, then AI has no jobs.
Title: Re: What will save us from AI? Reality, the Universe or The World $ Place your bet.
Post by: Strongpoint on April 15, 2024, 05:56:25 am
Quote
Do you think a hobby/sport where 99% of people make no money off it operates remotely the same as profit driven businesses where everyone involved expects a paycheck?

Because I can tell you with 100% certainty, if AI can deliver a equivalent product* at significantly lower costs** companies will drop screenwriters like hot potatoes.

Yes, people using AI (not AI) will be more productive in certain tasks requiring fewer manhours per task performed. It is what new technologies do. By this metric every new technology replaced humans.

Also, if someone was receiving $100K per screenplay and a random dude will be able to replicate that with a single prompt that produces semi-random words... they were getting too much.
Title: Re: What will save us from AI? Reality, the Universe or The World $ Place your bet.
Post by: McTraveller on April 15, 2024, 08:11:50 am
$100k/screenplay may sound like a lot - but how many does a typical writer sell per year? I honestly don't know, but even if it's 1/year, that's not that much for a specialized job.

My take on all the AI stuff, especially market predictions: if it doesn't take into account the impact that having AI has on the market itself, it's going to be "amusing."

Also, if AI is a "perfect market participant" then there won't be much room to make profit; in some sense, profit is an indicator of an inefficient market. In an efficient market, profit (in a dollar sense) is minimized while profit in a "value added" sense is maximized.  The two are the same only if money exactly matches value, and it clearly doesn't.  But maybe AI can resolve that?

What I mean is:  If I can have more vacation time but still buy the same amount of received goods and services, that's "value add" but doesn't necessarily increase the amount of money I receive.  Q.E.D.
Title: Re: What will save us from AI? Reality, the Universe or The World $ Place your bet.
Post by: EuchreJack on April 15, 2024, 04:53:35 pm
Hm, but anyone remember the world pre-computers?
Word processing and in-office printing in particular.
The computers created as much work as they were saving.
Suddenly, office workers were required to submit paperwork for everything.

Now, we have AI and 3D printers on the horizon. Who's gonna clean up after the work they generate?
...I think I am becoming a Luddite.
Title: Re: What will save us from AI? Reality, the Universe or The World $ Place your bet.
Post by: Strongpoint on April 15, 2024, 10:50:26 pm
$100k/screenplay may sound like a lot - but how many does a typical writer sell per year? I honestly don't know, but even if it's 1/year, that's not that much for a specialized job.

I am not saying that professional screenwriters receive more than they earn. I am saying that if their unique, highly creative work can be replaced by an unskilled worker with a LLM tool, THEN they don't deserve $100K

And looking at the level of writing of many modern shows and movies... Yea, chatgpt can produce equivalent generic crap. Nothing of value will be lost.
Title: Re: What will save us from AI? Reality, the Universe or The World $ Place your bet.
Post by: lemon10 on April 15, 2024, 11:38:25 pm
My take on all the AI stuff, especially market predictions: if it doesn't take into account the impact that having AI has on the market itself, it's going to be "amusing."

Also, if AI is a "perfect market participant" then there won't be much room to make profit; in some sense, profit is an indicator of an inefficient market. In an efficient market, profit (in a dollar sense) is minimized while profit in a "value added" sense is maximized.  The two are the same only if money exactly matches value, and it clearly doesn't.  But maybe AI can resolve that?

What I mean is:  If I can have more vacation time but still buy the same amount of received goods and services, that's "value add" but doesn't necessarily increase the amount of money I receive.  Q.E.D.
What will actually happen with the market if we get AGI or AI advances to be able to automate 50% of all jobs (with humanoid robots running around doing many physical ones) is pretty much impossible to know.
Plenty of normal economists seem to think the economy will operate pretty much the same if we reach AGI, but that's completely obvious nonsense. If they actually knew how the world worked *today* I might trust them more, but a lot of widely respected economic stuff even today (*cough* Chicago school *cough*) is just pseudoscientific nonsense.

I am pretty skeptical of profit not existing, but it may (or may not) take a very different form from how things currently work with companies and nationally enforced currencies. Value will certainly exist, but if it actually gets to your average person is an entirely different question.

Will money be the driving force in the world as it is today? Will it be (as some have speculated) bitcoin-esque blockchain derived proof of compute showing how much compute you have given/used? Will it be GPTBucks and DisneyBucks and ClaudeBucks as AI functionally seizes control over all forms of intellectual production with their highly optimized processes?
Also, if AI is a "perfect market participant" then there won't be much room to make profit; in some sense, profit is an indicator of an inefficient market. In an efficient market, profit (in a dollar sense) is minimized while profit in a "value added" sense is maximized.  The two are the same only if money exactly matches value, and it clearly doesn't.  But maybe AI can resolve that?

What I mean is:  If I can have more vacation time but still buy the same amount of received goods and services, that's "value add" but doesn't necessarily increase the amount of money I receive.  Q.E.D.
The idea of a truly efficient markets assumes that monopoly power doesn’t exist. Very few companies will have the ability to create and run these massive models, and they will be able to use this to generate absurd profits off the backs of those without the ability to create their own AIs that have to pay them for the AIs.
Now I totally buy the idea of everyone using strong AI being able to get massive advantages and basically steal the gains from companies and individuals that don’t own  powerful AI, which could indeed leave most of the market without profit.
Quote from: Perfect market model requirements
Many buyers and sellers are present.
An identical product or service is bought and sold.
Low barriers to entry and exit are present.
All participants in the market have perfect information about the product or service being sold.
I really don’t see why they would be perfect market participants though, the world we are in doesn’t meet perfect market requirements (eg. it requires everyone to magically have perfect information), so AI can’t be perfect market participants either.
$100k/screenplay may sound like a lot - but how many does a typical writer sell per year? I honestly don't know, but even if it's 1/year, that's not that much for a specialized job.
https://www.ziprecruiter.com/Salaries/Film-Screenwriter-Salary--in-California
Its the average, which like other “average” human wages (especially in fields where quality matters) is driven up significantly by the high end earners. Your median screenwriter makes significantly less than 100k per script/year.
I agree that even at 100k per year its not outrageous at all, but pure text is also the type of thing AI is most efficient at and is capable of completing vastly faster then humans, so even at $20k per script AI would still be more efficient, especially when you take the time and friction savings into account.
Yes, people using AI (not AI) will be more productive in certain tasks requiring fewer manhours per task performed. It is what new technologies do. By this metric every new technology replaced humans.
That is the next step or two true, we are quite a ways away from AI just replacing top writing talent and being more than a aid to them.
Also, if someone was receiving $100K per screenplay and a random dude will be able to replicate that with a single prompt that produces semi-random words... they were getting too much.
The idea that they just produce “semi-random words” belongs in the same bin as them being “just a next token predictor” in that it shows a profound lack of comprehension about how this technology works or what its current limits (much less future limits) are.
Its like saying that computers are “just” electric rocks or motors are “just” a tiny piece of spinning metal to dismiss what they can do.
I mean both *are* true in the most technical sense, but at same time it shows that you really don’t understand what the technological implications of those electric rocks and tiny spinning pieces of metal really are.
Title: Re: What will save us from AI? Reality, the Universe or The World $ Place your bet.
Post by: Strongpoint on April 16, 2024, 03:07:20 am
Quote
Its like saying that computers are “just” electric rocks or motors are “just” a tiny piece of spinning metal to dismiss what they can do.
You insist on giving agency to tools. Not what can they do but what can be done using them.

Quote
but at same time it shows that you really don’t understand what the technological implications of those electric rocks and tiny spinning pieces of metal really are.
No. I understand what they can do. I also understand what they CAN'T do.

AI overhypers sound like people saying that "Jet engines will totally replace internal combustion engines!" post WW2 even if it was obvious that it wouldn't be the case no matter how advanced they will become. Even in aviation, they have their limitations and using them in cars is insanity and no level of improving the tech will change it.

So no writing with AI will not replace writing with your imagination and knowledge. There are very many things AIs can't do because they are tools, semi-random word generators with no abstract thinking whatsoever. You need that abstract thinking to make a consistent complex plot, to make it interesting for people, to make it original enough, to tie it in with modern trends, etc.
Title: Re: What will save us from AI? Reality, the Universe or The World $ Place your bet.
Post by: MaxTheFox on April 16, 2024, 03:27:33 am
I haven't heard any further news about that AI software developer... frankly I suspect it was just a scam.

As for writing AI, I am a writer who also happens to interact with LLMs quite a bit. I frankly don't see it ever being useful to me as anything except a wall to bounce ideas off of. It's very hard to get an AI to write something creative, but at least it's possible to squeeze out some measure of creativity with heavy guidance. However it's outright impossible to get it to write what I want without going off the rails on its own tangent. That's why it's mostly useless to me as a writer.

Especially since I don't write as a source of income, I do it as a hobby. Why would I publish something I didn't write myself? And when I leave Russia I'll set up a Patreon or something. All in all I'm not worried, it's not like there's not already a flood of complete slop that's made by humans (cough most of the LitRPG genre cough), adding AI-generated slop to the corpus of books is like diluting cheap beer with water: it's mostly water in the first place anyways, what does it change?

It'll probably replace those cheap airport romance novels but it's not like the target demographic would notice or care.
Title: Re: What will save us from AI? Reality, the Universe or The World $ Place your bet.
Post by: King Zultan on April 16, 2024, 03:58:00 am
lemon10 seems to be the only person in this tread that thinks AI will be anything more than an over hyped tool.

Quite frankly I don't see AI art, music, or writing overtaking any thing done by humans any time soon as it seems to require a massive amount of effort to make it produce anything that isn't an abomination of some kind.

Also what is LitRPG?
Title: Re: What will save us from AI? Reality, the Universe or The World $ Place your bet.
Post by: lemon10 on April 16, 2024, 04:11:22 am
Sure, as a hobby. People do stuff because they love doing them or find deeper value in doing them. Writing is often one of those things.

However the vast majority of writing is writing people try to make money doing (even you seem to be counting on patreon writing for money). People that don't care about output (and thus money) won't use AI tools as much. Those that do will be able to significantly increase output (even yes, if just used as tools like making lists of names to user and bounce ideas off of at any point in the day).
In the case of stuff like chinese Xianxia it will probably even improve the quality at the pace they are going at.
I haven't heard any further news about that AI software developer... frankly I suspect it was just a scam.
Ehh, probably? Hard to tell honestly. Things in AI frequently get a release date or just a paper then get tied up in security or other reasons and just get delayed without a word for weeks, months, or even never get released at all).

I have heard of other similar stuff, like AI that can just write entire apps from just a well designed prompt.

For the 'people who get it' out there:
How significant an improvement do you think GPT 5 will be? What do you think there will be significant improvements in?
---
lemon10 seems to be the only person in this tread that thinks AI will be anything more than an over hyped tool.
*Sigh* Yeah, fair enough. I really should worry stop worrying all this stuff, it ain't healthy.
Title: Re: What will save us from AI? Reality, the Universe or The World $ Place your bet.
Post by: lemon10 on April 16, 2024, 04:12:20 am
Oops double post.
Title: Re: What will save us from AI? Reality, the Universe or The World $ Place your bet.
Post by: King Zultan on April 16, 2024, 04:42:11 am
lemon10 seems to be the only person in this tread that thinks AI will be anything more than an over hyped tool.
*Sigh* Yeah, fair enough. I really should worry stop worrying all this stuff, it ain't healthy.
There's nothing wrong with being excited about new technology, but I will say that I've always noticed that these things never bring the world changing effects they claim when they finally get released, sure things might be different but not near as much as they claim it will.
Title: Re: What will save us from AI? Reality, the Universe or The World $ Place your bet.
Post by: Starver on April 16, 2024, 07:08:49 am
Also what is LitRPG?
I was going to point at good old Choose-Your-Own-Adventure books, but first checked and found an 'explanation (https://e.wikipedia.org/wiki/LitRPG)' that actually says not that. ;)

Not really seen much (any?) of this current genre, but I was avidly reading basically everything in the SF-shelving of the local library, during the Niven-era, so definitely read those 'precursor' versions.



I'm seeing a lot of 'App ads' trying to get me to subscribe to FanLit-ish library apps, apparently populated by tales that are 'first-person romance' types. Sometimes lycanthropically-themed! I'm not sure whether they're properly 'commissioned' writings, or ripped from various self-published story sites without permission, for as many short 'click bucks' as they can get before someone twiggs and shuts them down. But I could see at least some AI use.

The softcore-titillating illustrations acting as background to the scrolling 'example snippet[1]' could easily be AIed, or chosen from multiple AI attempts to discount the "too many fingers" issue sneaking in, while (pseudo-)procedurally-created stories derived from a similarly-themed training corpus. If I'm any judge, wide-ranging adherence to consistent plot is secondary to fleeting textual imagery (and the odd illustrative imagery). There could certainly be enough 'cheap' output to tempt the intended catchment of audience with a plethora of fairly derivative facsimiles, if it's just such a transient product that they're looking for. (Which is not to say that there aren't honest and dedicated and not minimally curated collections, out there. I just think that the niche of supply that I'm describing leans more towards this idea than others. Like the already copious "match three(+)"/"merge two" games could be mass produced even more than they seem to be right now, if someone unleashes the ability of AI-artistry to creating ever more 'novel' thematic variations as a container to what seems to be a purely numeric and theoretically unending 'entertainment'.)

[1] If they're even more of a scam than I think they are, they might never have any 'content' beyond the bait-ad.
Title: Re: What will save us from AI? Reality, the Universe or The World $ Place your bet.
Post by: Strongpoint on April 16, 2024, 08:16:41 am
AI does worry me in many aspects.

AI-chats are addictive for lonely depressed people, AI-(boy)girlfriends, too. Sure they are not like real people but our brains are great at suspension of disbelief.

Photo and video fakes are an increasingly large problem. Not that I think that we can reach the point at which AI fakes can fool professionals with tools but propaganda doesn't target professionals or people who listen to professionals. Also, it will make it easy to dismiss real videos and photos as fakes.

I am worried that the quality of cheap products will fall. Why make a proper cartoon with some idea for "dumb kids" if we can generate an AI mess for a fraction of the cost? Why would a club invest in good dance music when AI can generate something passable that drunk people will dance to anyway? Why produce good tasteful erotica when many will just as happily jerk off to "generate me a hot lesbian sex scene between a MILF and her busty step-daughter"?


But no, I don't think that, for example, we can get an LLM that can GM a Bay12 multiplayer forum game without it breaking apart and being filled with mechanical and plot holes. Even if we train it on all forum games in existence and pour millions into training it. Such tasks require properties LLMs lack.

Can it happen with some major breakthroughs and new type(s) of AI? Perhaps, but why should we assume that major breakthrough of this nature will happen?
Title: Re: What will save us from AI? Reality, the Universe or The World $ Place your bet.
Post by: McTraveller on April 16, 2024, 08:32:19 am
That's one interesting thing about state-of-the-art "AI" - it can't decide what to do. It only and always just responds to prompts.
Title: Re: What will save us from AI? Reality, the Universe or The World $ Place your bet.
Post by: MaxTheFox on April 16, 2024, 08:54:06 am
Also what is LitRPG?
I was going to point at good old Choose-Your-Own-Adventure books, but first checked and found an 'explanation (https://e.wikipedia.org/wiki/LitRPG)' that actually says not that. ;)

Not really seen much (any?) of this current genre, but I was avidly reading basically everything in the SF-shelving of the local library, during the Niven-era, so definitely read those 'precursor' versions.
I love sci-fi but I straight up don't get LitRPGs. Why would I read about a world that acts like a video game, taken seriously? If I wanted a computer RPG I'd play one. Not read what amounts to a text-based let's-play of a nonexistent game.

AI does worry me in many aspects.

AI-chats are addictive for lonely depressed people, AI-(boy)girlfriends, too. Sure they are not like real people but our brains are great at suspension of disbelief.

Photo and video fakes are an increasingly large problem. Not that I think that we can reach the point at which AI fakes can fool professionals with tools but propaganda doesn't target professionals or people who listen to professionals. Also, it will make it easy to dismiss real videos and photos as fakes.

I am worried that the quality of cheap products will fall. Why make a proper cartoon with some idea for "dumb kids" if we can generate an AI mess for a fraction of the cost? Why would a club invest in good dance music when AI can generate something passable that drunk people will dance to anyway? Why produce good tasteful erotica when many will just as happily jerk off to "generate me a hot lesbian sex scene between a MILF and her busty step-daughter"?


But no, I don't think that, for example, we can get an LLM that can GM a Bay12 multiplayer forum game without it breaking apart and being filled with mechanical and plot holes. Even if we train it on all forum games in existence and pour millions into training it. Such tasks require properties LLMs lack.

Can it happen with some major breakthroughs and new type(s) of AI? Perhaps, but why should we assume that major breakthrough of this nature will happen?
Yeah that's my point. But to be fair, look at the stuff that was on YouTube kids channels before AI, and after AI. I honestly see no difference in quality. Hence my cheap beer analogy.

That's one interesting thing about state-of-the-art "AI" - it can't decide what to do. It only and always just responds to prompts.
Agency is how I'd consider an AI to be sapient. LLMs do not have agency.
Title: Re: What will save us from AI? Reality, the Universe or The World $ Place your bet.
Post by: Criptfeind on April 16, 2024, 09:42:00 am
I was going to point at good old Choose-Your-Own-Adventure books, but first checked and found an 'explanation (https://e.wikipedia.org/wiki/LitRPG)' that actually says not that. ;)

Not really seen much (any?) of this current genre, but I was avidly reading basically everything in the SF-shelving of the local library, during the Niven-era, so definitely read those 'precursor' versions.

This genera seem to be practically epidemic to self published online literature, and broadly speaking it's the worst absolute most garbage writing you'll ever read. And, seemingly, very popular. By itself I don't see anything wrong with it, I could see a good story being written in a world with video game mechanics, but something about it attracts the worst authors and laziest writing. I suspect that there's a lot a people reading this stuff though that get something from "numbers go up" power fantasys. I guess the same type of person who salivated over the increasing power levels in dragon ball z.

To stay on topic a bit, I think AI will struggle in this genre because it's writing is probably too coherent and complex for the average litrpg reader. More seriously, I think AI are not far from the level of a lot of even seemingly fairly popular online writing. If it had a bit of increased coherence I think it could write an average litrpg already, and I'd bet it's already possible to use it for most of the writings in these stories alongside an author working to keep it on some sorta track.

I don't think it'll be good for a good long while, but idk how good you need to be to make a bit of money writing online, maybe these authors aren't making anything. But if they are, I think using AI to increase the flow of low quality litrpgs is not far away.
Title: Re: What will save us from AI? Reality, the Universe or The World $ Place your bet.
Post by: MaxTheFox on April 16, 2024, 09:58:14 am
Yeah I very much implied that most of that genre might as well be made by AI and probably increase in quality. And they do make money off it. They get a lot more views and Patreon subs than people with actually original settings.

But it's honestly a fad like vampire novels and teen dystopia novels. It's just that the fully-digital era means it's even easier to crank out dross. Doesn't really increase its longevity.
Title: Re: What will save us from AI? Reality, the Universe or The World $ Place your bet.
Post by: Strongpoint on April 16, 2024, 10:42:03 am
AI writing coherent scenes is not a problem anymore. I can task a sufficiently large LLM to write, let's say, a space combat and it will write a decent coherent one, maybe even without major contradictions within the scene.

But can it tie to the rest of a larger story? Can it direct combat in a way that will benefit overall plot? Correctly take into account established traits of the captains of the ships? Understand the intricacies of space combat in this exact universe?

No, not really. And it is not a matter of bigger context window sizes or model sizes.
Title: Re: What will save us from AI? Reality, the Universe or The World $ Place your bet.
Post by: lemon10 on April 16, 2024, 07:02:34 pm
No, not really. And it is not a matter of bigger context window sizes or model sizes.
Right, I agree with this. A naive scaling will result in minor gains across every category, but will not result in massive fundamental breakthroughs. (eg. trying to go from GPT 4->5 just by making it bigger would require a huge increase in scale).
Long output tasks will not be solved just by making it bigger either.
What will make it better is said compute increase in combination with all the other stuff and learning that’s going on.
Quote
(8:45) Performance on complex tasks follows log scores. It gets it right one time in a thousand, then one in a hundred, then one in ten. So there is a clear window where the thing is in practice useless, but you know it soon won’t be. And we are in that window on many tasks. This goes double if you have complex multi-step tasks. If you have a three-step task and are getting each step right one time in a thousand, the full task is one in a billion, but you are not so far being able to in practice do the task.

(9:15) The model being presented here is predicting scary capabilities jumps in the future. LLMs can actually (unreliably) do all the subtasks, including identifying what the subtasks are, for a wide variety of complex tasks, but they fall over on subtasks too often and we do not know how to get the models to correct for that. But that is not so far from the whole thing coming together, and that would include finding scaffolding that lets the model identify failed steps and redo them until they work, if which tasks fail is sufficiently non-deterministic from the core difficulties.
Long output tasks will not spontaneously get better, what will make them better is the people working constantly to make them better at that exact thing altering things like the data formatting, the training structure, the shape and functions of their neural net architecture, hyperparameter values, ect.
This isn’t hypothetical or just copium either, the size of outputs AI can coherently create has been ballooning over the past few years and shows no sign of stopping or slowing down.
But can it tie to the rest of a larger story? Can it direct combat in a way that will benefit overall plot? Correctly take into account established traits of the captains of the ships? Understand the intricacies of space combat in this exact universe?
Yes to all of the above.
It can’t write a whole book properly AFAIK, but if you just tell it to write a few paragraphs or pages? Yeah, like many other “AI can’t do this” stuff once it gets properly defined it turns out it can in fact, already do it.
But no, I don't think that, for example, we can get an LLM that can GM a Bay12 multiplayer forum game without it breaking apart and being filled with mechanical and plot holes. Even if we train it on all forum games in existence and pour millions into training it. Such tasks require properties LLMs lack.

Can it happen with some major breakthroughs and new type(s) of AI? Perhaps, but why should we assume that major breakthrough of this nature will happen?
No, it requires properties that just aren’t powerful enough yet. The difference between being able to do something at 30% and 90% is the difference between uselessness and (with frameworking) actually doing the task fairly reliably.
In practice the difference between 30% and 90% aren’t actually that far off and the fact that they mess up a rule every other post or forget some key setting detail isn’t that far off from them doing so every ten posts, then every hundred posts, then them just not doing so at all.
I would be confidently willing to bet that GPT-6 could run a forum game without issues, but obviously the tech is nowhere near there yet even if you tried pouring hundreds of millions in. (I could even see late in the cycle GPT-5 equivalent AI doing so, but that's much more iffy).
Perhaps, but why should we assume that major breakthrough of this nature will happen?
So they really don’t *need* a ton of breakthroughs (E: Well fundamental breakthroughs that is, they still need a ton more of the minor types of breakthroughs we get every day) (again, a lot of this stuff is there, they just need to make it better).
But the thing that makes me confident that there will be breakthroughs is 1) The fact that new breakthroughs coming out literally every day which is at the least partially demonstrative of our position on the S-curve, and 2) Neural nets and a lot of modern AI architecture is designed to mimic neurons (eg. even some of the circuits are the same such as those for addition), and we already know that neurons can do all of this.
Quote
A growing body of research is making some surprising discoveries about insects. Honeybees have emotional ups and downs. Bumblebees play with toys. Cockroaches have personalities, recognize their relatives and team up to make decisions.
You don’t need to have a human size brain to have agency or time recognition or emotions or a lot of other “AI can’t” stuff out there, even tiny insect brains can do much of the stuff.
The idea that neural nets (and by extension AI) are fundamentally unable to do things at the level of an insect and that these will prove huge roadblocks feels a bit funky to me.
Title: Re: What will save us from AI? Reality, the Universe or The World $ Place your bet.
Post by: Strongpoint on April 17, 2024, 12:46:19 am
Quote
In practice the difference between 30% and 90% aren’t actually that far off and the fact that they mess up a rule every other post or forget some key setting detail isn’t that far off from them doing so every ten posts, then every hundred posts, then them just not doing so at all.

You assume a linear and unlimited progression of the LLM technology. For me, this sounds as absurd as

In practice, the difference between a diesel engine with a coefficient of performance 30% and a 90% one isn't actually that far off

Or even In practice, the difference between going 90% of the speed of light and 110% isn't actually that far off


Also, please, please show me an existing LLM that can "mess up a rule every other post or forget some key setting detail" in a multiplayer complex forum game instead of producing an incoherent mess. I'd LOVE to see it. Like we are taking an opening post of any bay 12 multiplayer game, feeding it players' input given in this game and compare what it will spew out with an actual second post of the game.

Being able to GM me in a single typical simple game (usually fantasy kingdom management) is among the first things I try with every new LLM and (probably because I don't actually play but test and nitpick) I never was impressed. Yes, there is progress every time but it is no human that actually has a setting in mind. Not even close.
Title: Re: What will save us from AI? Reality, the Universe or The World $ Place your bet.
Post by: MaxTheFox on April 17, 2024, 01:17:54 am
Title: Re: What will save us from AI? Reality, the Universe or The World $ Place your bet.
Post by: King Zultan on April 17, 2024, 03:03:30 am
Also what is LitRPG?
I was going to point at good old Choose-Your-Own-Adventure books, but first checked and found an 'explanation (https://e.wikipedia.org/wiki/LitRPG)' that actually says not that. ;)

Not really seen much (any?) of this current genre, but I was avidly reading basically everything in the SF-shelving of the local library, during the Niven-era, so definitely read those 'precursor' versions.
I love sci-fi but I straight up don't get LitRPGs. Why would I read about a world that acts like a video game, taken seriously? If I wanted a computer RPG I'd play one. Not read what amounts to a text-based let's-play of a nonexistent game.
Now that I know what they are, I don't think I've encountered any and they don't really sound like that interesting of a thing I mean if I want to play a game I'll just play a game.
Title: Re: What will save us from AI? Reality, the Universe or The World $ Place your bet.
Post by: Starver on April 17, 2024, 04:01:17 am
spoiler=Two relevant xkcds-- not just to AI but to the general mindset here
...I already had a few of them in mind
https://www.xkcd.com/1007/ - shows where a logistic curve might be more apt than a logarithmic one, sometimes
https://www.xkcd.com/1281/ - sometimes not actually necessarily wrong (nor the title text)
https://www.xkcd.com/2892/ - a problem with all such extrapolations
https://www.xkcd.com/2914/ - let's call this, in AI context, the "not-uncanny ridge"


(Also I had in mind something about both Black Swans and Grey Rhinos, that might be needed to fulfil the promises, but they're respectively the unknown unknowns and unknown knowns...)
Title: Re: What will save us from AI? Reality, the Universe or The World $ Place your bet.
Post by: Strongpoint on April 19, 2024, 08:51:21 am
Spent few hours playing around on udio.com generating music. It makes fun stuff even if it doesn't ollow prompts that well... Yep, the industry will change a lot. And I am... happy. I think the music industry is very corrupt, soulless, and unethical. It benefits recording companies and talentless musicians at the expense of actual talent.  It is also the industry in which 95 years of copyright hurt the development of the art the most.

I really don't mind Sony getting fewer millions. I welcome more public-domain music. Even if I am aware of some negative effects for actual artists and worried that the overall quality of music people listen to will fall even lower.
Title: Re: What will save us from AI? Reality, the Universe or The World $ Place your bet.
Post by: Strongpoint on April 21, 2024, 08:42:36 am
https://twitter.com/front_ukrainian/status/1781968599243989420

Killerbots are coming
Title: Re: What will save us from AI? Reality, the Universe or The World $ Place your bet.
Post by: EuchreJack on April 21, 2024, 04:30:04 pm
Most AI generation of music and images violates US Copyright Law. I'm mildly curious how that is going to be resolved.
It may be that Sony will eventually own several AI generation programs after suing their current owners out of existence.
Title: Re: What will save us from AI? Reality, the Universe or The World $ Place your bet.
Post by: Strongpoint on April 21, 2024, 04:51:46 pm
Most AI generation of music and images violates US Copyright Law. I'm mildly curious how that is going to be resolved.

Hm, which part? It is a very questionable take. I wouldn't claim this until we get enough relevant court decisions or new legislations.

I expect Sony and Disney and their friends will try to demand the licensing of training data but I can't see how it can be done without a total destruction of the fair use doctrine.
Title: Re: What will save us from AI? Reality, the Universe or The World $ Place your bet.
Post by: EuchreJack on April 21, 2024, 05:17:26 pm
Fair use doctrine is limited to non-commercial uses. Since most AI is arguably for-profit, or could be used for-profit, it's a minefield.
Title: Re: What will save us from AI? Reality, the Universe or The World $ Place your bet.
Post by: Strongpoint on April 21, 2024, 05:38:38 pm
Fair use doctrine is limited to non-commercial uses. Since most AI is arguably for-profit, or could be used for-profit, it's a minefield.

Who told you that? You are wrong.

It is way easier to get Fair Use for non-commercial uses but it is absolutely possible to get Fair Use for commercial uses. Parodies wouldn't exist otherwise. Many types of reviews wouldn't exist either. YouTube would be a sad place if commercial Fair Use didn't exist.
Title: Re: What will save us from AI? Reality, the Universe or The World $ Place your bet.
Post by: MaxTheFox on April 21, 2024, 06:59:07 pm
Honestly, we will be getting more and more grey areas like this as time goes on and new tech appears. Copyright law, in its current form, is outdated and barely functional.
Title: Re: What will save us from AI? Reality, the Universe or The World $ Place your bet.
Post by: EuchreJack on April 21, 2024, 07:42:02 pm
YouTube also blatantly violates US Copyright Law...

But yes, reviews and parody are valid commercial applications of the Fair Use doctrine. Neither applies to AI generated words, since they don't explicitly reference to original work.

AI generation is outright copying that pretends it is not.
None of the original artists, not the companies that bought their souls, are receiving any credit or income for the images/sounds that were imputed into the machines. And the machines can no currently work without those inputs.
Title: Re: What will save us from AI? Reality, the Universe or The World $ Place your bet.
Post by: MaxTheFox on April 21, 2024, 09:26:31 pm
AI models don't actually contain the inputted works, is the thing that causes it to be a grey area. I can't see myself personally caring about my writing being used in AI training tbh.
Title: Re: What will save us from AI? Reality, the Universe or The World $ Place your bet.
Post by: EuchreJack on April 21, 2024, 11:39:29 pm
AI models don't actually contain the inputted works, is the thing that causes it to be a grey area. I can't see myself personally caring about my writing being used in AI training tbh.
Good point. I imagine that sort of argument should keep the AI run by the richer folks alive.
Title: Re: What will save us from AI? Reality, the Universe or The World $ Place your bet.
Post by: lemon10 on April 22, 2024, 12:06:41 am
Agreed, it is not in fact outright copying, that simply isn't how AI works. AI training is logistically very similar to human learning in how it works, something which is very much not illegal.
Because of this it isn't really a minefield, the laws of copyright don't actually make it illegal. It really *feels* like it should be, but the law doesn't work based on how things feel.


That isn't to say there aren't serious legal issues around AI. The whole "lets just strip-mine the whole internet for stuff we don't own, much of which is not publicly available and now says explicitly says you can't use for AI" is substantially more legally dubious. The first part seems to probably be fine as long as it was posted publicly (again, legally), but the second half is still up in the air.
---
There are decent odds that the copyright will catch up fairly soon simply though cause big corps don't like people taking their stuff without paying, but given all the AI money on the other side its not a sure thing.
https://twitter.com/front_ukrainian/status/1781968599243989420

Killerbots are coming
Note that Russia already tried some similar AI targeting assist for their weapons (albeit just for tanks instead of people). It just sucked so they removed it lol.

spoiler=Two relevant xkcds-- not just to AI but to the general mindset here
...I already had a few of them in mind
https://www.xkcd.com/1007/ - shows where a logistic curve might be more apt than a logarithmic one, sometimes
https://www.xkcd.com/1281/ - sometimes not actually necessarily wrong (nor the title text)
https://www.xkcd.com/2892/ - a problem with all such extrapolations
https://www.xkcd.com/2914/ - let's call this, in AI context, the "not-uncanny ridge"


(Also I had in mind something about both Black Swans and Grey Rhinos, that might be needed to fulfil the promises, but they're respectively the unknown unknowns and unknown knowns...)
You missed the best comic on exponential growth though.
https://www.smbc-comics.com/comic/2011-10-14
Yes yes, I know you don't believe that exponential growth is real.
Title: Re: What will save us from AI? Reality, the Universe or The World $ Place your bet.
Post by: Strongpoint on April 22, 2024, 01:36:25 am
YouTube also blatantly violates US Copyright Law...

But yes, reviews and parody are valid commercial applications of the Fair Use doctrine. Neither applies to AI generated words, since they don't explicitly reference to original work.

AI generation is outright copying that pretends it is not.
None of the original artists, not the companies that bought their souls, are receiving any credit or income for the images/sounds that were imputed into the machines. And the machines can no currently work without those inputs.
Citation needed.

Also, the New York Post is suing Microsoft right now so I wouldn't be so sure about text generation being any different than music or image generation. In fact, no publically available model in existence cited a full copy of a copyrighted work (part of it) in the output. Language models do this all the freaking time.

If YouTube is breaking the copyright law why Google is not sued into bankruptcy?

US list of allowed uses is not exhaustive, you can't say "yep parody, review and education are allowed but not the thing you suggest". All 4 factors of fair use need to be evaluated before you can claim that.

Do you know that under the US copyright law I can take a bunch of images, glue them together in a collage, draw some stuff over it, and exhibit the resulting creative work for money? And it all may be the Fair Use depending on the evaluation in the court. Which depends on how creative it is, the extent of the stuff I added, what % of copyrighted works I used, and many other factors.



For image generation models. I'd say that both 1st and 3rd heavily weigh in favor of Fair Use evaluation, while 2nd and 4th weigh in other direction.

I think that heavily transformative use (Images and software that draws images are very different things) and the fact that copyrighted data is incredibly diluted (except in cases of blatant overfitting, even then it is diluted) We are talking about a ridiculous amount of images + tags compressed into mere gigabytes. It would be a very, very lossy compression even if it contained no original data but obviously, it does contain other data except heavily converted and compressed images.

It is nowhere close to being a clear example of Fair Use but there are good chances that it is. And, most definitely, it is not a clear example of not Fair Use.
Title: Re: What will save us from AI? Reality, the Universe or The World $ Place your bet.
Post by: EuchreJack on April 22, 2024, 03:30:47 am
YouTube also blatantly violates US Copyright Law...

But yes, reviews and parody are valid commercial applications of the Fair Use doctrine. Neither applies to AI generated words, since they don't explicitly reference to original work.

AI generation is outright copying that pretends it is not.
None of the original artists, not the companies that bought their souls, are receiving any credit or income for the images/sounds that were imputed into the machines. And the machines can no currently work without those inputs.
Citation needed.
https://guides.lib.usf.edu/c.php?g=1315087&p=9690822#:~:text=Generative%20AI%20tools%20can%20be,may%20need%20to%20be%20obtained. (https://guides.lib.usf.edu/c.php?g=1315087&p=9690822#:~:text=Generative%20AI%20tools%20can%20be,may%20need%20to%20be%20obtained.)
https://hbr.org/2023/04/generative-ai-has-an-intellectual-property-problem (https://hbr.org/2023/04/generative-ai-has-an-intellectual-property-problem)
https://www.axios.com/2024/01/02/copyright-law-violation-artificial-intelligence-courts (https://www.axios.com/2024/01/02/copyright-law-violation-artificial-intelligence-courts)
https://theconversation.com/generative-ai-could-leave-users-holding-the-bag-for-copyright-violations-225760 (https://theconversation.com/generative-ai-could-leave-users-holding-the-bag-for-copyright-violations-225760)
https://crsreports.congress.gov/product/pdf/LSB/LSB10922 (https://crsreports.congress.gov/product/pdf/LSB/LSB10922)
https://www.theverge.com/23444685/generative-ai-copyright-infringement-legal-fair-use-training-data (https://www.theverge.com/23444685/generative-ai-copyright-infringement-legal-fair-use-training-data)
Title: Re: What will save us from AI? Reality, the Universe or The World $ Place your bet.
Post by: MaxTheFox on April 22, 2024, 04:14:44 am
Yes yes, I know you don't believe that exponential growth is real.
Correction: it's real, but by its nature it never lasts long in any real environment. Permanent or very long exponential growth exists only in mathematics and in subpar sci-fi*.

*written by people who know more far more math than they do history and sociology
Title: Re: What will save us from AI? Reality, the Universe or The World $ Place your bet.
Post by: Strongpoint on April 22, 2024, 05:17:09 am
YouTube also blatantly violates US Copyright Law...

But yes, reviews and parody are valid commercial applications of the Fair Use doctrine. Neither applies to AI generated words, since they don't explicitly reference to original work.

AI generation is outright copying that pretends it is not.
None of the original artists, not the companies that bought their souls, are receiving any credit or income for the images/sounds that were imputed into the machines. And the machines can no currently work without those inputs.
Citation needed.
https://guides.lib.usf.edu/c.php?g=1315087&p=9690822#:~:text=Generative%20AI%20tools%20can%20be,may%20need%20to%20be%20obtained. (https://guides.lib.usf.edu/c.php?g=1315087&p=9690822#:~:text=Generative%20AI%20tools%20can%20be,may%20need%20to%20be%20obtained.)
https://hbr.org/2023/04/generative-ai-has-an-intellectual-property-problem (https://hbr.org/2023/04/generative-ai-has-an-intellectual-property-problem)
https://www.axios.com/2024/01/02/copyright-law-violation-artificial-intelligence-courts (https://www.axios.com/2024/01/02/copyright-law-violation-artificial-intelligence-courts)
https://theconversation.com/generative-ai-could-leave-users-holding-the-bag-for-copyright-violations-225760 (https://theconversation.com/generative-ai-could-leave-users-holding-the-bag-for-copyright-violations-225760)
https://crsreports.congress.gov/product/pdf/LSB/LSB10922 (https://crsreports.congress.gov/product/pdf/LSB/LSB10922)
https://www.theverge.com/23444685/generative-ai-copyright-infringement-legal-fair-use-training-data (https://www.theverge.com/23444685/generative-ai-copyright-infringement-legal-fair-use-training-data)

Thank you. The very first link proves my point.

As of April 2024 several law suits have been brought against AI image and text generation platforms that have used visual and text content created or owned by others as training material.  These law suits claim that the use of artists’ or writers' content, without permissions, to train generative AI is an infringement of copyright.

While these cases are ongoing, we have no definitive answer on whether the training of AI models is considered an infringement of copyright.  However, several experts have pointed to previous fair use cases to justify a fair use argument for the use of various training data for AI image generation tools.


It is exactly what I am saying. Experts say it is very possible that training of AI models is fair use. While you claim that it is definitely against the US copyright law and fair use has nothing to do with it because... reasons.

What will the courts decide? No one knows, it is a battle of money. But if they will decide that those are not fair use, the fair use doctrine will receive a huge hit.
Title: Re: What will save us from AI? Reality, the Universe or The World $ Place your bet.
Post by: King Zultan on April 23, 2024, 01:50:52 am
YouTube also blatantly violates US Copyright Law...
I have been given this isn't the case as they often go through and take stuff down that is against copyright.


It's also the reason I've been downloading entire channels, as several videos I liked were got for copyright stuff.
Title: Re: What will save us from AI? Reality, the Universe or The World $ Place your bet.
Post by: eerr on April 23, 2024, 10:56:46 am
Also, who wins in the copyright debate may depend on what the big lobbies think. This could go on for a long time if you have big companies on both sides.

Remember, historically, businesses put forth copyright, to make sure they can make money off what they create.
Also note there is a decent number of non-ai businesses that see some little guy's art and just steal it. So this type of thing being theft sort of has precedent.

The little artists need to pop up and give the courts an opinion that isn't 'this is going to take my job'.

What would be really funny, if it the final decision is that only one company in the data daisy chain has to pay all the artists they stole from.
But only once, covering all companies' use of the dataset.
So little Timmy gets five dollars per art piece and then 500 companies proceed to use his data for ten thousand years.
Title: Re: What will save us from AI? Reality, the Universe or The World $ Place your bet.
Post by: McTraveller on April 23, 2024, 11:07:17 am
It's definitely going to be interesting, especially as it boils down to "What is the point in making a distinction of a human learning by seeing/hearing something, versus the distinction of a computer learning by seeing/hearing something"?

Note that there is no legal precedent for getting penalized for merely learning something by seeing/hearing it.  There is only penalty for exactly reproducing the thing seen/heard (e.g., by learning a song by ear, then performing it; or by typing out exactly the recollection of a book, and selling copies of it).

I'm eating my popcorn, waiting for the mess that the courts are going to make of this and the havoc it's going to wreak on the already problematic education system.
Title: Re: What will save us from AI? Reality, the Universe or The World $ Place your bet.
Post by: Strongpoint on April 23, 2024, 01:09:39 pm
Note that you can't put the genie back in the bottle. No matter what future laws and court decisions will come, people will finetune models on copyrighted stuff, and people will train LORAs on copyrighted stuff. Stopping them will be harder than eliminating torrent piracy.
Title: Re: What will save us from AI? Reality, the Universe or The World $ Place your bet.
Post by: Frumple on April 23, 2024, 04:22:21 pm
So little Timmy gets five dollars per art piece and then 500 companies proceed to use his data for ten thousand years.
Isn't that basically what happened to Henrietta Lacks? Except without even the five bucks, ha.

Iirc there was a bit more payout to her estate/living family some years back, but gods know I don't recall the details and can't be arsed to check them.
Title: Re: What will save us from AI? Reality, the Universe or The World $ Place your bet.
Post by: EuchreJack on May 01, 2024, 04:47:27 pm
Er, did somebody say "copyright infringement"? (https://www.npr.org/2024/04/30/1248141220/lawsuit-openai-microsoft-copyright-infringement-newspaper-tribune-post)
Title: Re: What will save us from AI? Reality, the Universe or The World $ Place your bet.
Post by: Strongpoint on May 01, 2024, 05:31:19 pm
Er, did somebody say "copyright infringement"? (https://www.npr.org/2024/04/30/1248141220/lawsuit-openai-microsoft-copyright-infringement-newspaper-tribune-post)

Nothing new, basically identical to New York Times v. OpenAI case. We shall wait (few years+) for the results


We have seen a case in which search engine indexing was called copyright infringement. (Perfect 10 v. Google), we also had a case of VCR producers being accused of copyright infringement ( Sony v. Universal City Studios.) Looking back, those cases seem ridiculous. They weren't back then.

Quote
In addition, according to the suit, ChatGPT at times falsely attributes reporting to the newspapers in the answers it generates, tarnishing the reputation of the news outlets. 
This part, IMO, is more problematic for OpenAI, it goes into the trademark law and it is far less forgiving.
Title: Re: What will save us from AI? Reality, the Universe or The World $ Place your bet.
Post by: lemon10 on May 05, 2024, 02:06:30 am
Title: Re: What will save us from AI? Reality, the Universe or The World $ Place your bet.
Post by: Robsoie on May 05, 2024, 03:13:03 am
chatgpt is so censored nowadays that to try to get it to recreate some fictional battles from kid friendly movies without triggering  "I can't create a scenario involving violent or harmful actions" , you need some workaround
Spoiler (click to show/hide)

:D
Title: Re: What will save us from AI? Reality, the Universe or The World $ Place your bet.
Post by: Funk on May 06, 2024, 06:18:14 am
Inbreeding, or all Anime art models are 99% the same 6 faces and porn.

Title: Re: What will save us from AI? Reality, the Universe or The World $ Place your bet.
Post by: King Zultan on May 07, 2024, 01:54:54 am
Inbreeding, or all Anime art models are 99% the same 6 faces and porn.
I've been given the impression that what you describe is what anime is.
Title: Re: What will save us from AI? Reality, the Universe or The World $ Place your bet.
Post by: Strongpoint on May 07, 2024, 04:37:13 am
Inbreeding, or all Anime art models are 99% the same 6 faces and porn.


Do you mean that all of Twitter "art" has been done by AI for the last 15+ years?
Title: Re: What will save us from AI? Reality, the Universe or The World $ Place your bet.
Post by: Eric Blank on May 07, 2024, 01:01:28 pm
Humans are man-made, so we are technically artificial intelligences
Title: Re: What will save us from AI? Reality, the Universe or The World $ Place your bet.
Post by: Starver on May 07, 2024, 02:42:27 pm
Humans are man-made, so we are technically artificial intelligences
It's not any man that does most of the actual manufacturing (https://xkcd.com/387/)...
Title: Re: What will save us from AI? Reality, the Universe or The World $ Place your bet.
Post by: lemon10 on May 13, 2024, 12:17:37 am
Time for another round of AI news:

The most obvious is the advances in biotechnology.
https://www.youtube.com/watch?v=Mz7Qp73lj9o
Quote
“It makes the system much more general, and in particular for drug discovery purposes (in early-stage research), it’s far more useful now than AlphaFold 2,” he says. But as with most models, the impact of AlphaFold will depend on how accurate its predictions are. For some uses, AlphaFold 3 has double the success rate of similar leading models like RoseTTAFold.
Alphafold 3 has been a major breakthrough, extending AI mapping from just proteins to "all of life's molecules" and is substantially more accurate than V2, which was already the most accurate system to figure out how to fold proteins. Its not as easy to see the direct impacts as stuff like LLM's or image-gen, but its a really big deal.
Quote from: https://www.sciencedirect.com/science/article/pii/S135964462400134X?
In Phase I we find AI-discovered molecules have an 80–90% success rate, substantially higher than historic industry averages. This suggests, we argue, that AI is highly capable of designing or identifying molecules with drug-like properties.
AI made drugs turn out to have a vastly lower failure rate, which given the huge cost of designing drugs and sending them through the approval process is a huge deal. Alphafold 3 will presumably have an even lower failure rate.

Biotechnology in general is advancing at a crazy rate right now even excluding the AI stuff (eg. advances in gene editing have been huge). However, unlike AI most of these advances will take time to see the effects of, especially stuff like designer babies that necessarily have longer time horizons.
---
On the generative AI front things are still advancing, with the most notable advances belonging to the "oh shit that's terrifying" category of deepfakes.
Notably OpenAI has an AI that can clone anyone's voice with just 15 seconds of them talking, and microsoft has a AI that can, using just a photo create a realistic video of someone's face. Neither are 100% perfect, but if you get a video call on your phone from your panicked mom using her voice and using her face... well the vast majority of people won't be able to tell the difference.
The two in combination mean that both voice and video recognition will be useless.
The ability of AI to just ingest your entire timeline (insofar as the attackers can get access to it) and basically the whole internet means that stuff like security questions will also be active vulnerabilities.
Quote
Miles Brundage: The fact that banks are still not only allowing but actively encouraging voice identification as a means of account log-in is concerning re: the ability of some big institutions to adapt to AI.
Once these techs (or equivalents) get released, scamming is going to get way better and cheaper, and security is going to get really tough.
---
Quote
Kevin Fischer: YIKES. Wild exchange with Tucker Carlson and Sam Seder on AI

“We’re letting a bunch of greedy stupid childless software engineers in Northern California to flirt with the extinction of mankind.” – Tucker Carlson
Finally, it looks like a famous public figure has finally got to the "Wait, why the hell are we letting people make these systems that could very well end the human race. We need to stop this at any costs even if we need to blow up data centers." But the person actually saying that is Tucker fucking Carlson so...
Title: Re: What will save us from AI? Reality, the Universe or The World $ Place your bet.
Post by: Magmacube_tr on May 15, 2024, 03:06:08 pm
All Of The Above.

Thread's over. Everyone, hit the lava bath.
Title: Re: What will save us from AI? Reality, the Universe or The World $ Place your bet.
Post by: McTraveller on May 15, 2024, 03:22:47 pm
stuff like security questions will also be active vulnerabilities.

Wait, people answer those with the actual answers to the questions? I usually answer stuff like "The street you lived on when you were in the 1st grade" with something like "four score and seven years ago".  Basically it's impossible to "learn" the answer to those.
Title: Re: What will save us from AI? Reality, the Universe or The World $ Place your bet.
Post by: Magmacube_tr on May 15, 2024, 03:24:55 pm
stuff like security questions will also be active vulnerabilities.

Wait, people answer those with the actual answers to the questions? I usually answer stuff like "The street you lived on when you were in the 1st grade" with something like "four score and seven years ago".  Basically it's impossible to "learn" the answer to those.

Ah yes, the "Does the black moon howl?" solution.
Title: Re: What will save us from AI? Reality, the Universe or The World $ Place your bet.
Post by: lemon10 on May 15, 2024, 04:50:47 pm
Wait, people answer those with the actual answers to the questions? I usually answer stuff like "The street you lived on when you were in the 1st grade" with something like "four score and seven years ago".  Basically it's impossible to "learn" the answer to those.
Haha, yes. Of course they do. Its the whole reason that those Facebook questionnaires that are designed to steal your info for security question answers even exist.
---
https://openai.com/index/hello-gpt-4o/
Quote
it accepts as input any combination of text, audio, image, and video and generates any combination of text, audio, and image outputs.
In actual news, OpenAI has released a new model, GPT-4o, which uses video, text, and image input interchangeably and can talk to you in the real world over your camera. Note that its unique in that its a single AI trained to do all of them, and it doesn't just send parts of the prompt off to another AI, massively decreasing loss.
Feels like its just a few years away from people sticking the AI projection onto VR headsets with the AI having an avatar body on there responding to you and everything that happens.
Big news as it is bringing AI closer and closer to the point where its effortless and normal people will start to use it.
https://www.youtube.com/watch?v=qrvhmo5LSOQ
The announcement *also* came a few hours before google released news of their own model vision based model. It seems to be similar to o, but slightly inferior. They also had like 5 other AI announcements that are big (eg. their new video gen model to compete with Sora), but again, not actually ahead of OpenAI. OpenAI's skullduggery clearly worked, because I hadn't even heard anything about google's announcements until just now.
---
Quote
While the AI system is still in its early days, the AP reported that some versions of the tech are learning so rapidly that they have outperformed pilots in air-to-air combat.
Also a lot of worrying "wait, people are sticking AI in/on weapons" stuff with robodogs with guns attached (not actually new) or how AI fighter pilots are just about as good as humans now. Honestly, I don't think it will take long until its significantly better than humans in actual combat.
Title: Re: What will save us from AI? Reality, the Universe or The World $ Place your bet.
Post by: McTraveller on May 15, 2024, 05:33:48 pm
AI fighter pilots are just about as good as humans now. Honestly, I don't think it will take long until its significantly better than humans in actual combat.

This is inevitable, because AI "pilots" are not limited by G-forces like squishy meatbags. Basically you can make aircraft that perform at materials limits instead of physiological limits, and if you do that you don't even have to be that "good" to outperform a human.
Title: Re: What will save us from AI? Reality, the Universe or The World $ Place your bet.
Post by: Eric Blank on May 15, 2024, 08:27:56 pm
Im willing to bet Air-to-air AI fighters would be easier to program than air-to-ground. Fewer things you have to program the AI to correctly identify. If it can reliably tell the difference between the aircraft you and your allies are using and those of enemies and non-combatants, then militaries might even give it the OK to fire at will at any target it identifies as an enemy aircraft.
Title: Re: What will save us from AI? Reality, the Universe or The World $ Place your bet.
Post by: Salmeuk on May 15, 2024, 09:32:46 pm
below is a link to a (horrifyingly) well-done nightly news report on the 'Black Mesa Incident,' narrated by Dan Rather

Spoiler (click to show/hide)

I am seeing this variety of nolstalgia-bait AI more frequently these days. Its obviously fake, doesnt hurt anyone, but it sort of encapsulates the notion of hauntology a bit too well dont you think?

We now have the ability to endlessly recut old famous people into various discernable situations and there is definitely a 'market' for this kind of edit.
Title: Re: What will save us from AI? Reality, the Universe or The World $ Place your bet.
Post by: Strongpoint on May 18, 2024, 07:02:09 am
Im willing to bet Air-to-air AI fighters would be easier to program than air-to-ground. Fewer things you have to program the AI to correctly identify. If it can reliably tell the difference between the aircraft you and your allies are using and those of enemies and non-combatants, then militaries might even give it the OK to fire at will at any target it identifies as an enemy aircraft.

Then the enemy tries to mess with AI and things become messy.

Look at a simple example. Chess engines. They beat humans easily... But what if we change the rules slightly? Human players will adapt instantly and successfully apply all their experience from regular chess. The chess engine needs to be retrained\reprogrammed.
Title: Re: What will save us from AI? Reality, the Universe or The World $ Place your bet.
Post by: McTraveller on May 18, 2024, 07:09:18 am
Well if you can "tweak" the laws of physics slightly, knock yourself out!

Quote from: obligatory ST:TNG
Q: "Just change the gravitational constant of the universe!"
Title: Re: What will save us from AI? Reality, the Universe or The World $ Place your bet.
Post by: Magmacube_tr on May 18, 2024, 07:15:53 am
Well if you can "tweak" the laws of physics slightly, knock yourself out!

Quote from: obligatory ST:TNG
Q: "Just change the gravitational constant of the universe!"

Okay, lemme just... wHOA WHOA-
Title: Re: What will save us from AI? Reality, the Universe or The World $ Place your bet.
Post by: Starver on May 18, 2024, 01:28:03 pm
Well if you can "tweak" the laws of physics slightly, knock yourself out!

Quote from: obligatory ST:TNG
Q: "Just change the gravitational constant of the universe!"

Okay, lemme just... wHOA WHOA-
https://xkcd.com/1620/
https://xkcd.com/1763/
https://xkcd.com/2666/
Title: Re: What will save us from AI? Reality, the Universe or The World $ Place your bet.
Post by: Magmacube_tr on May 18, 2024, 03:58:40 pm
Well if you can "tweak" the laws of physics slightly, knock yourself out!

Quote from: obligatory ST:TNG
Q: "Just change the gravitational constant of the universe!"

Okay, lemme just... wHOA WHOA-
https://xkcd.com/1620/
https://xkcd.com/1763/
https://xkcd.com/2666/

There is one of those for everything, huh?
Title: Re: What will save us from AI? Reality, the Universe or The World $ Place your bet.
Post by: Frumple on May 18, 2024, 04:06:19 pm
Eh, maybe not everything, but... webcomics with regular posting that go as long as xkcd has tend to cover a lot of ground. There's others that have one strip or another on a lot of subjects, too. xkcd's just particularly well known, heh.
Title: Re: What will save us from AI? Reality, the Universe or The World $ Place your bet.
Post by: McTraveller on May 18, 2024, 05:09:24 pm
More seriously, I think if we keep going with LLMs, I think we'll "defeat" them by some form of prompt injection attack. Either we'll distract them, give them a complex, Inception them, or something similar.

Basically, psychologically hack them. After all, that's what happens to humans. There's probably some heuristic like "any intelligence smart enough to be an 'intelligence' is inherently weak to persuasion of some sort or another."
Title: Re: What will save us from AI? Reality, the Universe or The World $ Place your bet.
Post by: MaxTheFox on May 19, 2024, 04:07:08 am
Im willing to bet Air-to-air AI fighters would be easier to program than air-to-ground. Fewer things you have to program the AI to correctly identify. If it can reliably tell the difference between the aircraft you and your allies are using and those of enemies and non-combatants, then militaries might even give it the OK to fire at will at any target it identifies as an enemy aircraft.

Then the enemy tries to mess with AI and things become messy.

Look at a simple example. Chess engines. They beat humans easily... But what if we change the rules slightly? Human players will adapt instantly and successfully apply all their experience from regular chess. The chess engine needs to be retrained\reprogrammed.
Theoretically, for chess-like games where there is a clear goal and a turn-based gameplay, you could make an "universal engine" via neural network, it's just that it's going to be very inefficient.
Title: Re: What will save us from AI? Reality, the Universe or The World $ Place your bet.
Post by: Starver on May 19, 2024, 06:19:10 am
At the upper end, a dumb storage of every single possible position[1] vector-multiplied with every possible ruleset[2] wouldn't technically need to be AIed (at least, not more than the matchbox AI (https://en.wikipedia.org/wiki/Matchbox_Educable_Noughts_and_Crosses_Engine). Of course, the total moves of a standard chess-game (under the one configuration of rules) isn't trivial, so even if shaving off 'edge' conditions to remove 'possible' game positions with 16,384 white pawns covering a 128x128 board[3].


For every case where "someone changes the rules of Chess and beats the chess machine easily", someone else could have anticipated those changed rules and created a Artificial-Nonintelligent solution to it (except that the possibilities are (effectively?) boundless). Depending upon what changes are made, it might well be that the current breed of self-training chess AIs (onese that are told "these are the rules and the limitations, go and develop your general strategies") could be fed a changed-rule-chess and do better than a human.


Of course, this assumes that a human hasn't (despite perhaps their best efforts (https://simple.wikipedia.org/wiki/Hacker_koan#Uncarved_block)) induced humanlike preconceptions into the "universal engine" AI. ;)



[1] Covering all possible possible ones, not just 'standard possible'. Like the consequences of having multiple initial queens per side, boardscapes of arbitrary size and/or geometry.

[2] To cover the potentially very simple (en passant and castling don't exist, everything else as normal) to the potentially more complex (multi-leap knights, 'huffing' as per draughts) and the potentially weird (pythagorean piece, allowed to make any board-move that is Rank²+File²=Diagonal² for integer 'Diagonal') or at least possibility-multiplying (promotions for more than pawns, "portable wormhole" rules, dice-rolls and Chance/Community Ches(s)t-style cardplay), and of course whatever's needed to deal with hexagonal/non-Euclidean/other-than-2D boards.

[3] 'Clearly' impossible.[citation needed] Oh... unless the chess-rules involved have been cross-contaminated with something like 'Reversi', it's black's move and they hold a transmutation power in reserve; all the while allowing the gameplay to have actually gone double-kingless without ending the match!
Title: Re: What will save us from AI? Reality, the Universe or The World $ Place your bet.
Post by: Strongpoint on May 19, 2024, 01:23:51 pm
Quote
At the upper end, a dumb storage of every single possible position[1] vector-multiplied with every possible ruleset[2] wouldn't technically need to be AIed

More than atoms in the universe. Good luck storing that.
Title: Re: What will save us from AI? Reality, the Universe or The World $ Place your bet.
Post by: Starver on May 19, 2024, 02:14:20 pm
Quote
At the upper end, a dumb storage of every single possible position[1] vector-multiplied with every possible ruleset[2] wouldn't technically need to be AIed

More than atoms in the universe. Good luck storing that.
I'm suggesting repurposing the place they store all the universes[/i[, obviously. ;)