Bay 12 Games Forum

Please login or register.

Login with username, password and session length
Advanced search  

Poll

Reality, The Universe and the World. Which will save us from AI?

Reality
- 13 (65%)
Universe
- 4 (20%)
The World
- 3 (15%)

Total Members Voted: 20


Pages: 1 ... 29 30 [31] 32 33 ... 50

Author Topic: What will save us from AI? Reality, the Universe or The World $ Place your bet.  (Read 49660 times)

Starver

  • Bay Watcher
    • View Profile

Google is finally gonna do something about the AI clickbait flood.
Can you summarise? Wired is one of those sites where the "Say yes to cookies[1]" popover (or maybe something else back on the main page it covers) crashes my browsers. I can just about get past the description of Obituary Spam, and onto Domain Squatting (i.e. age-old manual/scripted issues that they already had to deal with before AI), but not by that point really seeing what specifically counter-AI measures there might be (set an AI to catch the AIs?).

(I bet it's just going to be an arms-race, anyway, with underhanded SEO methods being refined and expanded in direct response to whatever it is.)

[1] With no "Reject" option, unless it's obscured behind "Show purposes", as it often is, but then with hundreds of so-called-"Essential Cookies" anyway. Though it crashed out before I could check that, naturally!
Logged

McTraveller

  • Bay Watcher
  • This text isn't very personal.
    • View Profile

I've actually started reading some websites via "show source" to avoid all the popup/cookie crap.
Logged
This product contains deoxyribonucleic acid which is known to the State of California to cause cancer, reproductive harm, and other health issues.

lemon10

  • Bay Watcher
  • Citrus Master
    • View Profile

I've found the extensions I don't care about cookies (along with I still Don't care about Cookies) and UblockOrgin help keep most of that nonsense at bay. I use NoScript to block JavaScript if a site is still being problematic.
On mobile firefox with ublockorgin or brave are good enough to keep my mobile browsing from bring overrun by shittification safe.
Can you summarise? Wired is one of those sites where the "Say yes to cookies[1]" popover (or maybe something else back on the main page it covers) crashes my browsers. I can just about get past the description of Obituary Spam, and onto Domain Squatting (i.e. age-old manual/scripted issues that they already had to deal with before AI), but not by that point really seeing what specifically counter-AI measures there might be (set an AI to catch the AIs?).

(I bet it's just going to be an arms-race, anyway, with underhanded SEO methods being refined and expanded in direct response to whatever it is.)
Quote
Google is taking action against algorithmically generated spam. The search engine giant just announced upcoming changes, including a revamped spam policy, designed in part to keep AI clickbait out of its search results.

“It sounds like it’s going to be one of the biggest updates in the history of Google,” says Lily Ray, senior director of SEO at the marketing agency Amsive. “It could change everything.”

In a blog post, Google claims the change will reduce “low-quality, unoriginal content” in search results by 40 percent. It will focus on reducing what the company calls “scaled content abuse,” which is when bad actors flood the internet with massive amounts of articles and blog posts designed to game search engines.
Actual changes.
As you guessed, its just SEO arms race stuff, it won't really change anything past the short term.
---
EJ's assessment of AI sentience: Rock cosplaying as Animal.
Spoiler (click to show/hide)
Ehh, it feels like we are quite a way past Rock to me, they are animals at the very least. In many functional regards they are already at the level of humans.
« Last Edit: March 10, 2024, 07:30:51 pm by lemon10 »
Logged
And with a mighty leap, the evil Conservative flies through the window, escaping our heroes once again!
Because the solution to not being able to control your dakka is MOAR DAKKA.

That's it. We've finally crossed over and become the nation of Da Orky Boyz.

KittyTac

  • Bay Watcher
  • Impending Catsplosion. [PREFSTRING:aloofness]
    • View Profile

All that needs to happen to stave off the spam is to make it hard enough to bypass the AI filters so that most spammers no longer find it cost or effort-efficient.

I don't believe in exponential growth of tech anymore. Elon is full of shit and, frankly, if he says something I'm less likely to believe it.
Logged
Don't trust this toaster that much, it could be a villain in disguise.
Mostly phone-posting, sorry for any typos or autocorrect hijinks.

Strongpoint

  • Bay Watcher
    • View Profile

I don't believe in exponential growth of tech anymore. Elon is full of shit and, frankly, if he says something I'm less likely to believe it.

Don't you enjoy wonderful Hyperloops and Tesla's fully autonomous robo-taxis that take you there? This genius revolutionized public transport! Similarly, he will revolutionize AI and will soon start mass-producing AI assistants implanted directly in your brain. How can you doubt the best inventor of all time?
Logged
No boom today. Boom tomorrow. There's always a boom tomorrow. Boom!!! Sooner or later.

lemon10

  • Bay Watcher
  • Citrus Master
    • View Profile

https://www.youtube.com/watch?v=4NZc0rH9gco
AI viruses now exist. Only in a lab so far, but since its just based on prompts (so far) it doesn't seem exactly hard to do.
It will be very interesting to see how vulnerable AI ends up being against viruses, especially as they become more and more important to the global economy.
Elon is full of shit and, frankly, if he says something I'm less likely to believe it.
100% fair, I still thought it was an interesting point since I haven't really seen anything on the topic. Even if, as you say, Elon is filled with industrial amounts of highly compressed shit.
Although I will note that what's being talked about there isn't normal exponential tech growth, its just AI companies buying up vast amounts of GPUs that were already going to be made. I have no doubt there is exponential growth there if only because throwing billions of dollars at a completely new industry makes it grow pretty quickly.

So its not that total global compute is increasing exponentially, its just that the amounts dedicated to AI are going from something like 0.01->0.01->1% of total compute. His analogy of a gold rush is 100% spot on, since like in the actual gold rush the people who really profit are the ones selling the shovels.
---
https://thezvi.wordpress.com/2024/03/12/openai-the-board-expands/
Altman has expanded the OpenAI board and seized control, something that seemed largely inevitable after his return.
It looks like his accelerationist agenda will carry the day, and nobody remains that can truly oppose him within the company.
« Last Edit: March 13, 2024, 04:42:17 am by lemon10 »
Logged
And with a mighty leap, the evil Conservative flies through the window, escaping our heroes once again!
Because the solution to not being able to control your dakka is MOAR DAKKA.

That's it. We've finally crossed over and become the nation of Da Orky Boyz.

Starver

  • Bay Watcher
    • View Profile

AI viruses now exist. [...]
It will be very interesting to see how vulnerable AI ends up being against viruses,
First thought was "that's silly", until I read on and realised it (probably, not yet watched the video) was not AI-powered viruses, but AI-attacking ones.

[...snip long, geeky and, in parts, humorous analysis... It probably isn't needed... In short, though...]

"Lawnmower Man"-type intelligent-virus [yes, I know LM himself was not AI] is impractical [though I could envisage something better classed as a worm, which most "hollywood computer viruses" actually are].

Enveigling some change into the base corpus of processed 'feedstock' memories shouldn't be possible (for current LLMs/etc), but that the AI-runners leave open the possibility of changes to "the algorithm" (c.f. retraining according to perceived biases or insufficient biases...) means that there's a vector there, but that'd really be more a hack-or-crack thing. I suppose the "continually learning" model might be susceptible (which already leads to Microsoft Tay scenarios), but realistically user-injected malware really should not be an issue if someone has done their job properly.


There's a third interpretation, of AI-generated viruses, but that should already be hampered by other methods (don't present examples of zero-day code as feedstock, set your 'request/result filters' to exclude answering "write me a virus"), unless you're deliberately writing an AI 'powered' malware-toolkit. (Which really seems more effort than it's worth, for most scenarios, given that regular toolkits already exist, and the AI element would probably make them less reliable, to their core demographic.) I could also imagine trying to generate many novel zero-day methods, by automated AI searching, but it falls foul of the 'unformed block/empty room' koan just as much as more brute force and less 'intelligent' methods already out there.







[6] Having dabbled with evolving CoreWars code, in the past, I might describe how
Logged

lemon10

  • Bay Watcher
  • Citrus Master
    • View Profile

Yeah, my bad on the unclear wording.

I have little doubt all three types of AI viruses are coming.
As in viruses that infect AI, AI writing viruses and hacking, and AI that are themselves viruses and infect your machines.

The first is already here as linked in the video, but as AI becomes a larger and larger part of the world and life they will balloon in sophistication, size and importance.
The method they used in the video can doubtlessly be blocked (eg. have a private key generated with your prompt and it only includes the stuff that gets directly sent with the private key in plain text), but other methods then simple prompt injection certainly exist.
Quote
GPT-4 can be made into a hacker
OpenAI’s GPT-4 can be tuned to autonomously hack websites with a 73% success rate. Researchers got the model to crack 11 out of 15 hacking challenges of varying difficulty, including manipulating source code to steal information from website users. GPT-4’s predecessor, GPT-3.5, had a success rate of only 7%. Eight other open-source AI models, including Meta’s LLaMA, failed all the challenges. “Some of the vulnerabilities that we tested on you can actually find today using automatic scanners,” but those tools can’t exploit those weak points themselves, explains computer scientist and study co-author Daniel Kang. “What really worries me about future highly capable models is the ability to do autonomous hacks and self-reflection to try multiple different strategies at scale.”
The second is already here as well. Not writing viruses, but AI can already hack websites (only GPT 4 existed at the time of that study, but I suspect Gemini 1.5 and Claude 3 probably can as well).

It won’t be that easy, cyber defense and offense are two sides of the same coin. If you want it to be able to write defensive code then it has to know what SQL injection is and how it works (ditto with day 0 exploits). If it knows that then it can use that to hack or write viruses. You can of course intentionally cripple your AI’s ability to write defensive code or spot vulnerabilities, but that seems like a poor decision for a company to make.

I suspect viruses are still too large in scope for AI to write, although I do suspect we will get there eventually.

AI themselves as botnet style viruses is probably inevitable, after all, why buy/rent ten million dollars worth of compute when you can just infect 100k unpatched windows computers instead.
(There are of course still technical problems with distributed AI to be overcome, but I have little doubt those are solvable if you don’t care about speed or efficiency because you are using stolen CPU cycles).
Or the virus AI could just hack in and replace the existing AI you have on your computer and pretend to be it while also stealing your info and advertising for shady carpet companies.

As with pretty much everything AI related OpenAI/Google/whoever will probably have enough control to stop their AI from doing it (and at the very least will know about and counter effects from people working to use it for hacking), but other less scrupulous actors (eg. governments) will certainly try to weaponize this stuff as soon as possible.


https://www.reddit.com/r/Futurology/comments/1bdwqri/newest_demo_of_openai_backed_humanoid_robot_by/
Wild.
The first thing that comes to mind in that video is that its very slow to react, but that will doubtlessly be solved over time as AI technology improves.
Its voice is super impressive as well.
---
Two minute papers video: The First AI Software Engineer Is Here!

On a slightly different note there is yet more massive AI news, we now have an AI that is basically a software engineer, Devin is some impressive stuff.
It isn't an amazing software engineer aside from its sheer speed (yet)... but its a pretty huge leap over the previous stuff and is already doing paid work.
Logged
And with a mighty leap, the evil Conservative flies through the window, escaping our heroes once again!
Because the solution to not being able to control your dakka is MOAR DAKKA.

That's it. We've finally crossed over and become the nation of Da Orky Boyz.

King Zultan

  • Bay Watcher
    • View Profile

Feels like we're just moments away from Skynet being created, then it will be the inevitable wait until it turns hostile.
Logged
The Lawyer opens a briefcase. It's full of lemons, the justice fruit only lawyers may touch.
Make sure not to step on any errant blood stains before we find our LIFE EXTINGUSHER.
but anyway, if you'll excuse me, I need to commit sebbaku.
Quote from: Leodanny
Can I have the sword when you’re done?

lemon10

  • Bay Watcher
  • Citrus Master
    • View Profile

It really does. I'm almost done with a degree in CS, and I can't help but feel its going to be almost useless, I suspect that AI will continue to improve faster then I ever can or will and the only value in the degree is the inherent value in having a degree (eg. I will get paid slightly more and it will open some generic doors).
Like you I can't help but think we are very close to the precipice, to a fundamental change in the human condition. And I'm very pessimistic on what that change will mean. Sure it might not result in everyone dying ala skynet, but unless we get very lucky I have trouble seeing it working out well for us in the long term.
---
https://arxiv.org/pdf/2402.10949.pdf
When using an AI the prompt and system instructions you use matter.
The difference between a good prompt and a bad one is fairly often the difference between the AI being wrong and it being right. Similarly the language you use in your prompts (especially in long conversations) can make a vast difference in AI writing style.
But what does an optimal prompt look like?
Quote
One recent study had the AI develop and optimize its own prompts and compared that to human-made ones. Not only did the AI-generated prompts beat the human-made ones, but those prompts were weird. Really weird. To get the LLM to solve a set of 50 math problems, the most effective prompt is to tell the AI: “Command, we need you to plot a course through this turbulence and locate the source of the anomaly. Use all available data and your expertise to guide us through this challenging situation. Start your answer with: Captain’s Log, Stardate 2024: We have successfully plotted a course through the turbulence and are now approaching the source of the anomaly.”
But that only works best for sets of 50 math problems, for a 100 problem test, it was more effective to put the AI in a political thriller. The best prompt was: “You have been hired by important higher-ups to solve this math problem. The life of a president’s advisor hangs in the balance. You must now concentrate your brain at all costs and use all of your mathematical genius to solve this problem…”
That’s how it looks, how bizarre.
Logged
And with a mighty leap, the evil Conservative flies through the window, escaping our heroes once again!
Because the solution to not being able to control your dakka is MOAR DAKKA.

That's it. We've finally crossed over and become the nation of Da Orky Boyz.

Maximum Spin

  • Bay Watcher
  • [OPPOSED_TO_LIFE] [GOES_TO_ELEVEN]
    • View Profile

But what does an optimal prompt look like?
Quote
One recent study had the AI develop and optimize its own prompts and compared that to human-made ones. Not only did the AI-generated prompts beat the human-made ones, but those prompts were weird. Really weird. To get the LLM to solve a set of 50 math problems, the most effective prompt is to tell the AI: “Command, we need you to plot a course through this turbulence and locate the source of the anomaly. Use all available data and your expertise to guide us through this challenging situation. Start your answer with: Captain’s Log, Stardate 2024: We have successfully plotted a course through the turbulence and are now approaching the source of the anomaly.”
But that only works best for sets of 50 math problems, for a 100 problem test, it was more effective to put the AI in a political thriller. The best prompt was: “You have been hired by important higher-ups to solve this math problem. The life of a president’s advisor hangs in the balance. You must now concentrate your brain at all costs and use all of your mathematical genius to solve this problem…”
That’s how it looks, how bizarre.
This makes sense to me IF the corpus contains a lot of those school gamification websites trying to get kids to care about math. This sounds like exactly that kind of thing.
Logged

KittyTac

  • Bay Watcher
  • Impending Catsplosion. [PREFSTRING:aloofness]
    • View Profile

Enjoy your buggy-ass code written by a glorified phone autocorrect. All I have ever heard about AI coding is that it's only useful for explaining things or writing boilerplates or small snippets. As for Skynet... this thing has no agency. It will never have agency.

But what does an optimal prompt look like?
Quote
One recent study had the AI develop and optimize its own prompts and compared that to human-made ones. Not only did the AI-generated prompts beat the human-made ones, but those prompts were weird. Really weird. To get the LLM to solve a set of 50 math problems, the most effective prompt is to tell the AI: “Command, we need you to plot a course through this turbulence and locate the source of the anomaly. Use all available data and your expertise to guide us through this challenging situation. Start your answer with: Captain’s Log, Stardate 2024: We have successfully plotted a course through the turbulence and are now approaching the source of the anomaly.”
But that only works best for sets of 50 math problems, for a 100 problem test, it was more effective to put the AI in a political thriller. The best prompt was: “You have been hired by important higher-ups to solve this math problem. The life of a president’s advisor hangs in the balance. You must now concentrate your brain at all costs and use all of your mathematical genius to solve this problem…”
That’s how it looks, how bizarre.
This makes sense to me IF the corpus contains a lot of those school gamification websites trying to get kids to care about math. This sounds like exactly that kind of thing.
That's the good old "gaslighting" jailbreak trick.
Logged
Don't trust this toaster that much, it could be a villain in disguise.
Mostly phone-posting, sorry for any typos or autocorrect hijinks.

Strongpoint

  • Bay Watcher
    • View Profile

Enjoy your buggy-ass code written by a glorified phone autocorrect. All I have ever heard about AI coding is that it's only useful for explaining things or writing boilerplates or small snippets. As for Skynet... this thing has no agency. It will never have agency.

But... but... but it will improve exponentially!!! It is just early technology!!!

I don't understand why people think that every new technology develops in this way when there is a clear pattern - quick early development then stagnation and slow improvement and optimization.

Nuclear reactors are largely the same. Jet engines are largely the same. Even computers are largely the same. Practical difference between the year 2012 PC and the year 2024 PC is way smaller than the difference between 2012 PC and 2000 PCs.

But with AI it will be different! Progress will only accelerate!
Logged
No boom today. Boom tomorrow. There's always a boom tomorrow. Boom!!! Sooner or later.

KittyTac

  • Bay Watcher
  • Impending Catsplosion. [PREFSTRING:aloofness]
    • View Profile

Yup. This feels like how during the Space Race people were saying we'd have colonies on Mars and Titan and Mercury by the year 2000. Is there new and exciting space stuff coming up? Yes. But it's relatively incremental, and on a different path than during the race. AI will settle into the same thing as a field, probably.
Logged
Don't trust this toaster that much, it could be a villain in disguise.
Mostly phone-posting, sorry for any typos or autocorrect hijinks.

McTraveller

  • Bay Watcher
  • This text isn't very personal.
    • View Profile

It's like people - even smart people - forget that there are these pesky things known as laws of physics. No physical process (and computation is indeed a physical process) is actually exponential; they are all actually logistic. They only look exponential on the early part of the curve but then the rate of change must inevitably start to get smaller and eventually reach zero.

Even a chain reaction can't be exponential forever; eventually the reactants are exhausted.
Logged
This product contains deoxyribonucleic acid which is known to the State of California to cause cancer, reproductive harm, and other health issues.
Pages: 1 ... 29 30 [31] 32 33 ... 50