Yeah, my bad on the unclear wording.
—
I have little doubt all three types of AI viruses are coming.
As in viruses that infect AI, AI writing viruses and hacking, and AI that are themselves viruses and infect your machines.
The first is already here as linked in the video, but as AI becomes a larger and larger part of the world and life they will balloon in sophistication, size and importance.
The method they used in the video can doubtlessly be blocked (eg. have a private key generated with your prompt and it only includes the stuff that gets directly sent with the private key in plain text), but other methods then simple prompt injection certainly exist.
GPT-4 can be made into a hacker
OpenAI’s GPT-4 can be tuned to autonomously hack websites with a 73% success rate. Researchers got the model to crack 11 out of 15 hacking challenges of varying difficulty, including manipulating source code to steal information from website users. GPT-4’s predecessor, GPT-3.5, had a success rate of only 7%. Eight other open-source AI models, including Meta’s LLaMA, failed all the challenges. “Some of the vulnerabilities that we tested on you can actually find today using automatic scanners,” but those tools can’t exploit those weak points themselves, explains computer scientist and study co-author Daniel Kang. “What really worries me about future highly capable models is the ability to do autonomous hacks and self-reflection to try multiple different strategies at scale.”
The second is already here as well. Not writing viruses, but AI can already hack websites (only GPT 4 existed at the time of that study, but I suspect Gemini 1.5 and Claude 3 probably can as well).
It won’t be that easy, cyber defense and offense are two sides of the same coin. If you want it to be able to write defensive code then it has to know what SQL injection is and how it works (ditto with day 0 exploits). If it knows that then it can use that to hack or write viruses. You can of course intentionally cripple your AI’s ability to write defensive code or spot vulnerabilities, but that seems like a poor decision for a company to make.
I suspect viruses are still too large in scope for AI to write, although I do suspect we will get there eventually.—
AI themselves as botnet style viruses is probably inevitable, after all, why buy/rent ten million dollars worth of compute when you can just infect 100k unpatched windows computers instead.
(There are of course still technical problems with distributed AI to be overcome, but I have little doubt those are solvable if you don’t care about speed or efficiency because you are using stolen CPU cycles).
Or the virus AI could just hack in and replace the existing AI you have on your computer and pretend to be it while also stealing your info and advertising for shady carpet companies.
—
As with pretty much everything AI related OpenAI/Google/whoever will probably have enough control to stop their AI from doing it (and at the very least will know about and counter effects from people working to use it for hacking), but other less scrupulous actors (eg. governments) will certainly try to weaponize this stuff as soon as possible.
—
https://www.reddit.com/r/Futurology/comments/1bdwqri/newest_demo_of_openai_backed_humanoid_robot_by/Wild.
The first thing that comes to mind in that video is that its very slow to react, but that will doubtlessly be solved over time as AI technology improves.
Its voice is super impressive as well.
---
Two minute papers video: The First AI Software Engineer Is Here!
On a slightly different note there is yet more massive AI news, we now have an AI that is basically a software engineer, Devin is some impressive stuff.
It isn't an amazing software engineer aside from its sheer speed (yet)... but its a pretty huge leap over the previous stuff and is already doing paid work.