Way back in this thread I posted about how we would need actual verification via ID (or some similar method) to tell whether someone was a bot, because it would soon be basically impossible otherwise due to advances in AI capabilities (even at that point, AI could already basically break captchas).
Everything since then has only made me more convinced I was correct.
https://www.reddit.com/r/mildlyinfuriating/comments/1hsqe2z/metas_aigenerated_profiles_are_starting_to_show/
I expected this to become a massive issue driven by external forces trying to make money/acquire influence/scam, but holy shit, it turns out the call is coming from inside the building?
Someone at the company went "hey, let's make hundreds of thousands of AI accounts filled with fake pictures that will easily trick a ton of people" and Meta just rolled with it.
Now they've backed off from it because just about everyone instantly went "oh my god, this is terrible, what is wrong with you", but it's amazing it got this far.
---
A strange phenomenon I expect will play out: for the next phase of AI, it's going to get better at a long tail of highly-specialized technical tasks that most people don't know or care about, creating an illusion that progress is standing still.
Researchers will hit milestones that they recognize as incredibly important, but most users will not understand the significance at the time.
Robustness across the board will increase gradually. In a year, common models will be much more reliably good at coding tasks, writing tasks, basic chores, etc. But robustness is not flashy and many people won't perceive the difference.
At some point, maybe two years from now, people will look around and notice that AI is firmly embedded into nearly every facet of commerce because it will have crossed all the reliability thresholds. Like when smartphones went from a novelty in 2007 to ubiquitous in the 2010s.
It feels very hard to guess what happens after that. Much is uncertain and path dependent. My only confident prediction is that in 2026 Gary Marcus will insist that deep learning has hit a wall.
(Addendum: this whole thread isn't even much of a prediction. This is roughly how discourse has played out since GPT-4 was released in early 2023, and an expectation that the trend will continue. The long tail of improvements and breakthroughs is flying way under the radar.)
@Starver
AI just isn't quite there yet. As you say, currently it's very much in the "badly deliver" and "just give me a normal non-trash interface, dammit" category for many things.
It's too unreliable: you want far superhuman levels of accuracy for stuff like replacing google search results or giving it free control over your computer. Even 99.9% accuracy would mean 1 in 1000 people posting examples of the AI being stupid on social media, and giving it control of your computer is just a bad idea. It's too expensive if you want to put it in everything: the good stuff costs significant money to run, and at even 1 cent per google search the cost builds up exceedingly fast. Anything you can run locally on your phone is still quite stupid. And it can't interface properly with other programs and technology (e.g. it can't do a bunch of web searches to find the answer and answer your question correctly).
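To put a rough number on that cost point, here's a minimal back-of-envelope sketch. The ~8.5 billion searches per day is a commonly cited estimate, not a figure from this thread:

```python
# Back-of-envelope: what "1 cent per google search" adds up to.
# Assumption: ~8.5 billion searches per day (commonly cited estimate).
searches_per_day = 8.5e9
cost_per_search = 0.01  # dollars

daily_cost = searches_per_day * cost_per_search
yearly_cost = daily_cost * 365

print(f"Daily cost:  ${daily_cost:,.0f}")   # ~$85,000,000 per day
print(f"Yearly cost: ${yearly_cost:,.0f}")  # ~$31,025,000,000 per year
```

Even if the per-query price dropped by 10x, that's still billions per year, which is why inference cost matters so much for putting AI in everything.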
But. All of those are rapidly improving, and are far less of an issue now than they were at the start of 2024, and will be far less of an issue again by the end of the year.
Compared to a year ago, models are far more reliable on many topics (but still not there yet) and far cheaper (AI that vastly eclipses start-of-2024 AI is 20 times cheaper); small models are far smarter than large models from a year ago, and some can already do stuff like search the web and write detailed, meticulously sourced reports for you.
---
Question for the AI skeptics here (re: agency).
Does AI actively attempting to escape count as it showing agency?