People who have the hardware/wherewithal to run a realistic, fully automated, bespoke-spam-tailoring AI will probably have no problem getting past the Authentication stage.
(Side note:
https://xkcd.com/810/ ..!)
Though they'll probably choose the lowest-hanging fruit. I'm regularly on a wiki which gets
loads of clearly-spammer accounts that get past the account-creation filter but then comparatively rarely seem to get past the further hurdle of actually posting. Real people
can quite easily create (reversible) vandalism, but some machines are clearly pouring all their energy into automatically creating spam accounts for very little result. Probably they have a whole list of target sites they don't really care about, except that, statistically, their messages land in a few places, a few times, for long enough before being reverted away. The classic "spam a million, hope to 419 just a few lucrative and credulous targets" economy of scale.
And that doesn't need sophistication (better, in fact, that most people never tie up your team of phishermen, because the 'hook' is so blatant that it selects only the
really naïve recipients for further involvement), unlike ensuring that everyone is simultaneously having a personalised artificial conversation intended to nudge them towards whatever position of political chaos is the ultimate desire of the 'botmaster, twisting "perceived realities" to order. Yes, a good GPT-like engine could hook more people than the typical copypasta Nigerian Prince screed, or even than the combined Trolls From Olgino, each handling a range of 'different' Twitter handles with fake bios to play the left-leaning off against the right-leaning, and vice versa.
Theoretically, Neo could be trapped in his own
individual Matrix, never meeting anyone else in the system (or visitors with handy "pills"), though of course that works best if you have never met anyone non-artificial and so you could live in your own Pac-Man world and
this seems entirely normal... The less ultimate control the Controllers have, the more difficult it is to hide the artificiality (unless you also have Dark City memory-modification abilities, but that's a step
beyond mere all-emulating abilities). And it needs an impractical amount of resources, but then so does an omni-Matrix for everyone; so if you're already blind to the first degree of seemingly infeasible complications, naturally you could be kept ignorant of the possibility, just to keep your observable world simple enough to be emulated by what
is possible. (Speed of light/Relativity? That's just an abstraction, allowing a...)
...I digress. A long way from the original point. The idea I started out trying to express is that the potential for AI to fool people, both en masse
and individually, isn't necessarily that impossible, but it may be more trouble than is strictly necessary when all you want to do is push and prod and nudge people enough to enact some imperfect form of Second Foundation manipulation upon society. (Imperfect because, for example, Putin surely initially wanted a weakened Hillary presidency rather than what he got with her opponent... but his meddling may have pushed things over that balance point and left him to deal with the result instead.) And the cost/benefit of using hired worker-drones with very little instruction probably beats that of trying to build an MCP fielding many instances of AI, plus all the programming necessary to bootstrap and maintain it.
(Another side note:
https://xkcd.com/1831/ ...)
((Edit to correct run-on formatting error.))