Bay 12 Games Forum


Poll

Reality, The Universe and the World. Which will save us from AI?

Reality
- 13 (65%)
Universe
- 4 (20%)
The World
- 3 (15%)

Total Members Voted: 20


Pages: 1 2 3 [4] 5 6 ... 50

Author Topic: What will save us from AI? Reality, the Universe or The World? Place your bet.  (Read 49988 times)

Maximum Spin

  • Bay Watcher
  • [OPPOSED_TO_LIFE] [GOES_TO_ELEVEN]
    • View Profile

1. Well of course you can't peek inside its head. But you can tell by the fact that you did not build a long-term memory and the capacity to self-train into the AI.
Well, yes, but A) there are definitely people currently trying to do that, I've met some, and B) sometimes you don't actually intend to do so but accidentally give it that ability, due to the unexpected interactions of other things.
Like... if you gave ChatGPT a camera or some other means of looking at its own output, you just accidentally gave it a long-term memory, since it can now write notes to itself Memento-style. Obviously what I said before about it needing to learn how to do this in training still applies, but it's just meant as a metaphorical example.
Certainly I agree that the capacity to self-train is more important anyway. The problem is just that people currently working on AI absolutely want to do that.
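To illustrate the "notes to itself" point: a minimal, hypothetical sketch of how a stateless, turn-based model acquires long-term memory the moment its own earlier output is fed back into its prompt. `call_model` is a made-up stand-in for any text-in/text-out model, not any real API; here it just counts the notes it can see, so the loop is runnable.

```python
# Sketch of the "Memento-style" memory described above: the model itself is
# stateless, but the loop around it feeds its past output back in as input.
# `call_model` is a hypothetical stand-in for a real LLM call.

def call_model(prompt: str) -> str:
    """Stand-in for a real text-in/text-out model call."""
    return f"NOTE: remembered {prompt.count('NOTE:')} earlier notes"

def run_turns(n_turns: int) -> list[str]:
    notes: list[str] = []  # the external "scratchpad" the model writes to
    for _ in range(n_turns):
        # Each turn, the model sees everything it wrote on previous turns...
        prompt = "\n".join(notes) + "\nWhat do you remember?"
        reply = call_model(prompt)
        # ...and whatever it writes now persists into the next turn.
        notes.append(reply)
    return notes

history = run_turns(3)
```

All the persistence lives in the loop, not the model, which is why giving a model any channel to read its own output (a camera, a file, a forum thread) amounts to giving it a memory.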

Quote
2. Well that's kind of my point, the AI can't become sapient if you don't add a persistent memory into it. But I am also skeptical that a "turn-based" (for lack of a better word) AI could manipulate humans by itself unless it was, I suppose, trained to do so and use psychology to keep users engaged. But considering those are language models with no access to anything except text, the worst this can realistically be used for are advanced spambots: basically automated con men that pretend to befriend people and push products on them. That is highly inconvenient and should probably be safeguarded against but it's not exactly an apocalyptic threat. I will start fearing AI when it can do that, and learn new classes of actions by itself.
Agreed to an extent, like I said, an AI can only do what you give it actuators to do. And I am absolutely not telling you to fear AI, since I don't either, I just want to make sure you don't fear AI for the wrong reasons.
There are worse things that language models could be made to do, though, like "befriend people and then try to convince them to do things, like becoming a terrorist", or "start posting requests on gig sites to get people to do things for unknown purposes", or... anything you can achieve by talking to the right people, which is a lot of things. Still, I'd agree that it's hard to call that an AI risk when they still need someone to WANT to do it, since they can't want things on their own, and you could just as easily do those things yourself.

Quote
3. This can be safeguarded against by testing the AI after training to verify it doesn't have a sense of time that it can express. And I am aware organic brains have a "clock", it's just fast enough to be continuous by my standards. And it runs constantly.
I keep trying to make it clear that just because it can't/doesn't express something doesn't mean it can't USE it. Even if it can't lie or has no reason to do so, it can be wrong. I mean, plenty of people have alexithymia, for example.
Logged

jipehog

  • Bay Watcher
    • View Profile

Damn starver, your posts never fit into a couple of minutes of skimming time, but they're always worth coming back to. I love them :P

I think that our current style of AI development can never be sapient no matter how much it is trained on.

True. To clarify, I thought that we were talking about Artificial General Intelligence (AGI) potential rather than current narrow AI.
Logged

MaxTheFox

  • Bay Watcher
  • Лишь одна дорожка да на всей земле
    • View Profile

I'm still pretty skeptical about self-training being achievable on a fast enough timescale to pose a real threat with our current technology but I guess I'll wait and see. :shrug:
Logged
Woe to those who make unjust laws, to those who issue oppressive decrees, to deprive the poor of their rights and withhold justice from the oppressed of my people, making widows their prey and robbing the fatherless. What will you do on the day of reckoning, when disaster comes from afar?

Maximum Spin

  • Bay Watcher
  • [OPPOSED_TO_LIFE] [GOES_TO_ELEVEN]
    • View Profile

I'm still pretty skeptical about self-training being achievable on a fast enough timescale to pose a real threat with our current technology but I guess I'll wait and see. :shrug:
With current models it's definitely infeasible.
Logged

Starver

  • Bay Watcher
    • View Profile

It's not in a company or government's best interest to create a sapient AI. [...]

[...] If it has a continuous perception of the world (not just prompting) and has a long-term memory and personality not just determined by its context, that can be altered on the fly (this is important, just finetuning doesn't count), and can learn whole new classes of tasks by doing so (from art to driving), then I'll consider an AI sapient.
These two points alone contradict each other. A government/company wants an automatic system to do everything that the country/business needs it to do (or, possibly, that the Leader/CEO does!), unflinching, unwavering, completely loyal to the people(/person) in charge, removing issues of mere human disloyalty or other failings having to be guarded against (and guard the guards, etc), and ensuring your legacy (or your country/company, at least to sell it to the cabinet/board) to ensure it doesn't fall over when situations change beyond various parameters.

It might not seem as if the difficulties of either Wargames or Tron could come about (or the Terminator setting or, with a bit of a drift away from natural-born-silicon AI, the finale to Lawnmower Man), but the fictional drivers are also there in real life, the difference being only the true capabilities of the magic box with flashing lights, in whatever form...

...snipping quite a bit of more rambling (though it was finely crafted rambling!), the Internet itself has much of that definition of sapience. It's schizophrenic (not obviously a single personality) and self-learning is the big thing it isn't (though people add things onto it, to grant it new task-solving capabilities). Not really far off, though. If anything, my definition of sapience is harsher and harder to prove (let alone achieve). ;)
Logged

EuchreJack

  • Bay Watcher
  • Lord of Norderland - Lv 20 SKOOKUM ROC
    • View Profile

You all remember that HAL killed the astronauts because he was following his programming.
If he'd been allowed a bit more discretion, as he was in the later novels, he'd have been a pretty decent person.

And it was explicitly government interference that triggered the issue that drove HAL to kill the astronauts.  Just saying.

MaxTheFox

  • Bay Watcher
  • Лишь одна дорожка да на всей земле
    • View Profile

It's not in a company or government's best interest to create a sapient AI. [...]

[...] If it has a continuous perception of the world (not just prompting) and has a long-term memory and personality not just determined by its context, that can be altered on the fly (this is important, just finetuning doesn't count), and can learn whole new classes of tasks by doing so (from art to driving), then I'll consider an AI sapient.
These two points alone contradict each other. A government/company wants an automatic system to do everything that the country/business needs it to do (or, possibly, that the Leader/CEO does!), unflinching, unwavering, completely loyal to the people(/person) in charge, removing issues of mere human disloyalty or other failings having to be guarded against (and guard the guards, etc), and ensuring your legacy (or your country/company, at least to sell it to the cabinet/board) to ensure it doesn't fall over when situations change beyond various parameters.

It might not seem as if the difficulties of either Wargames or Tron could come about (or the Terminator setting or, with a bit of a drift away from natural-born-silicon AI, the finale to Lawnmower Man), but the fictional drivers are also there in real life, the difference being only the true capabilities of the magic box with flashing lights, in whatever form...

...snipping quite a bit of more rambling (though it was finely crafted rambling!), the Internet itself has much of that definition of sapience. It's schizophrenic (not obviously a single personality) and self-learning is the big thing it isn't (though people add things onto it, to grant it new task-solving capabilities). Not really far off, though. If anything, my definition of sapience is harsher and harder to prove (let alone achieve). ;)
Yeah, maybe I'm overestimating how rational they are as actors lmao.
Logged
Woe to those who make unjust laws, to those who issue oppressive decrees, to deprive the poor of their rights and withhold justice from the oppressed of my people, making widows their prey and robbing the fatherless. What will you do on the day of reckoning, when disaster comes from afar?

jipehog

  • Bay Watcher
    • View Profile

In recent years bots have become much more pervasive online. Not just as a means to sell junk, or as bad (even state) actors' vehicle for the misinformation and other malicious activity we often hear about, but also as a crucial part of everyone's political campaigns (your party, whoever you may be, does it too). For a few years there have been warnings that this is a worldwide trend, particularly in the developed world, and the capabilities shown by ChatGPT-4 would make it easier than ever.

Consider this: how would you know if one day Twitter, or any other social media, was mostly such bots? And does that make Musk's idea of introducing identification to Twitter more sensible?

---

Btw, OpenAI's goal is the development of AGI. I can't say I trust OpenAI (I am more in the "trust but verify" camp), but I like their more open "early access" mode of development, which helps flesh out problems they never thought of and thus helps shape future development for the best.

I have no doubt that many governments have been pursuing more advanced forms of AI for cyber purposes, defensive or offensive; naturally, transparency doesn't lend itself well to that mode of development.

edited
« Last Edit: March 27, 2023, 03:11:01 am by jipehog »
Logged

Strongpoint

  • Bay Watcher
    • View Profile

Quote
Consider this: how would you know if one day Twitter, or any other social media, was mostly such bots? And does that make Musk's idea of introducing identification to Twitter more sensible?

Develop an AI that will detect if the text is natural or AI-generated!
Logged
No boom today. Boom tomorrow. There's always a boom tomorrow. Boom!!! Sooner or later.

King Zultan

  • Bay Watcher
    • View Profile

The future is AI powered spam bots and political campaigns, and it sounds terrible.
Logged
The Lawyer opens a briefcase. It's full of lemons, the justice fruit only lawyers may touch.
Make sure not to step on any errant blood stains before we find our LIFE EXTINGUSHER.
but anyway, if you'll excuse me, I need to commit sebbaku.
Quote from: Leodanny
Can I have the sword when you’re done?

jipehog

  • Bay Watcher
    • View Profile

Quote
Consider this: how would you know if one day Twitter, or any other social media, was mostly such bots? And does that make Musk's idea of introducing identification to Twitter more sensible?

Develop an AI that will detect if the text is natural or AI-generated!

We can try playing the usual cat-and-mouse game. There are already services that claim to reliably find ChatGPT-4 patterns in longer low-effort texts. However, with ChatGPT's demonstration of how easy it is to deploy content at scale, I think the scales have shifted against us. Soon, ChatGPT-4-like (or better) open-source LLMs will proliferate, and they could be designed to evade detection and deployed without any safety features.
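The cat-and-mouse idea can be made concrete with a deliberately naive sketch. Real detection services use large-model perplexity, not this; the toy below only illustrates the principle (text that closely reuses patterns from a reference corpus scores as more "predictable") with a tiny bigram model. The corpora and sample sentences are invented for illustration.

```python
# Toy sketch of statistical AI-text detection: score a text by what fraction
# of its word bigrams appear in a reference corpus. Real detectors use LLM
# perplexity; this only demonstrates the general principle.

from collections import Counter

def train_bigrams(corpus: str) -> Counter:
    words = corpus.lower().split()
    return Counter(zip(words, words[1:]))

def predictability(text: str, bigrams: Counter) -> float:
    """Fraction of the text's bigrams already seen in the reference corpus."""
    words = text.lower().split()
    pairs = list(zip(words, words[1:]))
    if not pairs:
        return 0.0
    return sum(1 for p in pairs if p in bigrams) / len(pairs)

reference = "the cat sat on the mat and the dog sat on the rug"
model = train_bigrams(reference)

formulaic = "the cat sat on the rug"          # reuses reference patterns
novel = "quantum forums devour purple ideas"  # no bigram overlap
```

The evasion side of the game is the mirror image: a generator tuned to avoid its own telltale patterns drives exactly this kind of score back toward the human baseline.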

Otherwise, here is the incentive for an AI arms race to create bigger, better AIs.

The future is AI powered spam bots and political campaigns, and it sounds terrible.
Depends for whom. By reputation, the 4chan crowd might have a field day with troll AI used to trigger the woke crowd. Russian troll farms are known for disrupting domestic political conversations online connected with opposition figures. How about a "join my cult" AI preacher? My master race? Praise my Krishna? Etc.
« Last Edit: March 28, 2023, 12:23:09 am by jipehog »
Logged

lemon10

  • Bay Watcher
  • Citrus Master
    • View Profile

And of course these prompted AIs have a sense of time in a sense, since they don't simply calculate instantly when prompted
Probably not, honestly.
Creatures only evolve senses when they are useful in their environment, which is why we have the ability to perceive common parts of the EM spectrum but not tachyons or gamma radiation.

So if these AI gain no benefit at all by sensing the passing of time they won't ever be trained into understanding it.

Of course *some* AI totally have the concept of time. For instance this DOTA 2 bot? Yeah, it totally gets it.
The future is AI powered spam bots and political campaigns, and it sounds terrible.
Honestly I've been worried about this topic in particular.
A single AI that gets onto B12 will be able to make more posts per day than every single human on the forum combined.
Assuming it makes a large number of accounts, it's entirely possible that when you talk to someone here there will be something like a 90% chance it isn't an actual person.

If they only pushed [whatever their agenda is] it would be easy to see who they are, but if they are subtle there won't really be any way to tell.

Of course this won't be confined to B12; all free sites without hard verification will be vulnerable, which makes me worried about the future of the free, anonymous internet.
We might end up having to adopt the China model, where everyone registers with their actual real-world information (presumably in the form of some kind of ID code), to avoid a future where 99% of posts on the internet are made by bots trying to sell you something, control you, feed you misinformation, or convert you to Scientology.
Logged
And with a mighty leap, the evil Conservative flies through the window, escaping our heroes once again!
Because the solution to not being able to control your dakka is MOAR DAKKA.

That's it. We've finally crossed over and become the nation of Da Orky Boyz.

MaxTheFox

  • Bay Watcher
  • Лишь одна дорожка да на всей земле
    • View Profile

I suppose another solution (one that also, unfortunately, infringes on privacy somewhat) is to have users solve video captchas before registering, where you have to provide a video of yourself and your room. Not many people will have the hardware needed to make that kind of deepfake for the foreseeable future.
Logged
Woe to those who make unjust laws, to those who issue oppressive decrees, to deprive the poor of their rights and withhold justice from the oppressed of my people, making widows their prey and robbing the fatherless. What will you do on the day of reckoning, when disaster comes from afar?

Starver

  • Bay Watcher
    • View Profile

People who have the hardware/wherewithal to run a realistic fully automated bespoke-spam-tailoring AI will probably have no problem getting past the Authentication stage.

(Side note: https://xkcd.com/810/ ..!)

Though they'll probably choose the lowest-hanging fruit. I'm regularly on a wiki which gets loads of clearly-spammer accounts that get past the account-creation filter but then comparatively rarely seem to get past the further hurdle to posting. Real people can quite easily create (reversible) vandalism, but some machines are clearly pushing all their energy into automatically creating spam accounts for very little result, probably because they have a whole list of target sites that they don't really care about, except that statistically they get their messages into a few places, a few times, for long enough before being reverted away. The classic "spam a million, hope to 419 just a few lucrative and credulous targets" economy of scale.
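That economy of scale is just expected-value arithmetic: with near-zero sending cost, even a vanishingly small hit rate pays. The figures below are invented for illustration, not real measurements of any actual campaign.

```python
# Expected-value sketch of the "spam a million, hook a few" model.
# All numbers are assumptions chosen purely to illustrate the arithmetic.

def expected_profit(messages: int, cost_per_message: float,
                    hit_rate: float, payout_per_hit: float) -> float:
    """Expected net profit of a bulk-spam campaign."""
    return messages * (hit_rate * payout_per_hit - cost_per_message)

# Assumed: a million messages at ~$0.0001 each, one victim per 100,000
# recipients, paying an average of $500.
profit = expected_profit(1_000_000, 0.0001, 1e-5, 500.0)
```

The same arithmetic explains why the blatant hook is a feature: lowering the hit rate while filtering out everyone who would waste a human follow-up's time barely changes the expected profit.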

And that doesn't need sophistication (better, in fact, to have most people never tie up your team of phishermen, because the 'hook' is so blatant that it selects only the really naïve recipients for further involvement), unlike ensuring that everyone is simultaneously having a personalised Artificial conversation intended to nudge them towards whatever position of political chaos is the ultimate desire of the 'botmaster, twisting "perceived realities" to order. Yes, a good GPT-like engine could hook more people than the typical copypasta Nigerian Prince screed, or even the combined Trolls From Olgino, each handling a range of 'different' Twitter handles with fake bios to play left-leaning off against right-leaning, and vice versa.

Theoretically, Neo could be trapped in his own individual Matrix, never meeting anyone else in the system (or visitors with handy "pills"), though of course that works best if you have never met anyone non-artificial, so you could live in your own Pac-Man world and it would seem entirely normal... The less ultimate control the Controllers have, the more difficult it is to hide the artificiality (unless you also have Dark City memory-modification abilities, but that's off beyond mere all-emulating abilities). And it needs an impractical amount of resources, but then so already does an omni-Matrix, for all, so if you're already blind to the first degree of seemingly infeasible complications, naturally you could be kept ignorant of the possibility, just to keep your observable world simple enough to be emulated by what is possible. (Speed of light/Relativity? That's just an abstract, allowing a

...I digress. A long way from the original point. The idea I started to try to say is that the potential for AI to fool people, both en masse and individually, isn't necessarily that impossible, but may be more trouble than is strictly necessary when all you want to do is push and prod and nudge people enough to enact some imperfect form of Second Foundation manipulation upon society. (Imperfect, because (e.g.) surely Putin initially wanted a weakened Hillary presidency rather than what he got with her opponent... but his meddling may have pushed things over that balance point and meant he had to deal with the result, instead.) And the cost/benefit for using hired workerdrones, with very little instruction, probably outweighs trying to make an MCP fielding many instances of AI, with all the programming necessary to bootstrap and maintain it.

(Another side note: https://xkcd.com/1831/ ...)

((Edit to correct run-on formatting error.))
« Last Edit: March 29, 2023, 06:49:14 am by Starver »
Logged

jipehog

  • Bay Watcher
    • View Profile

Creatures only evolve senses when they are useful in their environment, hence why we have the ability to perceive common parts of the EM spectrum but not the ability to perceive tachyons or gamma radiation.

So if these AI gain no benefit at all by sensing the passing of time they won't ever be trained into understanding it.

Of course *some* AI totally have the concept of time. For instance this DOTA 2 bot? Yeah, it totally gets it.

In terms of evolution, the concern is our lack of understanding. As with ourselves, we understand how the AI is created and what it is made of, but its 'mind' is a mystery. Already we often do not understand how an AI achieves its solutions, and when we do manage to figure it out, it has often used unexpected things to its benefit. Also, speaking of spectrums: would we be able to comprehend how an AI uses these any more than a deaf person trying to comprehend sound?

On that note, the DOTA 2 bot is just the start. AI pilots have already been developed that used their superior calculation ability to more accurately predict the development of a battle and gain the initiative in a confrontation, besting real pilots; and the AI pilot program isn't limited to the virtual realm, as they are already being tested in real-life aircraft, gaining awareness and agency in the real world. Reportedly it also aims to learn from experience.


p.s. As scary as AI weapon platforms are, I think that autonomous vehicles are harder to develop and would require more advanced AI.

Also, do we have a proper definition of intelligence? It could be hard to find the ghost in a machine we do not understand, especially if we assume that it will develop/manifest in the same way as it did in us.
Logged