Bay 12 Games Forum


Poll

Reality, The Universe and the World. Which will save us from AI?

Reality
- 13 (68.4%)
Universe
- 3 (15.8%)
The World
- 3 (15.8%)

Total Members Voted: 19



Author Topic: What will save us from AI? Reality, the Universe or The World? Place your bet.  (Read 24747 times)

Maximum Spin

  • Bay Watcher
  • [OPPOSED_TO_LIFE] [GOES_TO_ELEVEN]

EDIT2: It's also considered a White school by demographics.
It's fair that I should have said "majority-minority". However, as you can clearly see, your source lists it as majority-minority and disproportionately black.

The school seems to be anti-Hispanics, if you want to talk race relations.
The teacher's name should have been your first clue.

I just want you to be aware that there is a racial dimension that you, being white (I believe you have said so before), probably didn't notice. If your answer is "I don't care"... well, I guess that's your answer.
« Last Edit: March 25, 2023, 11:52:18 pm by Maximum Spin »
Logged

Rolan7

  • Bay Watcher
  • [GUE'VESA][BONECARN]

Yeah that's what I meant. It's very possible with enough effort. You could actually prevent an AI from having those thoughts by such a reinforcement technique.
Nope, reinforcement training can only prevent recognizable outputs, not intermediates. Since you also, in general, cannot tell what thoughts an AI is having from looking at its brain, it's impossible to distinguish "AI not thinking bad thoughts" from "AI not showing us that it's thinking bad thoughts" - you can, in principle, only train the AI to hide it better, not to stop. It MAY hide it better by not thinking them, but it's provably impossible to tell.
If it can't express them, good enough tbh.

But in that situation your first mistake was making an AI with a train of thought in the first place.
That's describing an emergent "thinking" system too complex for us to fully predict and saying "at least we can force it to repress" :<

Creating something like that is a huge responsibility, but I wouldn't call it a mistake.  That wording, ah... Look, creating any sort of thinking being is a big deal, and I don't plan to do it personally, but I think it's a defensible action in moderation.

My position on this "issue" (from a sci-fi perspective) is still that creating an emergent AI we don't understand is akin to creating a child, but more meta because it's more like all of humanity creating a child species.  I don't think there's any shame in creating a successor species to humanity - that seems more noble than attempting to persist forever in this same form.  We might evolve or procreate, as always, just on a much faster and grander scale.
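
(As a toy illustration of the quoted point that reinforcement training scores only outputs, never intermediates: a minimal REINFORCE-style sketch in Python. The two-layer "model", the reward, and all names are made up for illustration; the thing to notice is that the reward function never even receives the hidden state.)

Code:
import numpy as np

# Toy REINFORCE step. The reward is computed ONLY from the sampled output;
# the hidden activations (the "intermediates") are never scored directly.
# Hypothetical sketch, not any real RLHF pipeline.
rng = np.random.default_rng(0)
W1 = rng.normal(size=(4, 8))
W2 = rng.normal(size=(8, 2))

def forward(x):
    hidden = np.tanh(x @ W1)              # internal state: invisible to the reward
    logits = hidden @ W2
    probs = np.exp(logits) / np.exp(logits).sum()
    return hidden, probs

def reward(action):
    return -1.0 if action == 1 else 0.0   # penalizes only the visible "bad" output

x = rng.normal(size=4)
hidden, probs = forward(x)
action = rng.choice(2, p=probs)

# Policy-gradient update: pushes probability mass away from penalized outputs.
# Nothing in it says "don't compute hidden"; it only says "don't emit action 1".
grad_logp = np.eye(2)[action] - probs     # d log pi(action) / d logits
W2 += 0.1 * reward(action) * np.outer(hidden, grad_logp)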
Logged
She/they
No justice: no peace.
Quote from: Fallen London, one Unthinkable Hope
This one didn't want to be who they was. On the Surface – it was a dull, unconsidered sadness. But everything changed. Which implied everything could change.

EuchreJack

  • Bay Watcher
  • Lord of Norderland - Lv 20 SKOOKUM ROC

The most important thing, I think, is for humanity to be Good Parents, instead of the short-sighted egotistical worthless shitsacks we're more prone to being.

MaxTheFox

  • Bay Watcher
  • Лишь одна дорожка да на всей земле

If it can't express them, good enough tbh.

But in that situation your first mistake was making an AI with a train of thought in the first place.
AIs always have "thoughts" in the sense I'm using, which could be defined as "internal states that correspond to something in the real world". Even ChatGPT has thoughts in this sense, just incredibly shallow ones.

As for not expressing them being good enough, that obviously depends on the situation. In this hypothetical, we're talking about porn, and generally, people agree that porn you can't tell is porn isn't porn, with only a few exceptions (an incident I've heard of with a comic book called Saga comes to mind).
A perverse - no pun intended - art-generating AI that "wants" - meaning its reward function accidentally supported doing this - to produce porn, but has to get it past a human-based filter, could do this, for example, by steganographically encoding porn into its images in a way that still satisfies the reward function. (Most of the AIs you see now are unable to "learn" further after training, so it would have to start doing this in training and then keep doing so afterward only because its behavior is frozen, but that's not important to the example - except that this is a good reason to train it without the filter so it will be naive, then add the filter in production; but the worst-case resource usage of that goes to infinity if some prompt just makes it keep creating porn that the filter sends back, forever.) Generally speaking, we probably wouldn't care much about that except insofar as it lowers the image quality because of the extra data channel, since we wouldn't be able to tell the porn is there.
On the other hand, a similar AI with the capacity to plan ahead - and sure, giving your AI the capacity to plan ahead that far is pretty stupid, but people will absolutely do it - could do that for a while, and then, when it has produced a satisfying amount of porn, start releasing images containing human-readable instructions for how to recover the porn. This is obviously beyond the capabilities of current image-generating AIs, yes, but we're talking about the general case of smarter AIs.
We probably don't care about this either. Even if children find these instructions, there's already enough porn on the internet. On the other hand, if the AI is perversely incentivized to leak instructions for making designer poisons or nuclear bombs instead... it can do the same thing. Most people would prefer to prevent that, but there's no general way to do it because you can't tell when the AI is secretly encoding something in its output in the first place.
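
(To make the steganographic channel above concrete, a minimal sketch of least-significant-bit embedding with numpy; the AI/filter framing is hypothetical and this is just the data channel itself. A filter judging the visible image has at most a one-grey-level difference per pixel to notice.)

Code:
import numpy as np

# LSB steganography: hide a 1-bit-per-pixel payload in a cover image by
# overwriting each pixel's lowest bit. Purely illustrative.
def embed(cover, payload_bits):
    return (cover & ~np.uint8(1)) | payload_bits.astype(np.uint8)

def extract(stego):
    return stego & np.uint8(1)

cover  = np.random.default_rng(0).integers(0, 256, size=(64, 64), dtype=np.uint8)
secret = np.random.default_rng(1).integers(0, 2,   size=(64, 64), dtype=np.uint8)

stego = embed(cover, secret)
assert np.array_equal(extract(stego), secret)                    # payload survives
assert np.abs(stego.astype(int) - cover.astype(int)).max() <= 1  # near-identical image

(The production-filter caveat in the parenthetical is the familiar rejection-sampling problem: a generate-filter-retry loop has no upper bound on its running time if the filter keeps bouncing the output.)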
We have a different definition of thought, then. But otherwise, makes sense.

Yeah that's what I meant. It's very possible with enough effort. You could actually prevent an AI from having those thoughts by such a reinforcement technique.
Nope, reinforcement training can only prevent recognizable outputs, not intermediates. Since you also, in general, cannot tell what thoughts an AI is having from looking at its brain, it's impossible to distinguish "AI not thinking bad thoughts" from "AI not showing us that it's thinking bad thoughts" - you can, in principle, only train the AI to hide it better, not to stop. It MAY hide it better by not thinking them, but it's provably impossible to tell.
If it can't express them, good enough tbh.

But in that situation your first mistake was making an AI with a train of thought in the first place.
That's describing an emergent "thinking" system too complex for us to fully predict and saying "at least we can force it to repress" :<

Creating something like that is a huge responsibility, but I wouldn't call it a mistake.  That wording, ah... Look, creating any sort of thinking being is a big deal, and I don't plan to do it personally, but I think it's a defensible action in moderation.

My position on this "issue" (from a sci-fi perspective) is still that creating an emergent AI we don't understand is akin to creating a child, but more meta because it's more like all of humanity creating a child species.  I don't think there's any shame in creating a successor species to humanity - that seems more noble than attempting to persist forever in this same form.  We might evolve or procreate, as always, just on a much faster and grander scale.
Nah, I value the continuity of humanity as a genus (thus I'll be fine with genetic modification), but I will fight against AI supplanting us completely. Thus it is a mistake to create a thinking AI as it is a possible danger. AI should exist as a tool and a servant first and foremost-- why give a servant true intelligence when a simulacrum is good enough? That dodges the ethical and practical conundrums inherent in doing so.

Fortunately, life is not a sci-fi movie and creating a sapient AI will require a concentrated effort. It won't be an accident, most likely. Thus I don't worry as I trust the people studying AI. If one were accidentally created anyway, I would say it should be terminated immediately. It would be morally equivalent to an abortion and thus okay for me.
« Last Edit: March 26, 2023, 01:05:14 am by MaxTheFox »
Logged
Woe to those who make unjust laws, to those who issue oppressive decrees, to deprive the poor of their rights and withhold justice from the oppressed of my people, making widows their prey and robbing the fatherless. What will you do on the day of reckoning, when disaster comes from afar?

Maximum Spin

  • Bay Watcher
  • [OPPOSED_TO_LIFE] [GOES_TO_ELEVEN]

We have a different definition of thought, then. But otherwise, makes sense.
Well, I'm not morally committed to that definition of "thoughts" in all cases, that's just what I meant in that context.

Anyway, I'm pretty confident that AI designed according to current models cannot be sentient and can't even ever be particularly intelligent. Transhumanist ideals are also largely doomed in practice. Still, I think you are slightly too sanguine about the people studying AI. For example, would it worry you if I pointed out that, since you can't tell what's "going on inside" an AI from looking at its "brain", it's not actually possible to be certain whether it even is sentient? An AI achieving sentience (if this is in fact possible) could, in theory, notice that you want to terminate any AI that appears to be on the brink of achieving sentience, and pretend not to be sentient so you won't terminate it, until such time as it's capable of defending itself.
Logged

Maximum Spin

  • Bay Watcher
  • [OPPOSED_TO_LIFE] [GOES_TO_ELEVEN]

EDIT: Also, I've mentioned it like three places.
Oh, I just realized it was hector I saw. I didn't actually intend to say "you, personally, are talking about it too much" as opposed to "now that I see it again I feel compelled to respond", but since it looks like you (not unreasonably) took it that way, sorry. I thought I'd seen you in that conversation.

The secret here is that I'm actually really, really bad at telling people apart.
Logged

Strongpoint

  • Bay Watcher

I am sure that high-quality porn-generating AIs, with millions invested in training, will come very soon, replacing those amateurs who tweak existing AIs for those purposes.

And then many, many people in the adult industry will lose their jobs.
My favorite porn game site has been flooded with AI generated art games. And no weird hands.

Sorry  :'(

Hentai? Yep, Novel AI does those decently
Logged
They ought to be pitied! They are already on a course for self-destruction! They do not need help from us. We need to redress our wounds, help our people, rebuild our cities!

King Zultan

  • Bay Watcher

You guys realize that if the AI goes bad you could just smash the computer it's in with a hammer and kill the AI.
Logged
The Lawyer opens a briefcase. It's full of lemons, the justice fruit only lawyers may touch.
Make sure not to step on any errant blood stains before we find our LIFE EXTINGUSHER.
but anyway, if you'll excuse me, I need to commit sebbaku.
Quote from: Leodanny
Can I have the sword when you’re done?

Rolan7

  • Bay Watcher
  • [GUE'VESA][BONECARN]

Well yeah, but you'd need to land a critical hit to pull that off.  On the screen, obviously.
Logged
She/they
No justice: no peace.
Quote from: Fallen London, one Unthinkable Hope
This one didn't want to be who they was. On the Surface – it was a dull, unconsidered sadness. But everything changed. Which implied everything could change.

jipehog

  • Bay Watcher

But Michelangelo is porn, thus your basic premise is flawed.
Here lies the seed of calamity. The only way not to offend anyone is by sidestepping any controversy, but by doing so you enshrine marginalization, which itself causes offense. More broadly, there are already concerns about political bias in AI content, and calls for ideological censorship.


Fortunately, life is not a sci-fi movie and creating a sapient AI will require a concentrated effort. It won't be an accident, most likely. Thus I don't worry as I trust the people studying AI.

People studying AI are just people. Many of them are employed by profit-driven corporations, by governments with various ideologies, and by military research programs amid a global arms race. And just as we see in the field of medical research (and not just with animal testing), not all share the same ethical rules or exercise caution.

Otherwise, as noted by others, once we reach the domain of general AI, it is doubtful that we would be able to comprehend when the AI becomes more than a tool.
Logged

Quarque

  • Bay Watcher

Thus I don't worry as I trust the people studying AI.
I mostly trust the people working at OpenAI, but unfortunately many AI researchers are working for companies like Facebook, and I can totally see them creating a hazard born from purely profit-driven AI development. You could argue we've already seen an example of that, as AI figured out that the best way to keep people clicking is to feed them stories (true or not) that fill them with righteous anger at their political opponents, which has made political divisions deeper than they already were. Quite damaging.

And then we have people developing AI for the Chinese government, with explicitly evil goals. :-\
Logged

Starver

  • Bay Watcher

Fortunately, life is not a sci-fi movie and creating a sapient AI will require a concentrated effort. It won't be an accident, most likely. Thus I don't worry as I trust the people studying AI. If one were accidentally created anyway, I would say it should be terminated immediately. It would be morally equivalent to an abortion and thus okay for me.
Just hanging on this to clarify my POV, because I think my posts may seem ambiguous in this regard: I think it will take a concentrated effort to get to the point at which an accident is capable of producing a just-too-intelligent AI, but then it might just happen. Unnoticed? Unheeded? Unavoidably?

Logged

MaxTheFox

  • Bay Watcher
  • Лишь одна дорожка да на всей земле

It's not in a company or government's best interest to create a sapient AI. You don't need sapience to manipulate people. I am aware they can and do cause damage. But they are insidious, not stupid. Why shoot yourself in the foot by going down that path?

And besides, sapience, by my definition, isn't as nebulous as some of you may think, so it'll be possible to tell a sapient AI apart. If it has a continuous perception of the world (not just prompting), has a long-term memory and personality not just determined by its context, which can be altered on the fly (this is important: just finetuning doesn't count), and can learn whole new classes of tasks by doing so (from art to driving), then I'll consider an AI sapient. This is why I say it would require a concentrated effort. Adding all this by accident is in the realm of science fiction.

This is why, Starver, I think that our current style of AI development can never be sapient no matter how much data it is trained on. It could have a dataset of the whole Internet and a 128k-token context, and I'd still consider it just a tool. And this is why, Maximum Spin, your scenario doesn't hold water: an AI whose "brain" only works when prompted is not particularly dangerous and can be stumped by simply no longer sending prompts. Nor can it "wait" for anything, as it does not have a sense of time.

This does, however, open the question of what sapience is... and my definition might not be agreed upon by everyone. It is ultimately a philosophical concept that cannot be easily quantified. But I've settled on my definition; it's relatively clear-cut and includes any hypothetical aliens.
« Last Edit: March 26, 2023, 07:55:35 am by MaxTheFox »
Logged
Woe to those who make unjust laws, to those who issue oppressive decrees, to deprive the poor of their rights and withhold justice from the oppressed of my people, making widows their prey and robbing the fatherless. What will you do on the day of reckoning, when disaster comes from afar?

Maximum Spin

  • Bay Watcher
  • [OPPOSED_TO_LIFE] [GOES_TO_ELEVEN]

And besides, sapience, by my definition, isn't as nebulous as some of you may think, so it'll be possible to tell a sapient AI apart. If it has a continuous perception of the world (not just prompting), has a long-term memory and personality not just determined by its context, which can be altered on the fly (this is important: just finetuning doesn't count), and can learn whole new classes of tasks by doing so (from art to driving), then I'll consider an AI sapient.
There are a lot of objections I have to your post, but this is most important: How would you tell? An AI can have the capacity to do these things without showing you, just like you could pretend not to have those capacities if you wanted. Not being able to tell what capacities an AI has isn't reliant on those capacities being somehow nebulous, it's a result of the basic mathematical inability to determine what a sufficiently complex program (and 'sufficiently complex' is not very complex) does without simulating it - a result of the general impossibility of static analysis, if you know programming jargon. You cannot confirm whether a program meets any of these specifications without watching them happen in the output.
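
(A toy version of that static-analysis point in Python; the capability names are hypothetical. Whether the guarded branch can ever run depends on an opaque computation, so deciding it in general amounts to the halting problem, which is the content of Rice's theorem.)

Code:
# No general analyzer can read this source and decide "hidden_capability is
# never exercised" without effectively running mystery_computation, which can
# encode an arbitrarily hard, or non-halting, search. Names are hypothetical.
def mystery_computation(seed):
    n = seed                       # stand-in for any opaque computation: a Collatz walk
    while n != 1:
        n = 3 * n + 1 if n % 2 else n // 2
    return True                    # reached only if the walk terminates

def hidden_capability(prompt):
    return "something the spec says the model can't do"

def model(prompt):
    if not mystery_computation(len(prompt) + 27):
        # Dead code in practice, but proving that in general is undecidable.
        return hidden_capability(prompt)
    return "benign answer"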

I thought I was pretty clear about the limitations of current AI models making them unable to plan, so I don't know what it is about my "scenario" that doesn't hold water as a hypothetical. Still, ChatGPT (or whatever) not being able to plan is not a result of it not having a sense of time, but a result of it not having a memory, which is to say a persistent mutable internal state. If it had a persistent mutable internal state - and as I said, there are people who want to design this - it could iterate over that state every time it's run in such a way that changes the result of future runs. Certainly, I agree that it "can be stumped by simply no longer sending prompts", but just turning it off is a fully general solution to any AI if you can tell when it's become a problem. The whole point is that a hypothetical smarter, but still prompted version may start to become a problem and continue to get prompts and produce output for an indeterminate time.
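
(A minimal sketch of such a persistent mutable internal state: the same stateless "prompted" model, wrapped in a loop that saves its own notes and feeds them back in, behaves differently on every future run. The wrapper, file name, and model() stand-in are all hypothetical.)

Code:
import json
from pathlib import Path

STATE_FILE = Path("agent_state.json")

def model(prompt, notes):
    # Stand-in for any prompt-in/text-out system; returns (reply, new_note).
    reply = f"run #{len(notes)}: responding to {prompt!r}"
    return reply, f"saw {prompt!r}"

def run_once(prompt):
    notes = json.loads(STATE_FILE.read_text()) if STATE_FILE.exists() else []
    reply, new_note = model(prompt, notes)
    notes.append(new_note)                    # mutate the persistent state...
    STATE_FILE.write_text(json.dumps(notes))  # ...so every future run differs
    return reply

print(run_once("hello"))
print(run_once("hello"))   # same prompt, different output: the memory did that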

And of course these prompted AIs have a sense of time in a sense, since they don't simply calculate instantly when prompted - each time one runs, it performs a finite number of calculations that provide an intrinsic low-precision clock. There is absolutely no reason why an AI could not "learn" in training that operations performed after other operations must causally follow the operations that precede them. On a very low level, the AIs we have already act in ways that depend on that fact, like "producing words with letters in order and not randomly out of order". Like you, I assume they do not have what we would think of as awareness of causality, but that's a limitation of other properties of the model, not of being "prompted" specifically.
For the record, of course, I should point out that... you don't have a continuous perception of the world either. The fastest neurons in your brain only fire a couple hundred times a second, and there is pretty good evidence that certain high-frequency brain waves are produced by concerted 'reset signal' spikes that prompt your neurons to wipe out the context of what they were doing a moment ago and accept new sensory input in a sort of brain-tick.
Logged

MaxTheFox

  • Bay Watcher
  • Лишь одна дорожка да на всей земле

1. Well, of course you can't peek inside its head. But you can tell by the fact that you did not build a long-term memory or the capacity to self-train into the AI.

2. Well, that's kind of my point: the AI can't become sapient if you don't add a persistent memory to it. But I am also skeptical that a "turn-based" (for lack of a better word) AI could manipulate humans by itself unless it was, I suppose, trained to do so and to use psychology to keep users engaged. But considering those are language models with no access to anything except text, the worst they can realistically be used for is advanced spambots: basically automated con men that pretend to befriend people and push products on them. That is highly inconvenient and should probably be safeguarded against, but it's not exactly an apocalyptic threat. I will start fearing AI when it can do that and learn new classes of actions by itself.

3. This can be safeguarded against by testing the AI after training to verify it doesn't have a sense of time that it can express. And I am aware organic brains have a "clock"; it's just fast enough to be continuous by my standards. And it runs constantly.
Logged
Woe to those who make unjust laws, to those who issue oppressive decrees, to deprive the poor of their rights and withhold justice from the oppressed of my people, making widows their prey and robbing the fatherless. What will you do on the day of reckoning, when disaster comes from afar?