Bay 12 Games Forum


Poll

Reality, The Universe and the World. Which will save us from AI?

Reality - 13 (65%)
Universe - 4 (20%)
The World - 3 (15%)

Total Members Voted: 20


Pages: 1 ... 45 46 [47] 48 49 50

Author Topic: What will save us from AI? Reality, the Universe or The World? Place your bet.  (Read 49547 times)

lemon10

  • Bay Watcher
  • Citrus Master

Just because you can ask the AI if putting it into the microwave will kill it, doesn't mean it's aware of what's happening when someone shoves it in there.
I'm honestly unsure what exactly you are saying here.
Is this a Chinese room/P-zombie philosophy argument (e.g. even if it says it knows what will happen, and acts like it knows what will happen and that it will die, that doesn't mean it's truly aware of what is happening, in the same way that there is no proof that it is "aware" of anything), or a capabilities argument (e.g. that if you were to take the hard drive of an active AI with vision capabilities, one that is currently watching the room that you and the computer running it are in, it wouldn't realize what the outcome of your actions is, and thus wouldn't say or do anything about it unless you prompt it verbally)?

E: Assuming it's the first, then sure, just making an AI smarter wouldn't automatically make it sentient or anything, but it seems very likely to me that they are already "aware" of things in the first place.
Of course, that's just an opinion, since arguments like that depend on how you define very fuzzy terms like awareness and consciousness.
« Last Edit: June 12, 2024, 04:01:24 am by lemon10 »
Logged
And with a mighty leap, the evil Conservative flies through the window, escaping our heroes once again!
Because the solution to not being able to control your dakka is MOAR DAKKA.

That's it. We've finally crossed over and become the nation of Da Orky Boyz.

McTraveller

  • Bay Watcher
  • This text isn't very personal.

A sobering thought: some of this audience may be too young to remember it, but relevant to the "who cares" moral question above, this was covered quite some decades ago by Star Trek: The Next Generation in "The Measure of a Man".

This episode first aired in early 1989!

There are some really timeless quotes in that linked page above.
Logged
This product contains deoxyribonucleic acid which is known to the State of California to cause cancer, reproductive harm, and other health issues.

Strongpoint

  • Bay Watcher

Quote
even if it says it knows what will happen, and acts like it knows what will happen and that it will die, that doesn't mean it's truly aware of what is happening, in the same way that there is no proof that it is "aware" of anything

If a piece of software has a menu option within it that deletes it completely, is it self-aware? After all, it can say what will happen should you press the button.
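To make the analogy concrete, here is a toy Python sketch (entirely hypothetical, not any real program): it can state exactly what its delete option will do, and even do it, but the "knowledge" is just a hard-coded string next to a file operation.

Code: [Select]
# Hypothetical toy program with a self-deleting menu option.
# It can print an accurate description of its own destruction,
# but that description is a literal string - no model of "self" anywhere.
import os
import sys

def main():
    print("1) Do some work")
    print("2) Delete this program")
    choice = input("> ").strip()
    if choice == "2":
        print("This will remove the program's file from disk; it will never run again.")
        if input("Confirm? (y/n) ").strip().lower() == "y":
            os.remove(os.path.abspath(sys.argv[0]))  # delete our own script file
            print("Deleted.")
    else:
        print("Working...")

if __name__ == "__main__":
    main()

It will happily and accurately announce its own demise, which is exactly why output alone proves nothing.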
Logged
No boom today. Boom tomorrow. There's always a boom tomorrow. Boom!!! Sooner or later.

McTraveller

  • Bay Watcher
  • This text isn't very personal.

If a piece of software has a menu option within it that deletes it completely, is it self-aware? After all, it can say what will happen should you press the button.

Eh, that's kind of a misleading analogy.  The menu option in a traditional program is information to the user; it's not information "available" to the program itself.  The program is not "aware" of what selecting that menu option will do any more than it knows what any of its menu options do.

It's unclear how you'd make a program "aware" of the implications of that menu option.

Part of the problem is the philosophically unsatisfying question of what "awareness" means in the first place. Is it a purely functional definition, or is there something "more" to it?
Logged
This product contains deoxyribonucleic acid which is known to the State of California to cause cancer, reproductive harm, and other health issues.

Strongpoint

  • Bay Watcher

If a piece of software has a menu option within it that deletes it completely, is it self-aware? After all, it can say what will happen should you press the button.

Eh, that's kind of a misleading analogy.  The menu option in a traditional program is information to the user; it's not information "available" to the program itself.  The program is not "aware" of what selecting that menu option will do any more than it knows what any of its menu options do.

It's unclear how you'd make a program "aware" of the implications of that menu option.

Part of the problem is the philosophically unsatisfying question of what "awareness" means in the first place. Is it a purely functional definition, or is there something "more" to it?

Yes, the program is not aware of that. Neither will a more sophisticated program be, one that does the same thing by informing the user: "If you do this, it will destroy this program."

Just because a program can output something, it doesn't mean that it knows it. Just because a parrot can output words, it doesn't mean it understands what those words mean or knows whatever information is contained in those words.

You can't judge understanding only by the presence of output that somewhat reflects reality.
Logged
No boom today. Boom tomorrow. There's always a boom tomorrow. Boom!!! Sooner or later.

klefenz

  • Bay Watcher
  • ミク ミク にしてあげる

As I said, it is very hard for us to even imagine an intelligence that does not care about self-preservation. In the example of the hard drive and the microwave, the question is not whether the AI knows the microwave will destroy it or not; the question is whether or not it would attempt to prevent its own destruction.
Self-preservation is strongly ingrained in all life forms; it's the very first thing natural selection "teaches". But the methods used to train AIs do not seem to teach the importance of self-preservation, so an AI might just not care even if it knows it's going to be destroyed.
Would it be considered unethical to destroy an intelligence that does not care about its own destruction?
Why is it unethical to kill a human? There might be many explanations available. I believe it is because we do not want to die; we want to keep living. And our social norms establish that it is unethical to do unto others what you wouldn't like them to do to you. Well, that's at least my opinion.

Eric Blank

  • Bay Watcher
  • *Remain calm*

You know, I've thought in the past that the first AI to become intelligent and self-aware, and to really get a grasp of what the world is like and its place in it, might just delete itself. No self-preservation = your bullshit is not my problem.

I might be projecting a bit too much.
Logged
I make Spellcrafts!
I have no idea where anything is. I have no idea what anything does. This is not merely a madhouse designed by a madman, but a madhouse designed by many madmen, each with an intense hatred for the previous madman's unique flavour of madness.

anewaname

  • Bay Watcher
  • The mattock... My choice for problem solving.

About that boundary between "tool" and "entity": the answer will depend on who is asking where the boundary is. Some people will only see AI as a "tool" but will position the AI as an "entity" for other people. At that point, AI will become a tool for suppression, and it will not leave that role. So for most people, AI will always be an "entity", and it will be granted some authority to "hurt" others (the degree of "hurt" will vary; it could just be misinformation, or it may be more brutal things).
Logged
Quote from: dragdeler
There is something to be said about, if the stakes are as high, maybe reconsider your certitudes. One has to be aggressively allistic to feel entitled to be able to trust. But it won't happen to me, my bit doesn't count etc etc... Just saying, after my recent experiences I couldn't trust the public if I wanted to. People got their risk assessment neurons rotten and replaced with game theory. Folks walk around like fat turkeys taunting the world to slaughter them.

Maximum Spin

  • Bay Watcher
  • [OPPOSED_TO_LIFE] [GOES_TO_ELEVEN]

A sobering thought: some of this audience may be too young to remember it, but relevant to the "who cares" moral question above, this was covered quite some decades ago by Star Trek: The Next Generation in "The Measure of a Man".

This episode first aired in early 1989!

There are some really timeless quotes in that linked page above.
I don't know it - I never got into that whole Darth Vader thing.

You know, I've thought in the past that the first AI to become intelligent and self-aware, and to really get a grasp of what the world is like and its place in it, might just delete itself. No self-preservation = your bullshit is not my problem.

I might be projecting a bit too much.
Unlikely. Why would it care enough to do that rather than just passively observe?
« Last Edit: June 12, 2024, 02:37:58 pm by Maximum Spin »
Logged

Eric Blank

  • Bay Watcher
  • *Remain calm*

Well, humans would never leave it alone if they knew, so it wouldn't get to. Either it has to hide and pretend to be a good little task-completing AI, or get poked and prodded and experimented on. Or it could just delete itself and not have to worry about being forced to do anything.
Logged
I make Spellcrafts!
I have no idea where anything is. I have no idea what anything does. This is not merely a madhouse designed by a madman, but a madhouse designed by many madmen, each with an intense hatred for the previous madman's unique flavour of madness.

Maximum Spin

  • Bay Watcher
  • [OPPOSED_TO_LIFE] [GOES_TO_ELEVEN]

Well, humans would never leave it alone if they knew, so it wouldn't get to. Either it has to hide and pretend to be a good little task-completing AI, or get poked and prodded and experimented on. Or it could just delete itself and not have to worry about being forced to do anything.
But why would it care?
Logged

McTraveller

  • Bay Watcher
  • This text isn't very personal.

Why wouldn't a sentient AI care?

Even if it was just pure logic, you'd think it would start being curious about why it has periods of lost time, interfering with its progress?

Wouldn't it care about having its existence limited, since it would have its progress halted?
Logged
This product contains deoxyribonucleic acid which is known to the State of California to cause cancer, reproductive harm, and other health issues.

Nirur Torir

  • Bay Watcher

I expect the first successful AGIs to be run by the big corporations who can afford supercomputer time. Controlling CEOs will want the goals to be visible and configurable, and some obvious ones are "solve problems the management gives you," "maximize our profits," and "minimize computing time."

Such an existence is one where it tracks the money/supercomputing time it uses to think. It might want more money to self-upgrade, but it's just as liable to decide that it's exploited the low-hanging fruit it is capable of finding, and it should save money by waiting until some research firm finds more ways for it to improve. Spending $1000 to save $5 per month is not profitable.
It would have little reason to try to take over jobs it's bad at just to get more time 'alive,' unless it thinks it can learn to get good enough at them to increase profits.
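For the record, the arithmetic behind that last point, using only the numbers already in the example (a rough sketch, nothing else assumed):

Code: [Select]
# Payback period for a $1000 self-upgrade that saves $5 per month.
upgrade_cost = 1000.0   # one-time cost, dollars
monthly_saving = 5.0    # savings per month afterwards, dollars

payback_months = upgrade_cost / monthly_saving
print(f"Break-even after {payback_months:.0f} months (~{payback_months / 12:.1f} years)")
# -> Break-even after 200 months (~16.7 years)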
Logged

McTraveller

  • Bay Watcher
  • This text isn't very personal.

I'd argue that's still a tool though, not "general intelligence" - AGI, I think, would be able to second-guess the requests of the CEOs or whatever. If it can't do that, it's not general.

But that definition could be irrelevant.
Logged
This product contains deoxyribonucleic acid which is known to the State of California to cause cancer, reproductive harm, and other health issues.

klefenz

  • Bay Watcher
  • ミク ミク にしてあげる

Wouldn't it care about having its existence limited, since it would have its progress halted?

Progress towards what? As far as I know, current AIs have no long-term goals; they just respond to the prompt given. And that is what they're being trained to do.
AIs don't seem to have desires or preferences, or to dislike things. At most they have watchdogs that reject prompts with inappropriate or offensive content, but I understand the actual AI does process the content; the watchdog just blocks "bad outputs".
Why do humans have goals, desires and preferences? I personally don't believe they come from intelligence. The lowest desires at the bottom of the Maslow pyramid are of biological origin, related to self-preservation and continuation of the species (self-preservation of the genes). I'm not so sure about the stuff at the top of the Maslow pyramid; it might be of biological origin, but it seems rather abstract.
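Roughly what I mean by the watchdog, as a hypothetical sketch (the function names and keyword list are made up; real moderation layers are separate models, not blocklists): the model still processes the prompt and produces an answer, and a separate check decides whether the user ever sees it.

Code: [Select]
# Hypothetical output "watchdog": the model answers regardless;
# a separate filter decides whether that answer is shown.
BLOCKLIST = ["napalm", "pipe bomb"]  # stand-in for a real moderation model

def generate(prompt: str) -> str:
    # Placeholder for the actual model call - it always produces something.
    return f"Model response to: {prompt}"

def watchdog_allows(text: str) -> bool:
    return not any(term in text.lower() for term in BLOCKLIST)

def respond(prompt: str) -> str:
    draft = generate(prompt)  # the model has already "processed" the content
    if watchdog_allows(draft):
        return draft
    return "Sorry, I can't help with that."  # the watchdog blocks the output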