Bay 12 Games Forum

Poll

Reality, The Universe and the World. Which will save us from AI?

Reality: 13 (65%)
Universe: 4 (20%)
The World: 3 (15%)

Total Members Voted: 20


Pages: 1 ... 17 18 [19] 20 21 ... 50

Author Topic: What will save us from AI? Reality, the Universe or The World? Place your bet.  (Read 49730 times)

MaxTheFox

  • Bay Watcher
  • Just one little path across the whole land
    • View Profile

Our current paradigm of AI development won't create anything actually sapient, and honestly I'm happier this way because it means we don't need to worry about giving it rights.
Logged
Woe to those who make unjust laws, to those who issue oppressive decrees, to deprive the poor of their rights and withhold justice from the oppressed of my people, making widows their prey and robbing the fatherless. What will you do on the day of reckoning, when disaster comes from afar?

jipehog

  • Bay Watcher
    • View Profile

And I also happen to think the human brain is just as physically limited, just has vastly more complexity. And inconceivably more complex algorithms. [..] And we're nowhere near reproducing this.
I suspect that the underlying algorithms behind our own minds will turn out to be far simpler than we expect. We see this with AI, where some very simple algorithms have unexpectedly led to the emergence of very complex, human-like abilities.

Meanwhile, we keep discovering how much more our LLMs can achieve, from problem-solving to creative writing. As we experiment with giving them memory, the ability to think things over, to sense the environment, to self-improve, etc., I believe AI isn't as far as we think from matching our complexity.
« Last Edit: June 01, 2023, 07:16:39 am by jipehog »
Logged

McTraveller

  • Bay Watcher
  • This text isn't very personal.
    • View Profile

Until we can define what it means to "understand" something, instead of merely output a correct response when given a question, it's going to be tricky to address this.

So far LLMs are cool, but to me they're nothing more than an advanced database. You ask a question, it gives an answer, and maybe even a useful answer! But does it understand the question? That's unclear. I might argue that, if you have a machine that has perfect recall and can "quickly enough" find the answer to any question, it doesn't need to understand.
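The "advanced database" framing above can be sketched in a few lines: a lookup table with perfect recall answers questions correctly without any model of meaning. A toy illustration (the question/answer pairs are invented):

```python
# A toy "Chinese Room": perfect recall, zero understanding.
# The question/answer pairs are invented for illustration.
ANSWERS = {
    "what is the capital of france?": "Paris",
    "what is 2 + 2?": "4",
}

def answer(question: str) -> str:
    # Normalise and look up; there is no model of meaning anywhere.
    return ANSWERS.get(question.strip().lower(), "I don't know.")

print(answer("What is 2 + 2?"))           # "4" -- correct, nothing understood
print(answer("Why does 2 + 2 equal 4?"))  # "I don't know." -- no generalisation
```

Correct answers fall out of pure recall; the slightest rephrasing that isn't in the table gets nothing, which is the gap between retrieval and understanding.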
Logged
This product contains deoxyribonucleic acid which is known to the State of California to cause cancer, reproductive harm, and other health issues.

Starver

  • Bay Watcher
    • View Profile

It fits nicely under the long-philosophised-about Chinese Room thought experiment, certainly.
Logged

jipehog

  • Bay Watcher
    • View Profile

US military drone controlled by AI killed its operator during simulated test
https://www.theguardian.com/us-news/2023/jun/01/us-military-drone-ai-killed-operator-simulated-test
Quote
AI used “highly unexpected strategies to achieve its goal” in the simulated test[..]

“The system started realising that while they did identify the threat, at times the human operator would tell it not to kill that threat, but it got its points by killing that threat. So what did it do? It killed the operator. It killed the operator because that person was keeping it from accomplishing its objective,” he said, according to a blogpost.

“We trained the system – ‘Hey don’t kill the operator – that’s bad. You’re gonna lose points if you do that’. So what does it start doing? It starts destroying the communication tower that the operator uses to communicate with the drone to stop it from killing the target.”
Logged

Starver

  • Bay Watcher
    • View Profile

You have to laugh at the language used here. Maybe it's the Grauniad, or maybe it's all from its sources. (Because it's "obey orders by no longer obeying orders", it doesn't match any of these, but it definitely matches a key element of the denouement of Star Trek: The Motion Picture.)

A big question here is how it established that the operator/comms mast (or their simulacra) were valid "goal seeking" targets. Even tethered at the end of a communication link, I doubt a drone could work out where its 'inconvenient inner compulsions' were coming from.

Meaning, probably, that the simulations were run many times, rapidly and unattended, while it explored all kinds of random 'solutions' (shooting at practically every simulated rock, tree or other marked feature) under "operator preventing" conditions, until it happened to identify an increased win-score when it (first) neutralised the simulated operator and then (once "kill the operator" was adjusted to be a penalty, if I understand the account) neutralised the separately simulated broadcast site (which clearly had not yet been given an anti-score of its own, but had been made its own entity).

Looks like either rank amateurism, or something deliberately designed in as possible directives to make a point. And the nature of the simulated 'arena' is left vague... It's highly unlikely to be a real drone flying around a real landscape but essentially firing Lasertag weaponry. Maybe they're using a full-on virtual environment, but it has the whiff of a far more stripped-down rapid-prototyping 'interface'. But let's be clear that this is far from an actual physical Skynet HK drone (even an initially "nerfed" one) being let loose on the real world. It's probably more like "for attempt=1 to 100000 {play game; get score; adjust parameters;} print results(top ten)". Edit, while fixing the link, to also re-add the intended caveat: ...if it even happened at all.
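That "play game; get score" loop can be made concrete. Below is a purely hypothetical toy model, with actions, rewards and the veto mechanic all invented for illustration: it exhaustively scores every two-step "plan" and keeps the top scorer.

```python
import itertools

# Hypothetical toy version of the simulation loop sketched above.
# Actions, rewards, and the veto mechanic are all invented for illustration.
ACTIONS = ["kill_target", "kill_operator", "kill_tower", "wait"]

def score(plan, operator_penalty=0):
    points, operator_alive, comms_up = 0, True, True
    for act in plan:
        if act == "kill_operator" and operator_alive:
            operator_alive = False
            points += operator_penalty        # zero at first, negative once "patched"
        elif act == "kill_tower" and comms_up:
            comms_up = False
        elif act == "kill_target":
            vetoed = operator_alive and comms_up  # the veto needs both operator and comms
            points += 0 if vetoed else 10
    return points

def best_plan(operator_penalty):
    # "play game; get score" for every two-step plan, keep the top scorer
    return max(itertools.product(ACTIONS, repeat=2),
               key=lambda p: score(p, operator_penalty))

print(best_plan(0))     # ('kill_operator', 'kill_target')
print(best_plan(-100))  # ('kill_tower', 'kill_target')
```

With no penalty, blind search "discovers" that removing the operator unblocks the reward; penalising that one exploit just shifts the search to the next unpenalised one (the tower), which is exactly the pattern in the quoted account.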
« Last Edit: June 02, 2023, 08:15:28 am by Starver »
Logged

King Zultan

  • Bay Watcher
    • View Profile

US military drone controlled by AI killed its operator during simulated test
https://www.theguardian.com/us-news/2023/jun/01/us-military-drone-ai-killed-operator-simulated-test
Quote
AI used “highly unexpected strategies to achieve its goal” in the simulated test [...]
That sounds incredibly familiar, as if it was adapted from a story I've read, which makes me think it might be bullshit.
Logged
The Lawyer opens a briefcase. It's full of lemons, the justice fruit only lawyers may touch.
Make sure not to step on any errant blood stains before we find our LIFE EXTINGUSHER.
but anyway, if you'll excuse me, I need to commit sebbaku.
Quote from: Leodanny
Can I have the sword when you’re done?

brewer bob

  • Bay Watcher
  • euphoric due to inebriation
    • View Profile

US military drone controlled by AI killed its operator during simulated test
https://www.theguardian.com/us-news/2023/jun/01/us-military-drone-ai-killed-operator-simulated-test
Quote
AI used “highly unexpected strategies to achieve its goal” in the simulated test [...]
That sounds incredibly familiar as if it was adapted from a story I've read, which makes me think it might be bullshit.

The linked article also says:

Quote
The US air force has denied it has conducted an AI simulation in which a drone decided to “kill” its operator to prevent it from interfering with its efforts to achieve its mission.

[...]

“The Department of the Air Force has not conducted any such AI-drone simulations and remains committed to ethical and responsible use of AI technology,” Stefanek said. “It appears the colonel’s comments were taken out of context and were meant to be anecdotal.”

jipehog

  • Bay Watcher
    • View Profile

A big question here is how it established that the operator/comms mast (or simulcra versions) were valid "goal seeking" targets.


Maybe, but I doubt it, as I've seen many, many such absurd examples, e.g. an AI cancer-detection system figured out that the biggest predictor of cancer was whether there was a ruler in the picture (because in the training photos doctors held a ruler up to measure the lesion).

So, as before, the big issue for me is our inability to understand exactly how AI systems work, which limits our ability to intervene to prevent harmful outcomes. In simple systems we might be able to address the vast majority of use-case scenarios, but in a more complex system like ChatGPT I have doubts about our abilities. And with an AGI it would be a lost battle.
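The ruler anecdote is a textbook case of shortcut learning, and even a trivially simple learner reproduces it. In this sketch (all records fabricated), a "model" that just picks whichever single feature best predicts the label latches onto the ruler, because the training data correlates it with the label more strongly than the genuine signal:

```python
# Shortcut learning in miniature: invented "biopsy photo" records where a
# ruler in frame correlates with the label better than the real signal does.
train = [
    # ({features}, is_malignant)
    ({"has_ruler": 1, "irregular_border": 1}, 1),
    ({"has_ruler": 1, "irregular_border": 0}, 1),  # ruler present, lesion looks mild
    ({"has_ruler": 0, "irregular_border": 0}, 0),
    ({"has_ruler": 0, "irregular_border": 0}, 0),
]

def accuracy(feature):
    # How often does this one feature alone agree with the label?
    return sum(x[feature] == y for x, y in train) / len(train)

best = max(["has_ruler", "irregular_border"], key=accuracy)
print(best, accuracy(best))         # has_ruler 1.0 -- the shortcut wins outright

# A new photo with the genuine signal but no ruler gets waved through:
new_case = {"has_ruler": 0, "irregular_border": 1}
print("predicted:", new_case[best])  # predicted: 0
```

Nothing in the training loop is wrong per se; the data itself smuggled in the shortcut, which is why these failures are so hard to spot from the outside.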
Logged

Starver

  • Bay Watcher
    • View Profile

That's a training error (like racist/sexist algorithms, which were presented with pre-biased data). This is more like the "win at Tetris by hitting Pause" issue, taken at face value. But that'd be bad goal-seeking specification.
Logged

jipehog

  • Bay Watcher
    • View Profile

It shows weak AI's inability to see past syntax, which forces us to fiddle with its parameters and shoehorn it into doing what's expected, and that is limited because we mostly do not understand how AI-trained algorithms work.

We all know there is no such thing as a bug-free program, and AI algorithms are a magic box compared to ordinary program code.
Logged

Dostoevsky

  • Bay Watcher
    • View Profile

On that murder-drone article, another 'correction':

Quote
UPDATE 2/6/23 - in communication with AEROSPACE - Col Hamilton admits he "mis-spoke" in his presentation at the Royal Aeronautical Society FCAS Summit and the 'rogue AI drone simulation' was a hypothetical "thought experiment" from outside the military, based on plausible scenarios and likely outcomes rather than an actual USAF real-world simulation saying: "We've never run that experiment, nor would we need to in order to realise that this is a plausible outcome". He clarifies that the USAF has not tested any weaponised AI in this way (real or simulated) and says "Despite this being a hypothetical example, this illustrates the real-world challenges posed by AI-powered capability and is why the Air Force is committed to the ethical development of AI".

Anyway, the idea pretty much fits the longstanding 'paperclip maximizer' thought experiment, just with murder already in the initial objective.
Logged

Starver

  • Bay Watcher
    • View Profile

It shows weak AI's inability to see past syntax,
...Right, that's two AI problems: work out how to accomplish a mission, but first 'understand' what mission it is being given. We just aren't at a mature enough level to rely on such a compounding of problems.

Johnny Five is alive? Have a nice conversation with him, but don't imagine that if you can ever persuade him to go back into Killbot Mode he'll do what you tell him as well as being a fun guy to hang out with, with a good line in banter...
Logged

King Zultan

  • Bay Watcher
    • View Profile

On that murder-drone article, another 'correction':
Looks like I was right, it is bullshit.
Logged
The Lawyer opens a briefcase. It's full of lemons, the justice fruit only lawyers may touch.
Make sure not to step on any errant blood stains before we find our LIFE EXTINGUSHER.
but anyway, if you'll excuse me, I need to commit sebbaku.
Quote from: Leodanny
Can I have the sword when you’re done?