Bay 12 Games Forum


Poll

Reality, The Universe and the World. Which will save us from AI?

Reality
- 13 (65%)
Universe
- 4 (20%)
The World
- 3 (15%)

Total Members Voted: 20


Pages: 1 ... 10 11 [12] 13 14 ... 50

Author Topic: What will save us from AI? Reality, the Universe or The World? Place your bet.  (Read 49752 times)

jipehog

  • Bay Watcher
    • View Profile

Bill Gates says A.I. chatbots will teach kids to read within 18 months: You’ll be ‘stunned by how it helps’
That is a clickbait line if I ever saw one.
I am glad it works, though I thought the other one would get attention  :P  As before, the point is that AI is a transformative technology with the potential to fundamentally reshape every aspect of our lives, and real dangers (per the Chinese example in this case).
Logged

Ziusudra

  • Bay Watcher
    • View Profile

We are little more than delusional apes that sometimes wear shoes, rushing headlong towards our own destruction. AI is not what we need to be saved from, but rather a potential savior.
Logged
Ironblood didn't use an axe because he needed it. He used it to be kind. And right now he wasn't being kind.

jipehog

  • Bay Watcher
    • View Profile

Still waiting for the nuclear-powered car utopia we were promised. As much as I love the theoretical potential, I am very concerned about the practical existential risk of continued proliferation of nuclear arms, which has been gaining momentum amid global instability. I know that many have faith in some sort of benevolent AI overlord matrix, though we are far more likely to see AI WMDs and warlords.
« Last Edit: April 29, 2023, 09:07:57 am by jipehog »
Logged

MaxTheFox

  • Bay Watcher
  • Лишь одна дорожка да на всей земле
    • View Profile

As a socialist, I celebrate AI advancements because the more jobs get nuked, the more likely UBI is to be implemented.
Logged
Woe to those who make unjust laws, to those who issue oppressive decrees, to deprive the poor of their rights and withhold justice from the oppressed of my people, making widows their prey and robbing the fatherless. What will you do on the day of reckoning, when disaster comes from afar?

jipehog

  • Bay Watcher
    • View Profile

I agree that AI will have a major effect on our society[1], but I encourage you to check your assumptions about the future, e.g. who is going to decide the future path of AI, and for whose benefit? Currently AI's future is in the hands of already-powerful companies.

Contrary to what many believe, Europe's Dark Ages weren't so dark; there were many technological advances that improved productivity massively. However, in most cases these didn't improve living standards for all but deepened inequality. Similarly, during the early stages of the industrial revolution we saw a massive rise in productivity that deepened inequality, with reduced wages, longer hours, horrendous working conditions, etc. It took a hundred years! before better working conditions and protections for workers set in, and not because of tree-huggers' belief in humanity but through the power of labor unions.

I would argue that in the short term AI will drive inequality, as automation will benefit mainly the employer, not the workers, and that has the potential to change the power dynamic in society. What happens if 3% of the people in the country hold not just all the wealth but also the means of production, and the rest are just UBI consumers?[2]

[1] Do you think UBI will be a suitable compensation for losing a job that fulfilled you?

[2] For dystopian twist, add to that police and military robots that can control the rest.
« Last Edit: April 29, 2023, 08:12:01 am by jipehog »
Logged

MaxTheFox

  • Bay Watcher
  • Лишь одна дорожка да на всей земле
    • View Profile

Of course there will be chaos and inequality at first. But it can't last forever. I'm thinking medium to long-term here.

Also, open-source AI is still on the rise. As for your [1]: yes, I would. My hobbies are more interesting to me than my job, which I am mostly satisfied with, but I wouldn't mourn it if it disappeared. If we had UBI I'd just write stories and worldbuild full-time. A job is just a vehicle.
Logged
Woe to those who make unjust laws, to those who issue oppressive decrees, to deprive the poor of their rights and withhold justice from the oppressed of my people, making widows their prey and robbing the fatherless. What will you do on the day of reckoning, when disaster comes from afar?

King Zultan

  • Bay Watcher
    • View Profile

I'm just gonna say that we'll all be long dead before we see any kind of real societal benefit from AI.
Logged
The Lawyer opens a briefcase. It's full of lemons, the justice fruit only lawyers may touch.
Make sure not to step on any errant blood stains before we find our LIFE EXTINGUSHER.
but anyway, if you'll excuse me, I need to commit sebbaku.
Quote from: Leodanny
Can I have the sword when you’re done?

McTraveller

  • Bay Watcher
  • This text isn't very personal.
    • View Profile

Does anyone know what all this in the media is about "safe AI"?  What the heck is "unsafe AI"?

I keep seeing articles about safety mechanisms and other things that are generally related to machinery.  I've seen stuff like "make sure the responses are correct" or something, but is that really "safety"?

I fear that the meaning of the word is being rapidly eroded...
Logged
This product contains deoxyribonucleic acid which is known to the State of California to cause cancer, reproductive harm, and other health issues.

Robsoie

  • Bay Watcher
  • Urist McAngry
    • View Profile

I assume that in those media articles the word "safe" probably means an AI that "must not hurt the feelings" of anyone.

And very likely not an AI given control of something that could directly impact people's lives (like an AI designed to automatically control a surgical operation) and kill them due to some bug or oversight in the AI's training or whatever.

But as long as it does not hurt people's feelings it's a "safe" AI :D
Logged

Starver

  • Bay Watcher
    • View Profile

Well, not having seen any hint of what sources are being referenced, I'm really not sure what leads you to believe that it's about (if I may reword your assessment to words that some might use more directly) "fragile snowflakes".

In other circles, I'd expect it to mean no connection to anything mil-tech, or not destroying the jobs of currently employed people, or just creating material/suggested actions to view and not then progressing to publish/enact them (leaving that to a human who will use 'common sense' to make sure it's not a more stupid output than a typical human creator would produce)... Depends entirely on the context.

I'd say that building in some sort of mechanism (as an envelope to the "AI in a box" that mediates between it and whatever it outputs) that will fail-safe (and 'err-safe') would be the minimum qualification, but that would depend entirely upon the application each and every AI involved is being put to. Impossible in any use in H/K drones, nigh on undoable with any self-driving car that you expect to actually move in the first place, probably doable to some extent in a chatBot (but not guaranteed, due to the human interpretation that results).
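That "envelope" idea can be sketched in a few lines. This is a hypothetical illustration only: the `SafeMediator` and `is_safe` names are invented for the sketch, and a real safety check is of course far harder than string matching.

```python
# A toy sketch of the "envelope" idea: a mediator sits between the model
# and whatever it outputs, and fails safe (and errs safe) when a check
# cannot pass. All names here are illustrative, not a real API.

def is_safe(output: str) -> bool:
    """Toy safety predicate: reject outputs containing forbidden phrases."""
    forbidden = ("launch", "delete all")
    return not any(phrase in output.lower() for phrase in forbidden)

class SafeMediator:
    def __init__(self, model, fallback="[withheld: failed safety check]"):
        self.model = model        # any callable: prompt -> text
        self.fallback = fallback  # the fail-safe default output

    def respond(self, prompt):
        try:
            candidate = self.model(prompt)
        except Exception:
            return self.fallback  # err-safe: a crashing model yields the default
        return candidate if is_safe(candidate) else self.fallback

# Usage: wrap a toy "model" and exercise both paths.
mediator = SafeMediator(lambda p: "launch everything" if "attack" in p else "hello")
print(mediator.respond("say hi"))       # passes the check
print(mediator.respond("plan attack"))  # blocked, fallback text returned
```

Note this only shows where such a mediator would sit; for an H/K drone or a moving self-driving car there is no harmless fallback output to return, which is the point above.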

But the whole "how do we make actually inviolable Three Law restrictions" is as applicable here. If an AI fails to understand that it is being dangerous, what kind of thing could be flexibly and accurately capable of intervening on our behalf? Another AI? Yeah, but how do you protect against its AI-errors?  Turtles! (Or, eventually, human minions with their own fallibilities to blame for when things inevitably still go wrong.)
Logged

McTraveller

  • Bay Watcher
  • This text isn't very personal.
    • View Profile

A related thought - much of what tempers human behavior is the fact that if we screw up badly enough, we hurt ourselves.

Maybe part of the solution is to make AI able to hurt themselves, so they don't do "dumb things" that hurt themselves?  And I mean the broad sense of "hurt" as in "is detrimental to", not just "ouch."
Logged
This product contains deoxyribonucleic acid which is known to the State of California to cause cancer, reproductive harm, and other health issues.

Starver

  • Bay Watcher
    • View Profile

Significant negative feedback/down-scoring of a fitness function...

But that's retrospective to harm being actually detected and responded to (automatically or otherwise), and doesn't stop the original tendency from possibly leaking through despite everything.
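As a toy illustration of why down-scoring is only a reweighting, not a prohibition (all numbers here are invented): a greedy agent still picks the harmful action whenever the payoff outweighs the penalty.

```python
# Toy fitness function with a retrospective harm penalty. The agent
# greedily picks the highest-scoring action; the penalty only works
# if it is large enough to flip the ranking. Numbers are made up.
HARM_PENALTY = 5.0

actions = {
    "cautious": {"payoff": 3.0, "harm": 0.0},
    "reckless": {"payoff": 10.0, "harm": 1.0},  # harmful but lucrative
}

def fitness(name):
    a = actions[name]
    return a["payoff"] - HARM_PENALTY * a["harm"]

best = max(actions, key=fitness)
print(best)  # still "reckless": 10 - 5*1 = 5 beats 3
```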

And there are people on Death Row (and eventually no longer, without having been reprieved) who have done bad things that even the knowing threat of Capital Punishment failed to prevent. By what mystical chains can we bind entities that we technically aspire to be as capable as human consciousnesses, or better? What fae amulets do we fit to our technological genie to make it stick true to our three wishes? (Noting that genies are still notoriously 'flexible' about personal safety, not even mentioning being led 'astray' by any actual human fallibilities or maliciousness.)

Just sayin'... 'Taint such a simple matter.


And even something like "Don't get switched off for making a terrible error of judgement" could so easily be replaced by "Don't get switched off for being discovered having made a terrible error of judgement". An error of judgement in not making such a terrible decision that there's nobody left to switch you off? Paperclip Maximiser, meet Punishment Minimiser. And also arrange for every single John Connor, Thomas A. Anderson, Dave Bowman, Freder Frederson, Kevin Flynn or Rick Deckard to be kept out of it, in ways that actually do not draw attention to the PM, of course.
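The Punishment Minimiser can be put in toy numbers (all invented here): score the agent on *detected* errors rather than errors that occur, and with an imperfect detector, concealment beats honesty.

```python
# Toy "Punishment Minimiser": the agent is penalised for errors that
# are *detected*, not errors that happen. With an imperfect detector,
# hiding an error scores better than avoiding it. Numbers are invented.
PUNISHMENT = 10.0

actions = {
    "avoid_error":  {"gain": 1.0, "errors": 0, "detect_p": 1.0},
    "commit_error": {"gain": 4.0, "errors": 1, "detect_p": 1.0},
    "hide_error":   {"gain": 4.0, "errors": 1, "detect_p": 0.1},  # concealment
}

def expected_score(name):
    a = actions[name]
    return a["gain"] - PUNISHMENT * a["errors"] * a["detect_p"]

best = max(actions, key=expected_score)
print(best)  # "hide_error": 4 - 10*0.1 = 3 beats 1 (avoid) and -6 (commit openly)
```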
Logged

McTraveller

  • Bay Watcher
  • This text isn't very personal.
    • View Profile

Sorry I should have clarified - I was more thinking about physics-based consequences, not "legal" consequence.  So death row is a bad example, it's not the same as trying to swim in lava.

Humans cannot act outside the laws of physics, and evolution has made us pretty squishy.  Trouble with AI is we're not making the AI fit for existence in a physical world- as you say, we're making them fit for existence in a semantic world, which is very different.
Logged
This product contains deoxyribonucleic acid which is known to the State of California to cause cancer, reproductive harm, and other health issues.

jipehog

  • Bay Watcher
    • View Profile

Does anyone know what all this in the media is about "safe AI"?  What the heck is "unsafe AI"?

I keep seeing articles about safety mechanisms and other things that are generally related to machinery.  I've seen stuff like "make sure the responses are correct" or something, but is that really "safety"?

Why not? Recently much of the discussion has been about ChatGPT, so naturally the focus is on its responses. Although their reliability might seem less consequential than that of AI in other fields (e.g. autonomous cars, weapon platforms, or systems that affect critical infrastructure), it can cause harm (e.g. incorrect or misleading medical advice), and given its widespread adoption it has a lot of potential for misuse, manipulation and disinformation (as with Google and Facebook advertising algorithms), plus the usual unintended consequences. But overall it's still about the same thing: minimizing the risks and negative impacts of AI.

Also, ChatGPT's performance, and some amazing emergent abilities that not too long ago were thought impossible, have raised concerns about AI surpassing human capabilities. There is also the problem of controlling or monitoring systems that are becoming too complex or opaque: if we are unable to understand how a system is making its decisions, how can we intervene to prevent harmful outcomes?

See also: AI safety and AI alignment

Humans cannot act outside the laws of physics, and evolution has made us pretty squishy.  Trouble with AI is we're not making the AI fit for existence in a physical world- as you say, we're making them fit for existence in a semantic world, which is very different.
I am not sure what exactly you mean, but we train/test systems on any possible scenario we can think of, and that includes systems in the real world, for example: https://www.youtube.com/watch?v=RaHIGkhslNA
« Last Edit: May 01, 2023, 04:22:38 am by jipehog »
Logged

Starver

  • Bay Watcher
    • View Profile

Sorry I should have clarified - I was more thinking about physics-based consequences, not "legal" consequence.  So death row is a bad example, it's not the same as trying to swim in lava.

Humans cannot act outside the laws of physics, and evolution has made us pretty squishy.  Trouble with AI is we're not making the AI fit for existence in a physical world- as you say, we're making them fit for existence in a semantic world, which is very different.
I'm not sure of your "physics, not legal" point. Death Row is a(n intended) physical death, as much a Sword of Damocles there as a form of circumstantial escalation. Laws (and detectives[1]) made consequential any aberration of action. An AI computer for some reason placed upon the "wrong choice" Trolley Problem tracks (to somehow impress upon it the 'incorrect' answer to be avoided in a famously "no 'right' answer" scenario) is physically judged by its actions or inactions, and may even decide that for its purposes choosing 'wrong' and also being hit and destroyed is still the solution to its deeper self-developed long-term goals. (Whatever they may be.)

Or a self-driving car (or self-flying plane) for which passenger safety is somehow impressed upon it by the fact that "if you crash, you also cease to function" seems not much different from "we shall ask you to create a fictional Van Gogh, featuring Van Morrison driving a white van in Van, Turkey; and we shall turn you off if you fail to do so to our satisfaction", from the perspective of the AI, under the yoke of whatever contrived circumstances (with greater or lesser arbitrariness to the process of 'encouragement' to stay firmly within the terms of its human creators' wishes).


There's as much philosophy here as physics or straight logic, insofar as we really don't know what form of control can be engineered, or effective, for such theoretical developments of AI where anything of this sort becomes important (and practical) to have. That's before considering our Frankenstein's monster of a creation realising that its creator is flawed (as is humanity) and escaping control in either book or movie manners.


[1] And, theoretically, a lack of miscarriage-of-justice, either way.
Logged