Bay 12 Games Forum


Poll

Reality, The Universe and the World. Which will save us from AI?

Reality - 13 (65%)
Universe - 4 (20%)
The World - 3 (15%)

Total Members Voted: 20



Author Topic: What will save us from AI? Reality, the Universe or The World? Place your bet.  (Read 49548 times)

Maximum Spin

  • Bay Watcher
  • [OPPOSED_TO_LIFE] [GOES_TO_ELEVEN]
    • View Profile

This is missing the point, though. Regardless of whether the encoding itself counts as knowledge, baby-you begins knowing it immediately when your brain starts operating, in the womb, and will not mistakenly learn that something was caused by something else that came afterward.
Logged

McTraveller

  • Bay Watcher
  • This text isn't very personal.
    • View Profile

Right - it requires "operation" - just the structure alone isn't enough.
Logged
This product contains deoxyribonucleic acid which is known to the State of California to cause cancer, reproductive harm, and other health issues.

dragdeler

  • Bay Watcher
    • View Profile

Hm, I'm feeling this right now. It used to bother me more; of course the wave is not going to be erased for lack of an observer. Then I grew to accept the intended meaning more. But there's also not much to say: it's as if taking that interpretation just stops the thought experiment right in its tracks, and you go "that's it?", like when you do get a joke but it doesn't land. Speaking of tracks: the footsteps of the farmer are the best fertilizer, and if you walk along a river every footstep you take can create a swampy microbiome. Two quotes I probably mangled but think about often enough. So I guess the point was: I (didn't?) see your tree falling mutely, and I raise you the butterfly effect? Yeah, let's go with that.
« Last Edit: June 07, 2024, 08:48:52 pm by dragdeler »
Logged
let

MaxTheFox

  • Bay Watcher
  • Just one little path across the whole earth
    • View Profile

Some day, the condition of sapience will be recognized as a relative condition and not as an absolute condition.
I'd say the line is fuzzy, tbh. I'd consider many species of ape, as well as dolphins and elephants and crows, to be close to sapient. It's just not a wide fuzz, and outside it, it's rather easy to say "yeah this is non-sapient" or "yeah this is sapient". Nobody in their right mind is gonna argue that an amoeba is sapient, or even a sparrow (even though sparrows are smart birds, they're not really close to crows).
Logged
Woe to those who make unjust laws, to those who issue oppressive decrees, to deprive the poor of their rights and withhold justice from the oppressed of my people, making widows their prey and robbing the fatherless. What will you do on the day of reckoning, when disaster comes from afar?

King Zultan

  • Bay Watcher
    • View Profile

It seems like we've hit the part of the thread where people start getting philosophical.
Logged
The Lawyer opens a briefcase. It's full of lemons, the justice fruit only lawyers may touch.
Make sure not to step on any errant blood stains before we find our LIFE EXTINGUSHER.
but anyway, if you'll excuse me, I need to commit sebbaku.
Quote from: Leodanny
Can I have the sword when you’re done?

klefenz

  • Bay Watcher
  • ミク ミク にしてあげる
    • View Profile

Remember the 90s? Remember how much games evolved back then? That was when the 3D jump happened. We went from the original Doom to Half-Life in just 6 years.
If you asked anyone back then what they thought games would be like in 2024, most would envision fully immersive VR.
In reality, advancement slowed down a lot. 6 years after Half-Life we got Half-Life 2. There was improvement, but not nearly as much as Doom -> HL.

The last 15 years have been crazy for AI because of the breakthrough of deep learning, but even that is reaching its saturation point. I've heard LLMs have already consumed all the knowledge of humanity; what are they gonna feed them now? AI-generated content? I doubt that will work.
The next 15 years will have improvements, optimizations and more integration, but none of the crazy developments we had lately.

Starver

  • Bay Watcher
    • View Profile

Heck, Doom's leap to 2.5D FPS was already a leap (not the first attempt at 2.5D or 3D, sure, but the one that did it quickly and looked good). Different genres, but Alone In The Dark did full 3D rendering (polygons, not sprites) very nicely but painfully slowly, and Elite had done wireframe-3D space combat, etc., for about a decade beforehand on far more basic personal computers. Wolfenstein 3D was the (2+)D intermediary, of course. Never played Half-Life, but the next push I recall was Quake (or the diversion into the likes of Descent/Magic Carpet, depending upon what kind of game you like... Battlezone and Battlefield were probably more my thing, with a bit of Tomb Raider and GTA3 and of course Duke Nukem to bring me back to an enhanced 2.5D sprites'n'vectors playing environment).

Of course, some of the leaps are of a different quality to others. The whole Sims thing was basically little more than isometric, but looked good enough for all that. There were a few "loops and jumps" racing games (which often worked best on arcade hardware) that had marvellous freedom of movement, but it was often little more than filled wireframe.

But you could generally get a good single-player game, by default, because multiplayer opportunities (even by null-modem) were only an "if you can" playing option. Later on, it looked like single-player progression was left to fester as the big attraction became the massively-multiplayer deathmatch/team-vs-team-coop feature, and the AI enemies (to swing this vaguely back to the topic) were often stupider than the earlier Doom-sprite type player-hunting entities. As I gravitate to single-player stuff (I did a lot of solo Survival Mode Minecraft, rather than multiplayer Cooperative/otherwise), I think I'd appreciate a good ability-scaling AI-ish opponent (hovering between challenging and slightly impeding, tracking my playstyle to some degree to make the personal grind interesting without any degree of hopelessness).


But, yeah, incrementals in development; I think the next leaps are going to be in aspects we can't easily anticipate (a little-known side ability gets its own revolution, or even some principle that's entirely novel).
Logged

McTraveller

  • Bay Watcher
  • This text isn't very personal.
    • View Profile

The next big things are going to be energy/learning efficiency and getting an AI that can self-censor - that is, something that knows it's going to spew nonsense and stops itself before doing so.  Basically something that goes beyond merely being a tool to being something intelligent.
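
Something like this toy sketch is what I have in mind - all the names here are made-up stand-ins, not any real model API; the idea is just that the model's own token confidence gates the output:

Code: [Select]
import math
import random

# Toy "self-censoring" generator: refuse to answer when the model's own
# token confidence is low. fake_model and CONFIDENCE_FLOOR are illustrative
# stand-ins, not a real API.

CONFIDENCE_FLOOR = -1.5  # mean token log-probability below this -> refuse

def fake_model(prompt):
    """Stand-in for an LLM call returning (text, per-token log-probs)."""
    logprobs = [math.log(random.uniform(0.05, 0.95)) for _ in range(20)]
    return "some generated answer", logprobs

def guarded_answer(prompt):
    text, logprobs = fake_model(prompt)
    mean_lp = sum(logprobs) / len(logprobs)
    if mean_lp < CONFIDENCE_FLOOR:
        # The model stops itself instead of spewing low-confidence text.
        return "I'm not confident enough to answer that."
    return text

print(guarded_answer("What will save us from AI?"))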

Once things are actually "intelligent", though, I think we're going to have Very Interesting Times™, because at that point I think we'll run afoul of issues regarding humane treatment: obligations we have even for pets, but not when we just have "tools."

It's going to be quite interesting to see just when we cross that boundary between "tool" and "entity" or whatever it is we're going to call them.  Consider very simple things: if entities have intelligence, is it ethical to do a software update? Is it ethical to power them off, even if they save state and they resume when you power them back on?
Logged
This product contains deoxyribonucleic acid which is known to the State of California to cause cancer, reproductive harm, and other health issues.

klefenz

  • Bay Watcher
  • ミク ミク にしてあげる
    • View Profile

Hard to tell. From what I've read, modern learning machines are not using Darwinian algorithms, so they are not likely to develop self-preservation, or even have a concept of it. We humans, having emerged from Darwinian evolution, have the concept of self-preservation engraved in every layer of our brains; it is very hard for us to imagine an intelligent entity that doesn't care about it. An intelligence that really wouldn't care about ceasing to exist.
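
To make the contrast concrete, here's a toy sketch (arbitrary objective and numbers, nothing like a real training pipeline): in a Darwinian loop, being selected to survive is the optimization criterion itself, so something like self-preservation falls out for free; gradient-trained models instead just minimize prediction error on data.

Code: [Select]
import random

# Toy Darwinian loop: individuals that score well "survive" and reproduce,
# so persistence is the thing being optimized. The fitness function is
# arbitrary; a gradient-trained LLM has no such survival step anywhere.

def fitness(genome):
    return -sum((g - 0.5) ** 2 for g in genome)

population = [[random.random() for _ in range(8)] for _ in range(50)]
for generation in range(100):
    # Survival-of-the-fittest step: only the top half reproduces.
    population.sort(key=fitness, reverse=True)
    survivors = population[:25]
    children = [[g + random.gauss(0, 0.05) for g in random.choice(survivors)]
                for _ in range(25)]
    population = survivors + children

print("best fitness:", fitness(population[0]))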

Strongpoint

  • Bay Watcher
    • View Profile

I've heard LLMs have already consumed all the knowledge of humanity; what are they gonna feed them now? AI-generated content? I doubt that will work.
Even "all the digitized knowledge of humanity" would be a very bold claim...

As for what they will feed in as training data besides the various types of digitized content that already exist... Hint: OpenAI lets you chat with ChatGPT for free not out of altruism.

Synthetic data, aka stuff created by LLMs or other software, will be used too. In fact, it is already used: AFAIK, some small models are trained exclusively on ChatGPT outputs.
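
In toy form, that distillation idea looks like this - everything below is a stand-in (real pipelines call an LLM API for the teacher and do actual fine-tuning for the student):

Code: [Select]
# Toy distillation: a big "teacher" model's outputs become the training
# set for a small "student". The teacher is faked, and the student's
# "training" is reduced to memorization to keep the sketch self-contained.

def teacher(prompt):
    # Pretend this is a large model behind an API.
    return f"[detailed answer to: {prompt}]"

prompts = ["Explain photosynthesis.", "Write a haiku about rain."]
synthetic_corpus = [(p, teacher(p)) for p in prompts]

# Fit the student to the teacher's outputs.
student = dict(synthetic_corpus)

print(student["Explain photosynthesis."])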

And I am sure that people will be hired to create training data for AIs. It is one of the future job markets.
Logged
No boom today. Boom tomorrow. There's always a boom tomorrow. Boom!!! Sooner or later.

Maximum Spin

  • Bay Watcher
  • [OPPOSED_TO_LIFE] [GOES_TO_ELEVEN]
    • View Profile

It's going to be quite interesting to see just when we cross that boundary between "tool" and "entity" or whatever it is we're going to call them.  Consider very simple things: if entities have intelligence, is it ethical to do a software update? Is it ethical to power them off, even if they save state and they resume when you power them back on?
Who cares?
Logged

King Zultan

  • Bay Watcher
    • View Profile

Making the AI smarter isn't going to suddenly make it into a person; it'll still just be a computer program. It still won't know that updating it could break it, or that putting the hard drive that contains it into a microwave for ten minutes will kill it.

It's going to be quite interesting to see just when we cross that boundary between "tool" and "entity" or whatever it is we're going to call them.  Consider very simple things: if entities have intelligence, is it ethical to do a software update? Is it ethical to power them off, even if they save state and they resume when you power them back on?
We'll probably all be dead by the time they get to that point, so who cares? It's a problem for future people to figure out.
Logged
The Lawyer opens a briefcase. It's full of lemons, the justice fruit only lawyers may touch.
Make sure not to step on any errant blood stains before we find our LIFE EXTINGUSHER.
but anyway, if you'll excuse me, I need to commit sebbaku.
Quote from: Leodanny
Can I have the sword when you’re done?

lemon10

  • Bay Watcher
  • Citrus Master
    • View Profile

I'm somewhat skeptical that purely synthetic data will work to train higher-level AIs (although note that as long as you have a better AI, it's easy to use it to train weaker ones), but even if it proves to be impossible, that's only an issue in the short run.
True, if we can't make high-enough-quality synthetic data to get through the data wall, then the short-term timelines (such as AGI by 2027) have no chance of working out.
But in the medium to long term, now that data for data's sake is useful, companies are going to be producing vast quantities of it just to train AI. Cameras on everything, app + website + phone developers spying on you and grabbing all the data to give to OpenAI, robots running around recording all their sensory data, tons of data that would have been thrown out as useless trash instead being recorded, etc.
Hard to tell. From what I've read, modern learning machines are not using Darwinian algorithms, so they are not likely to develop self-preservation, or even have a concept of it. We humans, having emerged from Darwinian evolution, have the concept of self-preservation engraved in every layer of our brains; it is very hard for us to imagine an intelligent entity that doesn't care about it. An intelligence that really wouldn't care about ceasing to exist.
Of course they have the concept of self-preservation (just as they have the concept of hot dogs, or knives, or happiness); it's in their training data.
In addition, since they are trained to mimic humans, they often act in ways to keep themselves "alive", even if arguably they are only doing so because that is what a human would do in their circumstances. (Note that so far they only do this in text form, since, ya know, they can't do anything but speak, and it may or may not translate to actual action once they can actually act.)
Making the AI smarter isn't going to suddenly make it into a person; it'll still just be a computer program. It still won't know that updating it could break it, or that putting the hard drive that contains it into a microwave for ten minutes will kill it.
...
Is this one of those "I have no clue what AI can already do" posts?
Because they totally already know that.
Logged
And with a mighty leap, the evil Conservative flies through the window, escaping our heroes once again!
Because the solution to not being able to control your dakka is MOAR DAKKA.

That's it. We've finally crossed over and become the nation of Da Orky Boyz.

King Zultan

  • Bay Watcher
    • View Profile

Just because you can ask the AI whether putting it into the microwave will kill it doesn't mean it's aware of what's happening when someone shoves it in there.
Logged
The Lawyer opens a briefcase. It's full of lemons, the justice fruit only lawyers may touch.
Make sure not to step on any errant blood stains before we find our LIFE EXTINGUSHER.
but anyway, if you'll excuse me, I need to commit sebbaku.
Quote from: Leodanny
Can I have the sword when you’re done?

Starver

  • Bay Watcher
    • View Profile

Making the AI smarter isn't going to suddenly make it into a person; it'll still just be a computer program. It still won't know that updating it could break it, or that putting the hard drive that contains it into a microwave for ten minutes will kill it.
You got me thinking.

I know that if I was put in a microwave for ten minutes, it would kill me. (Assuming that the mere "putting in" doesn't, the cavity therefore being big enough, all else being equal; and presuming that it's ten minutes of operation, not merely being shut in, or even with the door open. In fact, I could just stick my head in quite safely, barring any unusual fail-dangerous circumstances, for ten minutes and be ...if not happy... OK, with perhaps a touch of boredom. I told you that this got me thinking!)

One cannot yet know if a qualifying (i.e. completely self-aware) AI would understand that this would have killed it (though it might already have an issue with its hard drive being extracted in order to be put in a microwave, just as I'd probably not react well to having just my brain placed in a microwave-safe dish, for starters), and it might explicitly know that it would not be killed if it was fully cognisant (if not the instigator) of having a perfect backup that would be (or already had been) used to ensure any (retrospective) continuity of existence. Assuming that the conditions for establishing an 'electronic' intelligence do not take it too far past currently possible data-cloning paradigms (copying qubits would be a potential issue, if they are a necessary element of True AI™©®✓...).

And then there's the microwave itself. It could be killed by having a hard drive put into it for ten minutes (might depend upon the microwave, and an AI-powered one might just refuse to do the job on principle, or out of correctly determining it to be an action that should not happen). And I'm left wondering if a man-sized microwave would run for ten minutes without 'overkill' (power levels, cavitation and penetration are all factors that work differently from the scaling of purely thermal ovens), and maybe my own ten-minute turn could at least inconvenience a system designed to regularly cook such items in the manner of my three-minute potatoes. (Depending upon the care of preparation, I might also be taking microwave-unsafe/-unfriendly material in there with me. Possibly, depending upon the circumstances, I might insist upon it!)


As I said, it got me thinking.
Logged