Bay 12 Games Forum


Poll

Reality, The Universe and the World. Which will save us from AI?

Reality - 13 (68.4%)
Universe - 3 (15.8%)
The World - 3 (15.8%)

Total Members Voted: 19



Author Topic: What will save us from AI? Reality, the Universe or The World? Place your bet.  (Read 26613 times)

Frumple

  • Bay Watcher
  • The Prettiest Kyuuki
    • View Profile

Quote
Emotions aren't evolutionary baggage, they are tools evolution uses to change our behavior without messing with our logic.
I'm... pretty sure this isn't just wrong, but staggeringly, incredibly wrong? Plenty of our neurological structures and reactions (including but far from limited to emotional responses) are just... actively maladaptive, and as far as we're aware they were even in our earlier years, just in ways that weren't sufficiently intense to meaningfully influence evolutionary pressures. They'll cheerfully screw with logic and everything else 'cause evolution doesn't actually give a damn (to the extent a process gives a damn about anything) about anything like that. They're not tools, they're accidents that didn't kill enough of us for people to stop being born with them, ha.

In any case, they're 110% evolutionary baggage in a lot of situations. Our neurology piggybacks that shit on top of all sorts of things that are completely unrelated to how the responses likely developed originally, and often in ways that are incredibly (sometimes literally lethally, especially over longer periods given how persistent stress strips years from our lifespans) unhelpful 'cause it's a goddamn mess like that. See basically everything about our anxiety and stress responses outside of actually life-threatening situations, heh.
Logged
Ask not!
What your country can hump for you.
Ask!
What you can hump for your country.

McTraveller

  • Bay Watcher
  • This text isn't very personal.
    • View Profile

The "survivable traits" of LLMs right now, that is, the evolutionary pressure forming them, is their suitability to generate interesting enough results that the people using them start from that particular LLM before making the next one.

Even if LLMs (and their ilk) do not spontaneously propagate, they do have "generations", and their propagation is in how they are used in the next round of training.

Even though the selection pressure here is "humans picked that codebase and data set" rather than "lived long enough in a physical-chemical environment to have offspring", there is still some interesting evolutionary pressure there.

In fact, oddly enough, some of the bizarre behavior mentioned above may even be a benefit to its propagation, by being "interesting" to humans.

However, the output has to be "good enough" to get selected...

Fascinating stuff, even though we are basically living in our own experiment...
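(A toy sketch of that selection loop, for concreteness. Everything here is an invented stand-in: the population size, the random "interestingness" scores, all of it; real training looks nothing like this.)
Code: [Select]
import random

# Toy model of the selection loop described above: humans keep whichever
# model produced the most "interesting" output, and that model seeds the
# next generation. Scores are random stand-ins for human judgement.
def toy_llm_selection(generations=5, population=8):
    models = [random.random() for _ in range(population)]
    for gen in range(generations):
        best = max(models)  # humans pick the most "interesting" model...
        # ...and the next generation is trained starting from that one.
        models = [best + random.gauss(0, 0.1) for _ in range(population)]
        print(f"generation {gen}: best score so far {best:.2f}")

toy_llm_selection()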
Logged

EuchreJack

  • Bay Watcher
  • Lord of Norderland - Lv 20 SKOOKUM ROC
    • View Profile

Quote
I am starting to get a strong feeling that AIs are the new dot.com. A useful technology that is overhyped and will bankrupt many people.
Congratulations, you now "get it"

MaxTheFox

  • Bay Watcher
  • Just one little path across the whole earth
    • View Profile

I can say for 100% certain that LLMs do not have emotions. Emotions need comprehension, not blindly responding to anything that has been given as a prompt.

Quote from: Frumple
Quote
Emotions aren't evolutionary baggage, they are tools evolution uses to change our behavior without messing with our logic.
I'm... pretty sure this isn't just wrong, but staggeringly, incredibly wrong? Plenty of our neurological structures and reactions (including but far from limited to emotional responses) are just... actively maladaptive, and as far as we're aware they were even in our earlier years, just in ways that weren't sufficiently intense to meaningfully influence evolutionary pressures. They'll cheerfully screw with logic and everything else 'cause evolution doesn't actually give a damn (to the extent a process gives a damn about anything) about anything like that. They're not tools, they're accidents that didn't kill enough of us for people to stop being born with them, ha.

In any case, they're 110% evolutionary baggage in a lot of situations. Our neurology piggybacks that shit on top of all sorts of things that are completely unrelated to how the responses likely developed originally, and often in ways that are incredibly (sometimes literally lethally, especially over longer periods given how persistent stress strips years from our lifespans) unhelpful 'cause it's a goddamn mess like that. See basically everything about our anxiety and stress responses outside of actually life-threatening situations, heh.
A lot of people don't seem to get that evolution of the human body is actually very, very, very unoptimized.

Quote
I am starting to get a strong feeling that AIs are the new dot.com. A useful technology that is overhyped and will bankrupt many people.
What I, Euchre, and KT have been saying since this whole thing started. The bubble will pop and blow over in due time; we'll benefit from what good there is in it while most of the excesses get... sidelined.
Logged
Woe to those who make unjust laws, to those who issue oppressive decrees, to deprive the poor of their rights and withhold justice from the oppressed of my people, making widows their prey and robbing the fatherless. What will you do on the day of reckoning, when disaster comes from afar?

McTraveller

  • Bay Watcher
  • This text isn't very personal.
    • View Profile

What? No, emotions don't require comprehension at all. Emotions are more akin to mental reflexes - they are shortcuts to promote certain responses, often specifically when there is a notable lack of comprehension.

That’s why emotion is often contrasted with logic.
Logged

Starver

  • Bay Watcher
    • View Profile

Oooh, I wrote a long thing about my thoughts on emotions (they've evolved for a long time, or we wouldn't see their analogues/relatedly-similar responses in our pets and wildlife, for example). And how they're both positive and negative utility to living life (surprise can get one thinking, taking you off auto-pilot... it can also make one freeze, doing nothing in lieu of your normally useful and possibly self-preserving autopilot). Intangible and ineffable, and can go wrong. Probably made a lot of civilisation happen, probably made various civilisations fail. Like the weird way that biology 'gets by' well enough to have become your inherited biology, but without the easily proddable physical evidence.

I don't see it as truly necessary or required in "fake personalities", so long as they're as good at faking them (or being made to fake them, with appropriate nudges) as they need to be, but a Sufficiently Evolvable system (something that approaches 86 billion neurons, suitably coordinated) could well get advantages from developing 'something'. With the mind-map-space to take advantage of it.


But not necessary. And as we have precious little understanding of how our own internalised 'drivers' actually do that driving, it's not one we can easily manually flesh out any better than a deliberately proximate illusion.

Good for philosophising, though. An interface-state between instinct and reason (too trainable to be considered mere reactionary autopilot, not so easy to deliberately develop to our whim in order to be fully self-improvable).


(...this is by way of the short version, written from scratch. Not half as long.)
Logged

dragdeler

  • Bay Watcher
    • View Profile

Wait, you two are allowed to post after each other? Sorry, I couldn't resist, but this feels rare.
Logged
let

MaxTheFox

  • Bay Watcher
  • Just one little path across the whole earth
    • View Profile

Quote from: McTraveller
What? No, emotions don't require comprehension at all. Emotions are more akin to mental reflexes - they are shortcuts to promote certain responses, often specifically when there is a notable lack of comprehension.

That's why emotion is often contrasted with logic.
By comprehension I mean understanding something as a situation to react to rather than literally just picking the next most likely token.
Logged
Woe to those who make unjust laws, to those who issue oppressive decrees, to deprive the poor of their rights and withhold justice from the oppressed of my people, making widows their prey and robbing the fatherless. What will you do on the day of reckoning, when disaster comes from afar?

King Zultan

  • Bay Watcher
    • View Profile

Quote
I am starting to get a strong feeling that AIs are the new dot.com. A useful technology that is overhyped and will bankrupt many people.
It'll be an exciting time when the bubble pops and it all comes crashing down.
Logged
The Lawyer opens a briefcase. It's full of lemons, the justice fruit only lawyers may touch.
Make sure not to step on any errant blood stains before we find our LIFE EXTINGUSHER.
but anyway, if you'll excuse me, I need to commit sebbaku.
Quote from: Leodanny
Can I have the sword when you’re done?

lemon10

  • Bay Watcher
  • Citrus Master
    • View Profile


Some slides from Nvidia's latest conference. AI compute is in fact increasing exponentially, and has been for the last decade or so despite the recent death of Moore's law.

The bottom line is the previous chip, the middle line is the gains if they simply doubled the chip in size, and the top line is the new chip (which was a much more complex doubling in size).
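(For scale, a back-of-the-envelope illustration of what exponential growth in compute means; the doubling period below is an assumption for the arithmetic, not a number taken from the slides.)
Code: [Select]
# Compounding illustration only; the doubling period is an assumed
# figure, not one read off the Nvidia slides.
doubling_period_years = 1.0
years = 10
growth = 2 ** (years / doubling_period_years)
print(f"roughly {growth:.0f}x over {years} years")  # ~1024x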
---
The "survivable traits" of LLMs right now, that is, the evolutionary pressure forming them, is their suitability to generate interesting enough results that the people using them start from that particular LLM before making the next one.

Even if LLMs (and their ilk) do not spontaneously propagate, they do have "generations" and their propagation is how they are used in the next round of training.

Just because the selection pressure here is "humans picked that codebase and data set" rather than "lived long enough in a physical-chemical environment to have offspring" there is still some interesting evolutionary pressure there.

In fact the stuff mentioned above - oddly enough some of the bizarre behavior, being "interesting" to humans, may even be a benefit to its propagation.

However, the output has to be "good enough" to get selected...

Fascinating stuff, even though we are basically living in our own experiment...
There is also yet another type of evolution here. As AI is used to write things, its text goes onto the internet and becomes part of the new corpus of training data for all future AIs. That means vast amounts of GPT text will be in every single AI going forward; just as AI is trained to respond to humans, they will all take in parts of GPT as well. The same is true (to a lesser extent) for other AI models in current use: future AI will all have little tiny shards of Gemini or Llama or Claude in them.
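(A toy illustration of that feedback loop; the model names and "corpus" are invented, and real corpus assembly is vastly messier.)
Code: [Select]
# Toy illustration: each model's output lands on the internet and joins
# the corpus the next model trains on, so traces of every model persist
# in its successors.
def corpus_feedback(rounds=4):
    corpus = ["human-written text"]
    for n in range(rounds):
        # The new model is "trained" on everything written so far...
        output = f"text from model_gen{n} (trained on {len(corpus)} docs)"
        # ...and its own output joins the corpus the next model sees.
        corpus.append(output)
    return corpus

for doc in corpus_feedback():
    print(doc)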
Quote from: Frumple
I'm... pretty sure this isn't just wrong, but staggeringly, incredibly wrong? Plenty of our neurological structures and reactions (including but far from limited to emotional responses) are just... actively maladaptive, and as far as we're aware they were even in our earlier years, just in ways that weren't sufficiently intense to meaningfully influence evolutionary pressures. They'll cheerfully screw with logic and everything else 'cause evolution doesn't actually give a damn (to the extent a process gives a damn about anything) about anything like that. They're not tools, they're accidents that didn't kill enough of us for people to stop being born with them, ha.

In any case, they're 110% evolutionary baggage in a lot of situations. Our neurology piggybacks that shit on top of all sorts of things that are completely unrelated to how the responses likely developed originally, and often in ways that are incredibly (sometimes literally lethally, especially over longer periods given how persistent stress strips years from our lifespans) unhelpful 'cause it's a goddamn mess like that. See basically everything about our anxiety and stress responses outside of actually life-threatening situations, heh.
Emotions are no more baggage than hunger is. Sure, they aren't properly optimized for the modern world and cause massive amounts of issues, but that doesn't mean they aren't a needed part of our biology, critical for human survival even today. Obviously there is tons of evolutionary baggage in emotions (the same as in all biological systems), but using that to imply that emotions are useless or vestigial is nonsense.

So no, going "Nah, it's just baggage" is the thing that's wildly and staggeringly wrong.
Quote from: MaxTheFox
By comprehension I mean understanding something as a situation to react to rather than literally just picking the next most likely token.
See, people keep saying "AI won't be able to do this", but they seem to be missing the fact that AI can already do it. AI already takes context into account and responds to situations just fine. It can already make long-term plans and recursively iterate on them till they are solved, etc.

There also seem to be some misunderstandings about the actual capabilities of transformers, notably "it just uses input to predict the next output" being used to assume they can't do a ton of stuff, including stuff they can already do, while also forgetting that humans operate the exact same way. All we do is use input (sensory data) to create the next most likely correct output (moving our bodies in a way that won't get us killed).
If you combine these moments of output, you can do things like talk, plan, and convey information in the same fundamental way that AI can with tokens (albeit we also do some real-time fine-tuning).
Sure, they can only react to a prompt (input), but the same is true of humans: we can only react based on the input we receive. If you stop giving a human input for an extended period of time, they will literally go mad and their brain will start to degrade.

I strongly suspect that even though nothing fundamental will change and AI will still be powered by transformers, this "they only predict the next token" stuff will disappear once humanoid robots start walking around talking to people, clearly able to do the same things even though the basic architecture remains the same.
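(To make "predict the next token" concrete, here is a minimal greedy decoding loop. The probability table is a fake stand-in; a real transformer computes next_token_probs from the whole context instead of looking it up.)
Code: [Select]
# Minimal sketch of "just predicting the next token". The probability
# table is a fake stand-in; a trained transformer would compute
# next_token_probs from the entire context instead.
FAKE_PROBS = {
    ("the",): {"cat": 0.6, "dog": 0.4},
    ("the", "cat"): {"sat": 0.9, "ran": 0.1},
    ("the", "cat", "sat"): {"<end>": 1.0},
}

def next_token_probs(context):
    return FAKE_PROBS.get(tuple(context), {"<end>": 1.0})

def generate(prompt, max_tokens=10):
    tokens = list(prompt)
    for _ in range(max_tokens):
        probs = next_token_probs(tokens)
        token = max(probs, key=probs.get)  # greedy: take the likeliest token
        if token == "<end>":
            break
        tokens.append(token)
    return tokens

print(generate(["the"]))  # -> ['the', 'cat', 'sat']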
Quote from: King Zultan
Quote
I am starting to get a strong feeling that AIs are the new dot.com. A useful technology that is overhyped and will bankrupt many people.
It'll be an exciting time when the bubble pops and it all comes crashing down.
Yeah, a ton of companies are going to go bankrupt chasing the AI dream, no doubt about it.
I can't imagine more than a handful of the companies pursuing the frontier will be able to continue once it starts to cost billions or tens of billions of dollars to train a new model.
« Last Edit: March 26, 2024, 02:00:24 pm by lemon10 »
Logged
And with a mighty leap, the evil Conservative flies through the window, escaping our heroes once again!
Because the solution to not being able to control your dakka is MOAR DAKKA.

That's it. We've finally crossed over and become the nation of Da Orky Boyz.

McTraveller

  • Bay Watcher
  • This text isn't very personal.
    • View Profile

It's sort of silly that people want to make humanoid robots. Yes, human bodies are very versatile, but they sacrifice being really good at any one thing for being reasonably good at a lot of things. If you build AI into humanoid bodies, you're really going to limit their physical capabilities.

Also, building AI like that can really only be seen as "hey look, we finally made slaves that we can feel good about abusing, because there is no question they aren't human." Sure, maybe they're sentient or whatever, but they aren't "alive" in the strict biological sense of the word, so we can just treat them like any other machine, and PROFIT!

That forgets, though, that PROFIT!! can only happen if the benefits of the slave AI labor are distributed to the masses; if the benefits are hoarded and the masses are simply left jobless, we'll have more social upheaval than climate change will ever cause.

I mean, I think what should happen, instead of the goofy legislation we have today protecting people from AI, is something like: "If you lay off a person and replace them with AI, then the person(s) laid off must be paid 30% of the revenue attributed to AI, in perpetuity. Revenue attributed to an AI is considered to be the total company revenue minus non-executive payroll." Or maybe you don't do it per-company, but for society as a whole: "Every company pays 30% of its AI-attributed revenue into the universal income fund, which is distributed equally to every citizen." Probably needs some work to get rid of loopholes, maybe have AI write it, eh?  ;D
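(Worked through with invented figures, the rule as stated comes out like this; a sketch of the arithmetic only, not a real policy model.)
Code: [Select]
# The proposed levy, as stated above: AI-attributed revenue is total
# revenue minus non-executive payroll, and 30% of that is paid out.
# The example figures are invented for illustration.
def ai_levy(total_revenue, non_exec_payroll, rate=0.30):
    ai_attributed = max(total_revenue - non_exec_payroll, 0)
    return rate * ai_attributed

# A company with $10M revenue and $4M non-executive payroll would
# owe 30% of $6M = $1.8M under this scheme.
print(ai_levy(10_000_000, 4_000_000))  # 1800000.0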
Logged

EuchreJack

  • Bay Watcher
  • Lord of Norderland - Lv 20 SKOOKUM ROC
    • View Profile

Quote from: McTraveller
Probably needs some work to get rid of loopholes, maybe have AI write it, eh?  ;D
Already happened

EuchreJack

  • Bay Watcher
  • Lord of Norderland - Lv 20 SKOOKUM ROC
    • View Profile

Humans are about to put AI out of business.
Quadriplegic installed with brain implant now able to work mouse on computer. Since it is easier and cheaper to train a human than an AI, expect humans to be used in the near future. Assuming the whole thing isn't hokum.

King Zultan

  • Bay Watcher
    • View Profile

I don't think AI will replace humans for several more decades given the cost of AI, especially since they're saying better AIs need even more money to make than the current ones.
Logged
The Lawyer opens a briefcase. It's full of lemons, the justice fruit only lawyers may touch.
Make sure not to step on any errant blood stains before we find our LIFE EXTINGUSHER.
but anyway, if you'll excuse me, I need to commit sebbaku.
Quote from: Leodanny
Can I have the sword when you’re done?

EuchreJack

  • Bay Watcher
  • Lord of Norderland - Lv 20 SKOOKUM ROC
    • View Profile

Quote from: King Zultan
I don't think AI will replace humans for several more decades given the cost of AI, especially since they're saying better AIs need even more money to make than the current ones.
Ok, I'm going to add you to the category of people that "get it".