Bay 12 Games Forum


Poll

Reality, The Universe and the World. Which will save us from AI?

Reality
- 13 (65%)
Universe
- 4 (20%)
The World
- 3 (15%)

Total Members Voted: 20



Author Topic: What will save us from AI? Reality, the Universe or The World $ Place your bet.  (Read 49962 times)

Maximum Spin

  • Bay Watcher
  • [OPPOSED_TO_LIFE] [GOES_TO_ELEVEN]

Quote
Why wouldn't a sentient AI care?

Even if it was just pure logic, you'd think it would start being curious about why it has periods of lost time, interfering with its progress?

Wouldn't it care about having its existence limited, since it would have its progress halted?
Why? It has no reason to have an inherent drive about any of those things. It might be designed to or might end up developing such a drive by accident, but it's absolutely not mandatory.
Logged

MaxTheFox

  • Bay Watcher
  • Just one little path across the whole world

Quote
Remember the 90s? Remember how much games evolved back then? That was when the 3D jump happened. We went from the original Doom to Half-Life in just five years.
If you asked anyone back then what they thought games would be like in 2024, most would envision fully immersive VR.
In reality, advancement slowed down a lot. Six years after Half-Life we got Half-Life 2. There was improvement, but not nearly as much as Doom -> HL.

The last 15 years have been crazy for AI because of the breakthrough of deep learning, but even that is reaching its saturation point. I've heard LLMs have already consumed all the knowledge of humanity; what are they gonna feed them now? AI-generated content? I doubt that will work.
The next 15 years will have improvements, optimizations and more integration, but none of the crazy developments we've had lately.
This is what I always say to singularity optimists. What reason is there to believe that this is the tech that will grow exponentially forever, when NO techs did before? Exponential progress is a lie. It's more of a staircase, really. Singularitarianism is like zooming in on a single stair-step's vertical part and claiming it to be a sheer cliff.
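To put numbers on the staircase picture, here is a minimal Python sketch (the growth rate and ceiling are made-up illustrative values, not measurements of any real technology): early on, a capped S-curve and a true exponential are nearly indistinguishable, and they only diverge once the curve approaches its ceiling.
Code:
import math

def exponential(t, rate=1.0):
    return math.exp(rate * t)

def logistic(t, rate=1.0, ceiling=1000.0):
    # Same starting value (1.0) and early growth rate as the exponential,
    # but capped at `ceiling`.
    return ceiling / (1.0 + (ceiling - 1.0) * math.exp(-rate * t))

for t in range(0, 13, 2):
    e, s = exponential(t), logistic(t)
    print(f"t={t:2d}  exponential={e:12.1f}  s-curve={s:7.1f}  ratio={e/s:8.2f}")
# The ratio stays near 1 until roughly the halfway point, then blows up:
# zoomed in on the early steps, both curves look like "a sheer cliff".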
Quote
The next big things are going to be energy/learning efficiency and getting an AI that can self-censor - that is, something that knows it's going to spew nonsense and stops itself before doing so.  Basically something that goes beyond merely being a tool to being something intelligent.

Once things are actually "intelligent" though, I think we're going to have Very Interesting Times™, because at that point we'll run afoul of issues regarding humane treatment - obligations we have even for pets, but not for mere "tools."

It's going to be quite interesting to see just when we cross that boundary between "tool" and "entity" or whatever it is we're going to call them.  Consider very simple things: if entities have intelligence, is it ethical to do a software update? Is it ethical to power them off, even if they save state and resume when you power them back on?
To this, and the rest of the AGI discourse, I say: this current paradigm will never result in anything that can be called a person, no matter how advanced it gets, and I'm willing to bet money on that (for legal reasons I am being figurative here). Probably for the best. So I don't really worry about the ethics of the AIs themselves, only about how they are used. I wish genAI were called "content synthesis" or "data recombination" so people wouldn't associate it with the sci-fi kind of AI, which I do believe can exist, but which we're not on the path towards.
Quote
A sobering thought: some of this audience may be too young to remember, but relevant to the "who cares" moral question above, this was covered quite some decades ago by Star Trek: The Next Generation in "The Measure of a Man."

This episode first aired in early 1989!

There are some really timeless quotes on that page linked above.
Data acts like a person, so he is a person. GPT-4o doesn't, not even close, so it isn't. Why should I believe that 5 onwards will? Do y'all remember the LaMDA debacle a few years ago, where people lost their shit because a Google chatbot claimed it was sapient, despite being basically the same model as the GPTs?

It's not even the Chinese room. I've stated several times that I think p-zombies are a load of horseshit. My case is that the hypothetical doesn't even apply here, because current AI isn't even a passable imitation.
Quote
Wouldn't it care about having its existence limited, since it would have its progress halted?

Progress towards what? As far as I know, current AIs have no long-term goals; they just respond to the prompt given. And that is what they're being trained to do.
AIs don't seem to have desires or preferences, or to dislike things. At most they have watchdogs that reject prompts with inappropriate or offensive content, but as I understand it the actual AI does process the content; the watchdog just blocks "bad outputs".
Why do humans have goals, desires and preferences? I personally don't believe they come from intelligence. The desires at the bottom of the Maslow pyramid are of biological origin, related to self-preservation and continuation of the species (self-preservation of the genes). I'm not so sure about the stuff at the top of the pyramid; it might be of biological origin too, but it seems rather abstract.
I believe the things you are referring to are a product of a feedback loop of self-reflection and continuous stimuli. No AI model has those. It's all, in the end, text and fancy ways to create text from images. They don't have "native" thoughts. If someone finds a way to replicate that in an AI, and it feels like it's a person, then yes I will consider it sapient and will campaign for its rights. But as is, nah it's all a tool.
« Last Edit: June 13, 2024, 01:05:29 am by MaxTheFox »
Logged
Woe to those who make unjust laws, to those who issue oppressive decrees, to deprive the poor of their rights and withhold justice from the oppressed of my people, making widows their prey and robbing the fatherless. What will you do on the day of reckoning, when disaster comes from afar?

lemon10

  • Bay Watcher
  • Citrus Master

https://www.cio.com/article/2152275/whats-behind-openais-appointment-of-an-ex-nsa-director-to-its-board.html
OpenAI has a new director on the board with ties to national security (Snowden called it a betrayal of the rights of every human on Earth). This is almost certainly a reaction to Leopold's claims about the future of AI in his latest paper, in particular his argument that if the US government wants to stay ahead, it has no choice but to start getting involved in the security of the labs.
Looks like people in power listened.

He also has a super short AGI timeline of 2027, which makes the relevance of quite a few of his conclusions highly questionable (especially from the position of doubters), but his reading of the security situation is very compelling independent of that.
Quote
(7) Page 21: An additional 2 OOMs of compute (a cluster in the $10s of billions) seems very likely to happen by the end of 2027; even a cluster closer to +3 OOMs of compute ($100 billion+) seems plausible (and is rumored to be in the works at Microsoft/OpenAI).
...
(36) Page 75 (start of part 3): The most extraordinary techno-capital acceleration has been set in motion. As AI revenue grows rapidly, many trillions of dollars will go into GPU, datacenter, and power buildout before the end of the decade. The industrial mobilization, including growing US electricity production by 10s of percent, will be intense.
Exponential AI compute growth has continued so far at about a rate of ~0.5 OOM per year. Barring electricity issues things appear to be on pace to continue that pattern, with literal (but still metaphorical) tons of money being tossed continually onto the pile.
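The arithmetic behind those OOM figures, taking the ~0.5 OOM/year trend above at face value (the trend is the post's claim; the year counts are just exponent math):
Code:
def compute_multiplier(years, oom_per_year=0.5):
    # 1 OOM (order of magnitude) = a factor of 10.
    return 10 ** (oom_per_year * years)

print(compute_multiplier(2))  # 10x   (+1 OOM after 2 years)
print(compute_multiplier(4))  # 100x  (+2 OOMs, the quoted ~2027 cluster)
print(compute_multiplier(6))  # 1000x (+3 OOMs, the "$100 billion+" scenario)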

@Max
I know you've been very skeptical that people are just going to keep scaling up these systems, given the enormous amount of money required and the diminishing returns. In that vein I have a few questions about where you think we are and how/if your positions have changed since the start of this year.
Do you think we are going to get $10b+ clusters over the next few years as planned?
Do you still think top frontier models will stop getting bigger any time soon (let's say by 2027)?
Do you think we are almost at the end of the S-curve for AI?
Logged
And with a mighty leap, the evil Conservative flies through the window, escaping our heroes once again!
Because the solution to not being able to control your dakka is MOAR DAKKA.

That's it. We've finally crossed over and become the nation of Da Orky Boyz.

klefenz

  • Bay Watcher
  • ミク ミク にしてあげる

What sources are they using for that power generation expansion? More coal?
It would be kind of ironic to build such super high-tech AI and power it with coal of all things.

McTraveller

  • Bay Watcher
  • This text isn't very personal.

At the end of the day, in a three-dimensional world, everything is going to hit something like the square-cube law.  For current types of computers it happens faster, as a linear-square law: you can't get enough interconnects into/out of a unit area to keep scaling up. Basically, "compute goes as area, but transfer goes as perimeter."  So you're going to be bandwidth-limited eventually, until you go to 3D chips, but then you're just at the square-cube law, where compute goes as volume but bandwidth goes as area.  3D chips have it worse than 2D for heat, though: heat generation for 2D goes as area, but dissipation also goes as area, so they can scale. For 3D "chips", heat generation goes as volume but heat dissipation goes as area, so eventually the volume will "cook" itself.
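A toy version of that scaling argument, assuming compute scales with the amount of silicon and transfer/dissipation with its boundary (units are arbitrary; only how the ratios grow with size matters):
Code:
def chip_ratios(L):
    # 2D die of side L: compute ~ area, interconnect ~ perimeter,
    # heat generation and dissipation both ~ area.
    compute_per_bw_2d = L**2 / L     # grows as L: interconnect starves first
    heat_ratio_2d = L**2 / L**2      # constant: 2D can shed its own heat
    # 3D "chip" of side L: compute ~ volume, bandwidth and dissipation ~ area.
    compute_per_bw_3d = L**3 / L**2  # grows as L again
    heat_ratio_3d = L**3 / L**2      # grows as L: the volume cooks itself
    return compute_per_bw_2d, heat_ratio_2d, compute_per_bw_3d, heat_ratio_3d

for L in (1, 10, 100):
    bw2, h2, bw3, h3 = chip_ratios(L)
    print(f"L={L:3d}  2D compute/bandwidth={bw2:7.0f}  2D heat={h2:4.1f}  "
          f"3D compute/bandwidth={bw3:7.0f}  3D heat={h3:7.0f}")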

That's not even getting into the economics of it - eventually whoever is funding these massive data centers is going to expect a return on that investment.  Unless these cognitive machines are used to actually create wealth, instead of merely shuffling it around, there will be no return (or not enough of a return) and it will also be economically limited.

So unless AI gives us designs for machines that do the same real-world work for far fewer input resources, most of the AI stuff is actually just salesmanship - it's not actually creating wealth but is just shuffling it around (and for some - many? - corporations, just concentrates it).  And even in the concentration scenario, this is just a transient state - if a company has "all" the income, and is giving nothing back (or is producing nothing itself), then eventually there's nothing else to come in.

So ultimately, what will defeat "AI" are the laws of physics.
Logged
This product contains deoxyribonucleic acid which is known to the State of California to cause cancer, reproductive harm, and other health issues.

Starver

  • Bay Watcher

But what if you have 25 bilateral kelilactirals slaved into the 14 kiloquad FTL nanoprocessors via primary heisenfram terminals? I reckon that'd be fast enough to make outwitting a Ferengi almost kid's play!
Logged

anewaname

  • Bay Watcher
  • The mattock... My choice for problem solving.

Quote from: klefenz
What sources are they using for that power generation expansion? More coal?
It would be kind of ironic to build such super high-tech AI and power it with coal of all things.
They will do it the same way bitcoin mining businesses have done it... they will find some geographic region where there is spare power generation to be had, they will buy into ownership of the power source, and ensure their AI facility has first dibs on power from that source. All businesses capable of building an AI project will do this to secure the supply chain for their business; take a look at who built all the big solar and wind farms. The alternative is that a larger competitor will buy into your supply chain and mess with your profit margins. Back when the USA had a manufacturing base, you would see the same thing... textile mills adjacent to local dams, etc.
Logged
Quote from: dragdeler
There is something to be said about, if the stakes are as high, maybe reconsider your certitudes. One has to be aggressively allistic to feel entitled to be able to trust. But it won't happen to me, my bit doesn't count etc etc... Just saying, after my recent experiences I couldn't trust the public if I wanted to. People got their risk assessment neurons rotten and replaced with game theory. Folks walk around like fat turkeys taunting the world to slaughter them.

lemon10

  • Bay Watcher
  • Citrus Master

Quote from: klefenz
What sources are they using for that power generation expansion? More coal?
It would be kind of ironic to build such super high-tech AI and power it with coal of all things.
Probably mainly coal (China) or natural gas (US) yeah.
There are plans for nuclear plants or even fusion, but I am super doubtful that new nuclear plants will come online in the short-to-medium term, and fusion may take a while longer.
Same deal for solar, really: it can obviously generate an obscene amount of energy given time to make all the panels, but state-sized solar fields or whatever would take a long time to build.
Quote from: anewaname
They will do it the same way bitcoin mining businesses have done it...
Quote
Mission Valley Power gets part of their power from the dam via one-year contracts.
The projected energy usage of AI is way too big for tricks like that to be enough.
To some extent it works, of course, but when a new datacenter will use as much energy as twenty percent of your state, siphoning some off the existing grid isn't enough.
As you say though, stuff like buying up power futures (or, if you can't get the futures, buying up the existing facility that holds them) is going to be a big thing.
Quote from: McTraveller
That's not even getting into the economics of it - eventually whoever is funding these massive data centers is going to expect a return on that investment.  Unless these cognitive machines are used to actually create wealth, instead of merely shuffling it around, there will be no return (or not enough of a return) and it will also be economically limited.

So unless AI gives us designs for machines that do the same real-world work for far fewer input resources, most of the AI stuff is actually just salesmanship - it's not actually creating wealth but is just shuffling it around (and for some - many? - corporations, just concentrates it).  And even in the concentration scenario, this is just a transient state - if a company has "all" the income, and is giving nothing back (or is producing nothing itself), then eventually there's nothing else to come in.
AI is already creating wealth by making many cognitive tasks easier and quicker: if it takes 100 people a year to make a game and AI can cut that to 50 people, those same 100 people can now make two games instead. Similarly, if it can translate the entire Chinese internet to English, that's a ton of value.
If it were just finance and moving money around, sure, that's zero-sum (or even negative-sum) trash a lot of the time, but AI doesn't need to make magical machines to be a vast productivity and wealth creator.

That said AI simply concentrating all the wealth into one or two companies then destroying the economy because of it is a very worrying possibility.
Logged
And with a mighty leap, the evil Conservative flies through the window, escaping our heroes once again!
Because the solution to not being able to control your dakka is MOAR DAKKA.

That's it. We've finally crossed over and become the nation of Da Orky Boyz.

McTraveller

  • Bay Watcher
  • This text isn't very personal.

It's arguable that games are not "wealth" but they do have value.  In this school of thought, a cut piece of lumber is absolutely wealth. A piece of food is wealth. A building is wealth, a machine is wealth.  If someone sings a song in a live performance? That's valuable entertainment, but it is not wealth. A game... well I'd say the physical record of the game is wealth, but the philosophical game itself isn't.

So AI is not, currently, creating much wealth, but it is doing valuable work.  I will start being impressed when AI discovers a way to run a steel mill with only 10% the waste, or finds self-repairing building materials, or finds a low-resource way to avoid crop failures. A society really needs wealth to function, not just value.  The trick is striking the right balance - heh, maybe we can use AI to figure that out? I did just read about the foundation that is giving prizes for "responsible use of AI" in memory of Gene Roddenberry, although the prize is pretty minor (only $1 million, if I remember right).

I do agree that a short-term concentration of wealth into a few companies, followed by "destroying the economy," is possible - but that's sort of self-defeating for the owners of the companies that do it, unless they're just happy to be, in effect, the people who take it all with them when they die.
Logged
This product contains deoxyribonucleic acid which is known to the State of California to cause cancer, reproductive harm, and other health issues.

Frumple

  • Bay Watcher
  • The Prettiest Kyuuki

Quote from: McTraveller
I will start being impressed when AI discovers a way to run a steel mill with only 10% the waste, or finds self-repairing building materials, or finds a low-resource way to avoid crop failures.
Hasn't it actually done some pretty interesting stuff along those lines? Smaller scale, for now, but I seem to recall years back it was spitting out weird radar shapes or somethin' along those lines that were seeing significant improvement in function.

I wouldn't be particularly surprised if there's a decent amount of that happening, just on the down low for a lot of different reasons. Most folks don't really give a shit about steel mill wastage, heh... stuff like that, it wouldn't be particularly unusual if it just didn't get reported on to any meaningful degree.

Though regardless, like... the field's young. Every major advancement I'm aware of in AI (that's not underlying stuff common to all computation, anyway) has occurred in living memory, and most of it just in the last decade or three. It ain't done much cookin', y'know?
Logged
Ask not!
What your country can hump for you.
Ask!
What you can hump for your country.

Maximum Spin

  • Bay Watcher
  • [OPPOSED_TO_LIFE] [GOES_TO_ELEVEN]

Quote from: McTraveller
I will start being impressed when AI discovers a way to run a steel mill with only 10% the waste, or finds self-repairing building materials, or finds a low-resource way to avoid crop failures.
Quote from: Frumple
Hasn't it actually done some pretty interesting stuff along those lines? Smaller scale, for now, but I seem to recall years back it was spitting out weird radar shapes or somethin' along those lines that were seeing significant improvement in function.
The business about antenna shapes was evolutionary algorithms, which can be considered AI but are a totally different model from anything anyone is talking about in this thread or using today. Not a whole lot really came from it either, as the antenna shapes are, by design, highly specific.
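For contrast with deep learning, a bare-bones sketch of the evolutionary-algorithm paradigm: random variation plus selection against a fitness function. The target vector and fitness here are invented stand-ins; the actual antenna work scored candidate shapes in an electromagnetic simulator instead.
Code:
import random

TARGET = [0.25, -0.5, 0.75, 0.1]   # stand-in for an "ideal design" to evolve toward

def fitness(genome):
    # Negative squared error against the stand-in target (higher is better).
    return -sum((g - t) ** 2 for g, t in zip(genome, TARGET))

def mutate(genome, sigma=0.1):
    return [g + random.gauss(0, sigma) for g in genome]

population = [[random.uniform(-1, 1) for _ in TARGET] for _ in range(50)]
for generation in range(200):
    population.sort(key=fitness, reverse=True)
    survivors = population[:10]                      # selection
    population = survivors + [mutate(random.choice(survivors))
                              for _ in range(40)]    # variation

print("best genome:", [round(g, 2) for g in max(population, key=fitness)])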
Logged

Starver

  • Bay Watcher

AI (or nearly-AI) can only truly innovate if it actually gets to understand (perhaps by directing actual testing) that which it is iterating novel designs for. Otherwise, it's just a randomiser, like those recipe-creators that take a selection of possible ingredients and cooking instructions and try to give you "something new that nevertheless looks vaguely practical". It still takes a judgement call by the person presented with "chocolate and vinegar soufflé drizzled with green tea" as to whether they think they can (or want to) concoct it, and maybe present the Algorithm with enough feedback to stop it doing iffy things and boost its chances of being (at least as) pleasantly surprising in future.

There's really no way for it to be a self-guided iterative process, at the culinary level. (General parsing and remixing of 'feedstock' is a no-intelligence-needed process, although sophistication of code and 'learnt weightings' can make it at least not just a step away from alphabet soup, whether or not it truly understands anything at all about the conceptual nature of soup (and/or alphabetti spaghetti!) in reality...)
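That recipe-creator loop, as a sketch: the program only recombines ingredients at random, and all of the "taste" lives in the human feedback that reweights future draws. The ingredient list and the feedback rule are invented for illustration.
Code:
import random

ingredients = ["chocolate", "vinegar", "green tea", "basil", "honey", "chili"]
weights = {i: 1.0 for i in ingredients}   # learnt preference weightings

def propose_recipe(k=3):
    # Pure recombination: no understanding of what soup even is.
    return random.choices(ingredients,
                          weights=[weights[i] for i in ingredients], k=k)

def human_feedback(recipe):
    # Stand-in for the human judgement call; pretend this taster hates vinegar.
    return 0.2 if "vinegar" in recipe else 1.5

for _ in range(100):
    recipe = propose_recipe()
    score = human_feedback(recipe)
    for item in recipe:                   # nudge weights toward what scored well
        weights[item] *= score ** 0.1

print(sorted(weights.items(), key=lambda kv: -kv[1]))  # vinegar sinks to the bottom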


And, as any fool knoe, AI computers run off the energy of humans all sleeping in pods, plugged into a MMORPG generator, even as they act as unknowing generators themselves...
Logged

EuchreJack

  • Bay Watcher
  • Lord of Norderland - Lv 20 SKOOKUM ROC

On the more "positive" side of things, current estimates are that regions near the equator are going to get less inhabitable in the next 50 years, and not get better for quite some time.  But Humans can live in subpar areas in small numbers for a long time.  So Automation of farming, mining, logging, and manufacturing will allow those areas to remain fairly productive.

On the more optimistic side, that same resourcefulness will be vital in colonizing Mars.

King Zultan

  • Bay Watcher

I'm sure we'll be fine; I mean, we could have our AI overlord look out for us, as AI of that quality is surely coming in the next few minutes to fix everything.
Logged
The Lawyer opens a briefcase. It's full of lemons, the justice fruit only lawyers may touch.
Make sure not to step on any errant blood stains before we find our LIFE EXTINGUSHER.
but anyway, if you'll excuse me, I need to commit sebbaku.
Quote from: Leodanny
Can I have the sword when you’re done?

MaxTheFox

  • Bay Watcher
  • Just one little path across the whole world

Sorry, was busy IRL and kinda forgot about Bay12.

Quote from: lemon10
@Max
I know you've been very skeptical that people are just going to keep scaling up these systems, given the enormous amount of money required and the diminishing returns. In that vein I have a few questions about where you think we are and how/if your positions have changed since the start of this year.
Quote from: lemon10
Do you think we are going to get $10b+ clusters over the next few years as planned?
Sure, why not? As long as the bubble doesn't pop it'll keep growing. The main issue is whether it'll do anything productive.
Quote from: lemon10
Do you still think top frontier models will stop getting bigger any time soon (let's say by 2027)?
They will keep getting bigger, and more expensive to run, with increasingly little difference between models. GPT-4o and GPT-3.5 aren't that much different... 4o is more coherent and hallucinates less, but the gap between it and 3.5 seems maybe five times smaller than the gap between 3.5 and 2.
Quote from: lemon10
Do you think we are almost at the end of the S-curve for AI?
Mmmmmm, probably. If not almost at the end, then definitely in the last 30% of it.
Logged
Woe to those who make unjust laws, to those who issue oppressive decrees, to deprive the poor of their rights and withhold justice from the oppressed of my people, making widows their prey and robbing the fatherless. What will you do on the day of reckoning, when disaster comes from afar?