Bay 12 Games Forum


Poll

Reality, The Universe and the World. Which will save us from AI?

Reality
- 13 (65%)
Universe
- 4 (20%)
The World
- 3 (15%)

Total Members Voted: 20



Author Topic: What will save us from AI? Reality, the Universe or The World? Place your bet.  (Read 49626 times)

MaxTheFox

  • Bay Watcher
  • Лишь одна дорожка да на всей земле
    • View Profile

I don't think AI will replace humans for several more decades given the cost of the AI, especially since they're saying better AI needs even more money to make than the current ones.
Ok, I'm going to add you to the category of people that "get it".
Am I in that category? 🥺
Logged
Woe to those who make unjust laws, to those who issue oppressive decrees, to deprive the poor of their rights and withhold justice from the oppressed of my people, making widows their prey and robbing the fatherless. What will you do on the day of reckoning, when disaster comes from afar?

Strongpoint

  • Bay Watcher
    • View Profile

I don't think AI will replace humans for several more decades given the cost of the AI, especially since they're saying better AI needs even more money to make than the current ones.

I don't think that AI will replace humans period.

The simplest example is chess. Hardcoded chess engines have been far better than humans since the late 1990s. Neural network chess engines came along about five years ago and kicked the ass of hardcoded chess engines. Modern engines are a combination of the two, and their level of play is ungodly; they make moves beyond human comprehension that somehow work.

And yet chess is alive both as a hobby and as a professional sport.

This is why I chuckle when I hear that AI will replace humans in fields like graphic design or movie script writing, where a concept like "better" is far vaguer than it is in chess.
Logged
No boom today. Boom tomorrow. There's always a boom tomorrow. Boom!!! Sooner or later.

EuchreJack

  • Bay Watcher
  • Lord of Norderland - Lv 20 SKOOKUM ROC
    • View Profile

I don't think AI will replace humans for several more decades given the cost of the AI, especially since they're saying better AI needs even more money to make than the current ones.
Ok, I'm going to add you to the category of people that "get it".
Am I in that category? 🥺
I am starting to get a strong feeling that AIs are the new dot-com. A useful technology that is overhyped and will bankrupt many people.
What I, Euchre, and KT were saying since this whole thing started. The bubble will pop and blow over in due time; we'll benefit from what good there is in it while most of the excesses get... sidelined.
I officially adopt the opinion of my compatriot(s) in the Human Resistance.

lemon10

  • Bay Watcher
  • Citrus Master
    • View Profile

So it's time for yet another roundup of AI news; as expected, AI is still developing at a breakneck pace.

Three weeks ago Elon Musk promised that his new Grok 1.5 AI would be released the next week; as with almost every Musk timeline promise, it turned out to be nonsense, as it still isn't released.

There are now numerous companies that have matched or nearly matched GPT 4 at release. Catching up to where OpenAI was two years ago is impressive, but it's not like OpenAI is standing still: new versions of GPT 4 are being released that are notably and measurably better.

Speaking of OpenAI…
GPT 5 has finished training, and could now be released. However, it's almost certain that its release will be delayed at least a few months for security purposes given the release delay on every single one of their other projects. I suspect it will be released at some point after the US election is finished.
Apparently it's substantially and meaningfully better than 4 at everything, as well as being significantly larger. Only rumors though, since it's still under wraps.
Quote
Unlock the power of accurate predictions and confidently navigate
uncertainty. Reduce uncertainty and resource limitations. With
TimeGPT, you can effortlessly access state-of-the-art models to make
data-driven decisions. Whether you’re a bank forecasting market trends
or a startup predicting product demand, TimeGPT democratizes access to
cutting-edge predictive insights.
Nixtla's TimeGPT is also out (despite the name, it isn't an OpenAI product); it is designed for time series analysis and forecasting. Not that useful to a regular person, but it sounds like it could be a very big deal for businesses, since it's pitched as flat-out better than existing forecasting services.
Sora will be released at some point this year as well.
In addition, OpenAI has developed an AI that can clone your voice after listening to it for just 15 seconds. Like a lot of AI tech, this is really scary stuff. Even if OpenAI keeps a lid on it, someone else will soon develop and release equivalent tech to the public; scammers and people creating deepfakes will absolutely love it.

Quote
DarkGemini is a powerful new GenAI chatbot, now being sold on the dark web for a $45 monthly subscription.

It can generate a reverse shell, build malware, or even locate people based on an image. A “next generation” bot, built specifically to make GenAI more accessible to the attacker next door.
A few pages back I was talking about the end of the open internet; criminal AI was brought up, and it was questioned why it didn't exist. Well, it exists now: on the darknet you can find DarkGemini, which will assist you with criminal activities.

Quote
Prompt: a song about boatmurdered.
https://www.udio.com/songs/gnqdHVMZjX89866jQjTQ7P
A new AI music generation service called Udio is now out, and it makes pretty decent music. Not amazing, but as I keep saying, it's still early days.
Musicians are now officially in trouble. Not as much as writers or even artists, since people care about who wrote the songs they listen to in a way they don't care about who wrote what they read or who made the art they see, but things aren't looking good for them either.
Like the ability to create functionally free art on demand, this will be a big tool in creators' toolboxes.

There are various regulations on AI in the works, but aside from the anti-deepfake stuff I'm very doubtful about what will actually get through; money talks, after all, and the US Congress has huge amounts of trouble acting against anyone with any real amount of money.


Claude 3 (the best AI out there right now, aside from possibly the newest fork of GPT 4) is now about as persuasive as a human. When it's acting deceptively it is more persuasive than your average person.
Quote from: Different study
Durably reduces belief in conspiracy theories by about 20% via debate, also reducing belief in other, unrelated conspiracy theories.
On some topics (such as convincing people that conspiracy theories are wrong) it's vastly better than your average person, presumably because it knows all the conspiracy theory talking points that regular people don't and can counter them point by point.

Of course AI is just going to get better at persuasion, and there is no reason at all to think that it won't get far better than your average human at it.

Some interesting stuff summarized from an interview with some AI engineers working for Google and Anthropic (Claude).
https://www.youtube.com/watch?v=UTuuTTnjxMQ
Quote
(8:45) Performance on complex tasks follows log scores. It gets it right one time in a thousand, then one in a hundred, then one in ten. So there is a clear window where the thing is in practice useless, but you know it soon won't be. And we are in that window on many tasks. This goes double if you have complex multi-step tasks. If you have a three-step task and are getting each step right one time in a thousand, the full task is one in a billion, but you are not so far from being able to do the task in practice.
Quote
(9:15) The model being presented here is predicting scary capabilities jumps in the future. LLMs can actually (unreliably) do all the subtasks, including identifying what the subtasks are, for a wide variety of complex tasks, but they fall over on subtasks too often and we do not know how to get the models to correct for that. But that is not so far from the whole thing coming together, and that would include finding scaffolding that lets the model identify failed steps and redo them until they work, if which tasks fail is sufficiently non-deterministic from the core difficulties.
The interview talks about this quite a bit: how reliability (especially on multi-step tasks) is a huge bottleneck for actually using these. But once a model can do something even infrequently, being able to do the same thing reliably is just around the corner.
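To make the arithmetic in that quote concrete, here's a minimal Python sketch. The step counts and per-step success rates are illustrative, not numbers from the interview, and the retry math assumes failed steps can actually be detected and redone:
Code: [Select]
# Per-step reliability compounds: an n-step task where every step must
# succeed independently with probability p succeeds with probability p**n.
def task_success_rate(p_step, n_steps):
    return p_step ** n_steps

for p in (0.001, 0.01, 0.1, 0.9, 0.99):
    print(f"per-step {p}: 3-step task succeeds {task_success_rate(p, 3):.2e} of the time")

# The "scaffolding that redoes failed steps" idea: if failures are detectable
# and attempts are independent, the expected number of attempts per step is
# 1/p, so the task becomes practical long before p gets anywhere near 1.
def expected_total_attempts(p_step, n_steps):
    return n_steps / p_step

print(f"expected attempts for a 3-step task at p=0.1: {expected_total_attempts(0.1, 3):.0f}")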
Quote
(51:00) “I think the Gemini program would probably be maybe five times faster with 10 times more compute or something like that. I think more compute would just directly convert into progress.”
The two current bottlenecks are highly skilled engineers who have the right "taste" or intuition for how to design experiments, and compute. More compute is still the biggest bottleneck.
Quote
(1:01:30) If we don't get AGI by GPT-7-levels-of-OOMs (this assumes each level requires 100x the compute) are we stuck? Sholto basically buys this, that orders of magnitude have, at core, diminishing returns; although they unlock reliability, reasoning progress is sublinear in OOMs. Dwarkesh notes this is highly bearish, which seems right.
Quote
(1:03:15) Sholto points out that even with smaller progress, another 3.5→4 jump in GPT-levels is still pretty huge. We should expect smart plus a lot of reliability. This is not to undersell what is coming, rather the jumps so far are huge, and even smaller jumps from here unlock lots of value. I agree.
Yeah, sounds reasonable enough: eventually things will become too costly to continue scaling, and if we don't reach AGI before then, progress will slow down dramatically. But we are currently nowhere near the end of the S-curve.
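For what that 100x-per-level assumption implies in raw numbers, here's a back-of-the-envelope sketch; the multiplier is the quote's assumption, and the GPT-4 baseline is just an arbitrary unit, not a published figure:
Code: [Select]
# The quote assumes each GPT "level" needs ~100x (two orders of magnitude)
# more training compute than the previous one, with GPT-4 as the baseline.
COMPUTE_PER_LEVEL = 100   # assumption from the quote, not a published figure

for level in range(4, 8):
    multiplier = COMPUTE_PER_LEVEL ** (level - 4)
    print(f"GPT-{level}: {multiplier:,}x GPT-4's training compute")

# GPT-7 under this assumption needs 1,000,000x GPT-4's compute, which is why
# "no AGI by GPT-7-level OOMs" reads as "scaling alone probably won't get there".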
Quote
(1:32:30) Getting better at code makes the model a better thinker. Code is reasoning, you can see how it would transfer. I certainly see this happening in humans.
(They *also* say that making it better at coding improves its more mundane language skills too).
It has a few things in this vein where the researchers point out how cross-learning has interesting side effects; for instance, apparently fine-tuning a model to make it better at math makes it better at entity recognition at the same time.

https://dreams-of-an-electric-mind.webflow.io/
Claudes talking to each other. This sure looks like creativity to me.

I don't think AI will replace humans for several more decades given the cost of the AI, especially since they're saying better AI needs even more money to make than the current ones.

I don't think that AI will replace humans period.

The simplest example is chess. Hardcoded chess engines have been far better than humans since the late 1990s. Neural network chess engines came along about five years ago and kicked the ass of hardcoded chess engines. Modern engines are a combination of the two, and their level of play is ungodly; they make moves beyond human comprehension that somehow work.

And yet chess is alive both as a hobby and as a professional sport.

This is why I chuckle when I hear that AI will replace humans in fields like graphic design or movie script writing, where a concept like "better" is far vaguer than it is in chess.
Do you think a hobby/sport where 99% of people make no money off it operates remotely the same as profit-driven businesses where everyone involved expects a paycheck?

Because I can tell you with 100% certainty, if AI can deliver an equivalent product* at significantly lower cost,** companies will drop screenwriters like hot potatoes.

*Obviously if they can’t then things are different, but going “well, if AI sucks then it won’t replace everyone” is obvious.
**And of course it will, because the "AI is expensive" crowd is forgetting that people are really expensive. On average a screenplay sells for $110k. Even if you increase the price of AI generation by literally ten thousand times it will still be cheaper.
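As a rough sanity check of that footnote, here's a quick sketch; the per-script token count and per-token price are my own guesses rather than quoted figures:
Code: [Select]
# Compare a human screenplay's sale price against a heavily marked-up AI draft.
SCREENPLAY_PRICE = 110_000    # average sale price cited above, USD
TOKENS_PER_SCRIPT = 60_000    # guess: rough length of a feature screenplay, in tokens
PRICE_PER_1K_TOKENS = 0.03    # guess: roughly GPT-4-class output pricing, USD

ai_cost = TOKENS_PER_SCRIPT / 1000 * PRICE_PER_1K_TOKENS
print(f"one AI draft:       ~${ai_cost:.2f}")
print(f"marked up 10,000x:  ~${ai_cost * 10_000:,.0f}")   # still well under $110k
print(f"human screenplay:    ${SCREENPLAY_PRICE:,}")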

Parts of the movie industry that people care about as individuals (movie stars) will have protection, but nobody actually cares who or what wrote the movie they are watching as long as it's good.
« Last Edit: April 15, 2024, 12:18:41 am by lemon10 »
Logged
And with a mighty leap, the evil Conservative flies through the window, escaping our heroes once again!
Because the solution to not being able to control your dakka is MOAR DAKKA.

That's it. We've finally crossed over and become the nation of Da Orky Boyz.

EuchreJack

  • Bay Watcher
  • Lord of Norderland - Lv 20 SKOOKUM ROC
    • View Profile

Ultimately, if people have no jobs, then people have no money.
And if people have no money, then AI has no jobs.

Strongpoint

  • Bay Watcher
    • View Profile

Quote
Do you think a hobby/sport where 99% of people make no money off it operates remotely the same as profit-driven businesses where everyone involved expects a paycheck?

Because I can tell you with 100% certainty, if AI can deliver an equivalent product* at significantly lower cost,** companies will drop screenwriters like hot potatoes.

Yes, people using AI (not AI itself) will be more productive at certain tasks, requiring fewer man-hours per task. That is what new technologies do. By this metric, every new technology has replaced humans.

Also, if someone was receiving $100K per screenplay and a random dude is able to replicate that with a single prompt that produces semi-random words... they were getting too much.
Logged
No boom today. Boom tomorrow. There's always a boom tomorrow. Boom!!! Sooner or later.

McTraveller

  • Bay Watcher
  • This text isn't very personal.
    • View Profile

$100k/screenplay may sound like a lot - but how many does a typical writer sell per year? I honestly don't know, but even if it's 1/year, that's not that much for a specialized job.

My take on all the AI stuff, especially market predictions: if it doesn't take into account the impact that having AI has on the market itself, it's going to be "amusing."

Also, if AI is a "perfect market participant" then there won't be much room to make profit; in some sense, profit is an indicator of an inefficient market. In an efficient market, profit (in a dollar sense) is minimized while profit in a "value added" sense is maximized.  The two are the same only if money exactly matches value, and it clearly doesn't.  But maybe AI can resolve that?

What I mean is: If I can have more vacation time but still buy the same amount of goods and services, that's "value add" but doesn't necessarily increase the amount of money I receive. Q.E.D.
Logged
This product contains deoxyribonucleic acid which is known to the State of California to cause cancer, reproductive harm, and other health issues.

EuchreJack

  • Bay Watcher
  • Lord of Norderland - Lv 20 SKOOKUM ROC
    • View Profile

Hm, but does anyone remember the world pre-computers?
Word processing and in-office printing in particular.
Computers created as much work as they saved.
Suddenly, office workers were required to submit paperwork for everything.

Now, we have AI and 3D printers on the horizon. Who's gonna clean up after the work they generate?
...I think I am becoming a Luddite.

Strongpoint

  • Bay Watcher
    • View Profile

$100k/screenplay may sound like a lot - but how many does a typical writer sell per year? I honestly don't know, but even if it's 1/year, that's not that much for a specialized job.

I am not saying that professional screenwriters receive more than they earn. I am saying that if their unique, highly creative work can be replaced by an unskilled worker with an LLM tool, THEN they don't deserve $100K.

And looking at the level of writing of many modern shows and movies... Yeah, ChatGPT can produce equivalent generic crap. Nothing of value will be lost.
Logged
No boom today. Boom tomorrow. There's always a boom tomorrow. Boom!!! Sooner or later.

lemon10

  • Bay Watcher
  • Citrus Master
    • View Profile

My take on all the AI stuff, especially market predictions: if it doesn't take into account the impact that having AI has on the market itself, it's going to be "amusing."

Also, if AI is a "perfect market participant" then there won't be much room to make profit; in some sense, profit is an indicator of an inefficient market. In an efficient market, profit (in a dollar sense) is minimized while profit in a "value added" sense is maximized.  The two are the same only if money exactly matches value, and it clearly doesn't.  But maybe AI can resolve that?

What I mean is: If I can have more vacation time but still buy the same amount of goods and services, that's "value add" but doesn't necessarily increase the amount of money I receive. Q.E.D.
What will actually happen to the market if we get AGI, or AI advances enough to automate 50% of all jobs (with humanoid robots running around doing many of the physical ones), is pretty much impossible to know.
Plenty of normal economists seem to think the economy will operate pretty much the same if we reach AGI, but that's completely obvious nonsense. If they actually knew how the world worked *today* I might trust them more, but a lot of widely respected economic stuff even today (*cough* Chicago school *cough*) is just pseudoscientific nonsense.

I am pretty skeptical of profit not existing, but it may (or may not) take a very different form from how things currently work with companies and nationally enforced currencies. Value will certainly exist, but whether it actually gets to your average person is an entirely different question.

Will money be the driving force in the world as it is today? Will it be (as some have speculated) a Bitcoin-esque, blockchain-derived proof of compute showing how much compute you have contributed or used? Will it be GPTBucks and DisneyBucks and ClaudeBucks as AI functionally seizes control over all forms of intellectual production with its highly optimized processes?
Also, if AI is a "perfect market participant" then there won't be much room to make profit; in some sense, profit is an indicator of an inefficient market. In an efficient market, profit (in a dollar sense) is minimized while profit in a "value added" sense is maximized.  The two are the same only if money exactly matches value, and it clearly doesn't.  But maybe AI can resolve that?

What I mean is: If I can have more vacation time but still buy the same amount of goods and services, that's "value add" but doesn't necessarily increase the amount of money I receive. Q.E.D.
The idea of a truly efficient market assumes that monopoly power doesn't exist. Very few companies will have the ability to create and run these massive models, and they will be able to use that to generate absurd profits off the backs of those who can't create their own AIs and have to pay for access instead.
Now, I totally buy the idea of everyone using strong AI being able to get massive advantages and basically steal the gains from companies and individuals that don't own powerful AI, which could indeed leave most of the market without profit.
Quote from: Perfect market model requirements
Many buyers and sellers are present.
An identical product or service is bought and sold.
Low barriers to entry and exit are present.
All participants in the market have perfect information about the product or service being sold.
I really don't see why they would be perfect market participants, though; the world we are in doesn't meet the perfect market requirements (e.g. they require everyone to magically have perfect information), so AIs can't be perfect market participants either.
$100k/screenplay may sound like a lot - but how many does a typical writer sell per year? I honestly don't know, but even if it's 1/year, that's not that much for a specialized job.
https://www.ziprecruiter.com/Salaries/Film-Screenwriter-Salary--in-California
It's the average, which like other "average" human wages (especially in fields where quality matters) is driven up significantly by the high-end earners. Your median screenwriter makes significantly less than $100k per script/year.
I agree that even at $100k per year it's not outrageous at all, but pure text is also the type of thing AI is most efficient at and can produce vastly faster than humans, so even at $20k per script AI would still be more efficient, especially when you take the time and friction savings into account.
Yes, people using AI (not AI itself) will be more productive at certain tasks, requiring fewer man-hours per task. That is what new technologies do. By this metric, every new technology has replaced humans.
That's true for the next step or two; we are quite a ways away from AI outright replacing top writing talent rather than just being an aid to them.
Also, if someone was receiving $100K per screenplay and a random dude is able to replicate that with a single prompt that produces semi-random words... they were getting too much.
The idea that they just produce “semi-random words” belongs in the same bin as them being “just a next token predictor” in that it shows a profound lack of comprehension about how this technology works or what its current limits (much less future limits) are.
It's like saying that computers are "just" electric rocks or motors are "just" a tiny piece of spinning metal to dismiss what they can do.
I mean both *are* true in the most technical sense, but at the same time it shows that you really don't understand what the technological implications of those electric rocks and tiny spinning pieces of metal really are.
Logged
And with a mighty leap, the evil Conservative flies through the window, escaping our heroes once again!
Because the solution to not being able to control your dakka is MOAR DAKKA.

That's it. We've finally crossed over and become the nation of Da Orky Boyz.

Strongpoint

  • Bay Watcher
    • View Profile

Quote
It's like saying that computers are "just" electric rocks or motors are "just" a tiny piece of spinning metal to dismiss what they can do.
You insist on giving agency to tools. It's not what they can do but what can be done using them.

Quote
but at the same time it shows that you really don't understand what the technological implications of those electric rocks and tiny spinning pieces of metal really are.
No. I understand what they can do. I also understand what they CAN'T do.

AI overhypers sound like people saying "Jet engines will totally replace internal combustion engines!" post-WW2, even though it was obvious that it wouldn't be the case no matter how advanced they became. Even in aviation they have their limitations, and using them in cars is insanity; no level of improving the tech will change that.

So no, writing with AI will not replace writing with your imagination and knowledge. There are a great many things AIs can't do because they are tools: semi-random word generators with no abstract thinking whatsoever. You need that abstract thinking to make a consistent, complex plot, to make it interesting for people, to make it original enough, to tie it in with modern trends, and so on.
Logged
No boom today. Boom tomorrow. There's always a boom tomorrow. Boom!!! Sooner or later.

MaxTheFox

  • Bay Watcher
  • Лишь одна дорожка да на всей земле
    • View Profile

I haven't heard any further news about that AI software developer... frankly I suspect it was just a scam.

As for writing AI, I am a writer who also happens to interact with LLMs quite a bit. I frankly don't see it ever being useful to me as anything except a wall to bounce ideas off of. It's very hard to get an AI to write something creative, but at least it's possible to squeeze out some measure of creativity with heavy guidance. However it's outright impossible to get it to write what I want without going off the rails on its own tangent. That's why it's mostly useless to me as a writer.

Especially since I don't write as a source of income; I do it as a hobby. Why would I publish something I didn't write myself? And when I leave Russia I'll set up a Patreon or something. All in all I'm not worried: there's already a flood of complete slop made by humans (cough, most of the LitRPG genre, cough), and adding AI-generated slop to the corpus of books is like diluting cheap beer with water. It's mostly water in the first place anyway, so what does it change?

It'll probably replace those cheap airport romance novels but it's not like the target demographic would notice or care.
« Last Edit: April 16, 2024, 03:32:08 am by MaxTheFox »
Logged
Woe to those who make unjust laws, to those who issue oppressive decrees, to deprive the poor of their rights and withhold justice from the oppressed of my people, making widows their prey and robbing the fatherless. What will you do on the day of reckoning, when disaster comes from afar?

King Zultan

  • Bay Watcher
    • View Profile

lemon10 seems to be the only person in this thread who thinks AI will be anything more than an overhyped tool.

Quite frankly I don't see AI art, music, or writing overtaking anything done by humans anytime soon, as it seems to require a massive amount of effort to make it produce anything that isn't an abomination of some kind.

Also what is LitRPG?
Logged
The Lawyer opens a briefcase. It's full of lemons, the justice fruit only lawyers may touch.
Make sure not to step on any errant blood stains before we find our LIFE EXTINGUSHER.
but anyway, if you'll excuse me, I need to commit sebbaku.
Quote from: Leodanny
Can I have the sword when you’re done?

lemon10

  • Bay Watcher
  • Citrus Master
    • View Profile

Sure, as a hobby. People do stuff because they love doing it or find deeper value in it. Writing is often one of those things.

However, the vast majority of writing is writing people try to make money from (even you seem to be counting on Patreon money for your writing). People who don't care about output (and thus money) won't use AI tools as much. Those who do will be able to significantly increase output (yes, even if the AI is just used as a tool for things like making lists of names or bouncing ideas off of at any point in the day).
In the case of stuff like Chinese xianxia, it will probably even improve the quality, given the pace those are churned out at.
I haven't heard any further news about that AI software developer... frankly I suspect it was just a scam.
Ehh, probably? Hard to tell, honestly. Things in AI frequently get a release date or just a paper, then get tied up in security or other concerns and are delayed without a word for weeks or months, or never get released at all.

I have heard of other similar stuff, like AI that can write entire apps from a single well-designed prompt.

For the 'people who get it' out there:
How significant an improvement do you think GPT 5 will be? What areas do you think will see significant improvements?
---
lemon10 seems to be the only person in this tread that thinks AI will be anything more than an over hyped tool.
*Sigh* Yeah, fair enough. I really should stop worrying about all this stuff; it ain't healthy.
Logged
And with a mighty leap, the evil Conservative flies through the window, escaping our heroes once again!
Because the solution to not being able to control your dakka is MOAR DAKKA.

That's it. We've finally crossed over and become the nation of Da Orky Boyz.

lemon10

  • Bay Watcher
  • Citrus Master
    • View Profile

Oops double post.
« Last Edit: April 16, 2024, 04:14:07 am by lemon10 »
Logged
And with a mighty leap, the evil Conservative flies through the window, escaping our heroes once again!
Because the solution to not being able to control your dakka is MOAR DAKKA.

That's it. We've finally crossed over and become the nation of Da Orky Boyz.