Bay 12 Games Forum

Poll

Reality, The Universe and the World. Which will save us from AI?

Reality
- 13 (65%)
Universe
- 4 (20%)
The World
- 3 (15%)

Total Members Voted: 20


Pages: 1 ... 42 43 [44] 45 46 ... 50

Author Topic: What will save us from AI? Reality, the Universe or The World? Place your bet.  (Read 49570 times)

Rolan7

  • Bay Watcher
  • [GUE'VESA][BONECARN]
    • View Profile

Eh?  You're an artist, just not visual art.
It's definitely already threatening your field, it's just harder to... (gosh what pun do I go with) visualize the problem?  AI-mashed drawings present the problem more dramatically and accessibly.

This is a threat to creative fields, i.e. art, in general.
Logged
She/they
No justice: no peace.
Quote from: Fallen London, one Unthinkable Hope
This one didn't want to be who they was. On the Surface – it was a dull, unconsidered sadness. But everything changed. Which implied everything could change.

Strongpoint

  • Bay Watcher
    • View Profile

Wow. A twitter/reddit-style rant. Mention it and you'll summon it.

Yes, I am happy that talentless and uncreative folk who produced shit will lose their undeserved status as some elite category of great talents doing magical things, when in reality they used simple techniques from youtube tutorials, digital tools (many of which are also AI-powered :D), referencing, and outright-theft fan fiction.

PS. Scraping their crap did little good for the model unless it was properly tagged as "amateur, bad anatomy, bad hands, generic, pornographic, etc.". Then it was useful for negative prompting.
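As a minimal sketch of what that negative-prompting workflow looks like (the helper and tag list below are illustrative assumptions, not taken from any real model card; the commented-out call shows roughly where the strings would go in the Hugging Face diffusers API):

```python
# Negative prompting: undesired-quality tags go into a separate
# "negative_prompt" string so the sampler steers away from them.
NEGATIVE_TAGS = ["amateur", "bad anatomy", "bad hands", "generic", "pornographic"]

def build_prompts(subject, extra_negatives=None):
    """Return (prompt, negative_prompt) in the comma-separated tag style
    most Stable Diffusion front-ends expect."""
    negatives = NEGATIVE_TAGS + list(extra_negatives or [])
    return subject, ", ".join(negatives)

prompt, negative = build_prompts("a watercolor fox in a forest", ["blurry"])
# With diffusers this would be used roughly as:
#   image = pipe(prompt=prompt, negative_prompt=negative).images[0]
```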
No boom today. Boom tomorrow. There's always a boom tomorrow. Boom!!! Sooner or later.

MaxTheFox

  • Bay Watcher
  • Only one little road across the whole land
    • View Profile

It's definitely already threatening your field, it's just harder to... (gosh what pun do I go with) visualize the problem?  AI-mashed drawings present the problem more dramatically and accessibly.

This is a threat to creative fields, i.e. art, in general.
I mean, not really? There is already a deluge of utterly forgettable writing on every webnovel site. It wasn't easy to get publicity even before AI, because people cranking out pages upon pages of isekai LitRPGs with cultivation will massively outproduce you, and for some reason people flock to them. And most of those are human-made. God, I can't wait for that fad to pass.

The real problem with AI writing is when it's passed off as guides, manuals, etc. Now that's something I despise because it leads to actual, direct harm like people getting poisoned because ChatGPT told them a plant isn't toxic when it is.

(btw I don't share Strongpoint's enthusiasm; I'm more skeptical of it being a major threat in the first place than passing it off as good. I just don't feel threatened. Scraping? I mean, do I really care; it'll be lost and diluted to nonexistence amid the millions of cookie-cutter LitRPGs in the dataset anyway.)
« Last Edit: June 05, 2024, 07:48:06 am by MaxTheFox »
Woe to those who make unjust laws, to those who issue oppressive decrees, to deprive the poor of their rights and withhold justice from the oppressed of my people, making widows their prey and robbing the fatherless. What will you do on the day of reckoning, when disaster comes from afar?

Rolan7

  • Bay Watcher
  • [GUE'VESA][BONECARN]
    • View Profile

Wow. A twitter/reddit-style rant. Mention it and you'll summon it.

Yes, I am happy that talentless and uncreative folk who produced shit will lose their undeserved status as some elite category of great talents doing magical things, when in reality they used simple techniques from youtube tutorials, digital tools (many of which are also AI-powered :D), referencing, and outright-theft fan fiction.

PS. Scraping their crap did little good for the model unless it was properly tagged as "amateur, bad anatomy, bad hands, generic, pornographic, etc.". Then it was useful for negative prompting.
I'm seeing a lot of (I assume?) vitriol at me as a person, and CERTAINLY at artists, which is exactly what I'm pointing out.  Thanks.

If their work wasn't valuable then tech bros who gave up after half of one tutorial wouldn't be so gleeful about devaluing it.  Through legally-not-yet-recognized theft.

Also you may have missed some sarcasm.  The fascist idea of Great Men who emerge fully formed on the scene is actually nonsense?  Every artist has to practice (this is why AI bros gave up on actual creation and got jealous and larcenous, they are definitionally lazy).  By flooding the market with stolen low-quality art, you choke out a necessary market for inexpensive amateur art.

Then you don't develop the Great Artists who can draw the double-Ds just right, and then the larceny algorithm runs out of material, but tech bros (and... "people who consider art and artists decadent") ironically don't consider future consequences if they can fuck over people they don't like.
She/they
No justice: no peace.
Quote from: Fallen London, one Unthinkable Hope
This one didn't want to be who they was. On the Surface – it was a dull, unconsidered sadness. But everything changed. Which implied everything could change.

Strongpoint

  • Bay Watcher
    • View Profile

Quote
  Every artist has to practice (this is why AI bros gave up on actual creation and got jealous and larcenous, they are definitionally lazy).  By flooding the market with stolen low-quality art, you choke out a necessary market for inexpensive amateur art.

1) There is no theft; analyzing copyrighted data and using the results of that analysis is just not theft.

2) It is of varying quality. Yes, easy-to-use tools tend to let people flood the internet with low-effort shit, but there are many wonderful things made with the use of generative AI.

3) As I said, the impact of images by those 'artists' on models is insignificant and, most likely, net negative. Most of the training data is either public domain or copyrighted by huge corporations - Disney, Warner Media, etc. For some reason, they don't sue anyone...
Note that, let's say, a Twi'lek girl in someone's art also belongs to Disney, because it is a copyrighted race.

4) Customers do win from having an alternative source, and from being able to do the simplest stuff themselves. They also win from the fact that good, rational artists can do more using those great new AI tools. Also, it is amazing when you don't have to buy stuff from people who consider you "jealous and larcenous". Despising people for merely not possessing your skills (especially when your own skills in the field are modest) is appalling behavior.

5) "Fan fiction masters"* with no creativity will never become truly professional artists no matter how much experience they amass. AI is just a great new excuse for why they will fail.

* I have nothing against fan fiction or other derivative works... when they actually add something. A fan-fiction sequel to a book - cool. An image of a character in an unusual style or with a strong message - cool. But most visual fan fiction doesn't go beyond simple copying. It is literally "Look! I made another porn version of *insert copyrighted character*! I am so much better than those plebs who have no time or motivation to learn the basics of drawing".


Quote
and then the larceny algorithm runs out of material, but tech bros (and... "people who consider art and artists decadent") ironically don't consider future consequences if they can fuck over people they don't like.

1) That is not how AI training works...
2) Existing models aren't going anywhere, and neither will they change. Stable Diffusion on my PC will stay the same if the internet disappears tomorrow.
3) Synthetic data is not inherently bad. Proper tags matter. Note that it isn't even raw output; it is filtered output.
4) Randomly scraping data from the internet is the easiest and cheapest way to get training data. It is also the worst one.
No boom today. Boom tomorrow. There's always a boom tomorrow. Boom!!! Sooner or later.

Maximum Spin

  • Bay Watcher
  • [OPPOSED_TO_LIFE] [GOES_TO_ELEVEN]
    • View Profile

Talent is a real thing that exists and current models of AI cannot copy it, definitionally.
The funny thing about making some incredibly dumb testosterone-poisoned argument about "fascists" who don't like "decadent" art is that AI cannot even come up with the bottom-tier obnoxious artbro stuff like putting up a urinal and calling it a sculpture. It certainly doesn't threaten talentless money-laundering-enablers like Banksy whose work is supported by a deep social patronage network (not to mention, exists physically in real life, which is another thing AI can't do).
AI's crime, instead, is killing the lie told to an ocean of former deviantart youth that they could support themselves as "real artists" with perfectly generic, stylistically dead digital art, if only the bad capitalism would go away. That lie was bolstered from above by the myth that, once in a while, someone would be "randomly" (actually by more social patronage networks, usually Bay-area-limited) elevated from their ranks to do soulless corporate art or an intentionally hideous indie game, and from below by a buzzing hive of meaningless financial activity where they all donate to each other, producing the false appearance of real demand while nobody nets anything because no money flows in.
In fact, that last bit is very often the real fear expressed by such people: because they can't tell the difference, since they are not very good at art, they are afraid they might be "scammed" into spending money on AI art and thus draining the fake market of capital.

There is no problem of a "market for amateur art" being flooded by AI, because AI's problem isn't that it's amateur. Those who actually have talent, even if unpolished, are still recognizable as such, still make nice things and get popular, and have plenty of opportunity to practice anyway, because if you like art then practice is its own reward and you couldn't be stopped from doing it if someone tried. Not having artistic ability is like being tone-deaf, though... you just can't tell internally. There's nothing wrong with that, it's just the way some people are built, but it gets frustrating when they develop a whole mythos pretending that there's no such thing as talent and all art is merely either amateur or professional, with no other dimension but how much practice you've done. That's just not how it is.

Rolan7

  • Bay Watcher
  • [GUE'VESA][BONECARN]
    • View Profile

Who are you replying to with that?  Some "testosterone-poisoned" man who claimed talent doesn't exist?  He seems to have made you incoherently upset.

I thought maybe I was overstating my case, but here you are saying that the talent of Great Artists shines through their entire career.  Even when they're amateurs, these ubermensch face no financial threat from the market flood, because their amateur art is so much better that nobody would ever choose "AI art" instead of buying their services.

That's not a convincing argument, but the invective is strong.  Maybe stick to what you're good at.
She/they
No justice: no peace.
Quote from: Fallen London, one Unthinkable Hope
This one didn't want to be who they was. On the Surface – it was a dull, unconsidered sadness. But everything changed. Which implied everything could change.

McTraveller

  • Bay Watcher
  • This text isn't very personal.
    • View Profile

I gained a snippet of wisdom today at work that definitely applies here:

"Cognitive automation is not cognitive intelligence."

LLMs have cognitive automation, but they don't have intelligence.

A good example: you can teach any three-year-old what a stop sign is (even if they can't read!) with one* "sample image."  And this will be robust to all sorts of mangled, defaced, or partial stop signs.  Current state-of-the-art for ML takes thousands (millions?) of samples to learn a stop sign, and they are still foiled by pieces of masking tape on the sign, or the sign being 10% obscured by a fencepost or something.  Similarly you can teach children with not much effort that eating rocks is not a reasonable behavior, nor is making pasta with gasoline sauce.

We (as in, humanity) clearly don't have the correct models for intelligence.

*Perhaps not one, but definitely on the order of 10 or less.
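The stop-sign point can be made concrete with a toy check: a one-shot pixel-space matcher (the cheapest analogue of learning from a single sample image) degrades as soon as part of the sign is covered, because it compares raw pixels rather than applying a concept like "partially obstructed object". The arrays below are made up purely for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
sign = rng.random((16, 16))   # stand-in for the single stop-sign training image

def cosine(a, b):
    """Cosine similarity between two images, flattened to vectors."""
    a, b = a.ravel(), b.ravel()
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

occluded = sign.copy()
occluded[:, :3] = 0.0         # a "fencepost" covering roughly 20% of the sign

print(cosine(sign, sign))     # 1.0: perfect match on the training image
print(cosine(sign, occluded)) # noticeably lower, though the concept is unchanged
```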
This product contains deoxyribonucleic acid which is known to the State of California to cause cancer, reproductive harm, and other health issues.

MaxTheFox

  • Bay Watcher
  • Only one little road across the whole land
    • View Profile

I gained a snippet of wisdom today at work that definitely applies here:

"Cognitive automation is not cognitive intelligence."

LLMs have cognitive automation, but they don't have intelligence.

A good example: you can teach any three-year-old what a stop sign is (even if they can't read!) with one* "sample image."  And this will be robust to all sorts of mangled, defaced, or partial stop signs.  Current state-of-the-art for ML takes thousands (millions?) of samples to learn a stop sign, and they are still foiled by pieces of masking tape on the sign, or the sign being 10% obscured by a fencepost or something.  Similarly you can teach children with not much effort that eating rocks is not a reasonable behavior, nor is making pasta with gasoline sauce.

We (as in, humanity) clearly don't have the correct models for intelligence.

*Perhaps not one, but definitely on the order of 10 or less.
In my setting, the GPT paradigm and similar are considered pseudosapient AI. In the future it's used for menial labor, because you don't have to pay pseudosap robots (mules). It can hold up a conversation, but it feels soulless, and anything outside its purview makes it do nonsensical things. By the nature of PSAI, someone needs to oversee mules in all but the simplest of tasks. It's not a person and can never be a person.

True sapient AI is a black box that I included by authorial fiat. It's hardware, not software, and is essentially an electronic equivalent of a human brain: about the same size, with somewhat higher, but not superhuman, intelligence. It has a soul, unlike mules, and talks basically like humans do. There are several paradigms, but the thing with them is that they don't scale well beyond human intelligence. And they're expensive to produce and maintain. Humans, or rather specialized genemods, are often cheaper.
« Last Edit: June 05, 2024, 07:05:40 pm by MaxTheFox »
Woe to those who make unjust laws, to those who issue oppressive decrees, to deprive the poor of their rights and withhold justice from the oppressed of my people, making widows their prey and robbing the fatherless. What will you do on the day of reckoning, when disaster comes from afar?

lemon10

  • Bay Watcher
  • Citrus Master
    • View Profile

A good example: you can teach any three-year-old what a stop sign is (even if they can't read!) with one* "sample image."  And this will be robust to all sorts of mangled, defaced, or partial stop signs.  Current state-of-the-art for ML takes thousands (millions?) of samples to learn a stop sign, and they are still foiled by pieces of masking tape on the sign, or the sign being 10% obscured by a fencepost or something.  Similarly you can teach children with not much effort that eating rocks is not a reasonable behavior, nor is making pasta with gasoline sauce.
The issue is that that child didn't come out of nowhere, they are the product of billions of years of adversarial training. Yes, after that training (and multiple years of fine tuning) it only takes a single example to teach them some things, but to get there required massive amounts of time and information.

But also yes, if you make up a sign, show it to an AI, tell it what it is, then ask about it, it will totally get it after seeing it once.
It's true their image recognition isn't as good but, again, they have been training for a hell of a lot less time than humans.
LLMs have cognitive automation, but they don't have intelligence.
Quote from: Yann LeCun
General intelligence, artificial or natural, does not exist.
Cats, dogs, humans and all animals have specialized intelligence.
They have different collections of skills and an ability to acquire new ones quickly.
Much of animal and human intelligence is acquired through observation of -- and interaction with -- the physical world.
That's the kind of learning that we need to reproduce in machines before we can get anywhere close to human-level AI.
There is no such thing as intelligence. Or rather there is, but it's made up of a vast number of different categories, in the same way that charisma and agility are.
 
LLMs have most of the puzzle pieces needed for "intelligence" as understood by humans: they can generalize, they can plan, they can learn information, they have theory of mind, they have object recognition, etc. But there are a lot of different pieces, and they don't have them all yet.
They are still lacking many things, and fundamental breakthroughs are indeed needed: breakthroughs like those that took us from GPT-2 -> GPT-3 -> GPT-3.5 -> GPT-4 -> GPT-4o, or GPT-4 -> Gemini, or DALL-E -> Sora.

To me the most notable things AI is lacking are 1) the ability to do long-term tasks, 2) long-term memory, 3) the ability to learn fundamentally new skills.
IMHO none of these are impossible to fix or require a fundamentally new model.
In the end I suspect #3 is the greatest roadblock, but we will of course see.
Talent is a real thing that exists and current models of AI cannot copy it, definitionally.
Definitionally why? Honestly, there is a pretty solid argument that AI is nothing *but* talent and intuition, because in the end transformers are just a sophisticated way to predict text.
Quote
With GPT-4o, we trained a single new model end-to-end across text, vision, and audio, meaning that all inputs and outputs are processed by the same neural network. Because GPT-4o is our first model combining all of these modalities, we are still just scratching the surface of exploring what the model can do and its limitations.
Incidentally, the "text" part of that framing is already wrong and outdated: GPT-4o is inherently multimodal, and thus transformers have shown the ability to take multiple different inputs (senses) and convert them into multiple different outputs, not just text.
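The "sophisticated way to predict text" framing can be caricatured in a few lines: a bigram model that always emits the most frequent successor of the previous token. Transformers replace these raw counts with learned, context-dependent weights over whole sequences, but the training objective - guess the next token - has the same shape. The toy corpus is invented for illustration:

```python
from collections import Counter, defaultdict

corpus = "the cat sat on the mat the cat ate the rat".split()

# Count which token follows which across the corpus.
successors = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    successors[prev][nxt] += 1

def predict_next(token):
    """Greedy 'decoding': emit the most frequent successor seen in training."""
    return successors[token].most_common(1)[0][0]

print(predict_next("the"))  # "cat": seen twice after "the", vs once for the rest
```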
And with a mighty leap, the evil Conservative flies through the window, escaping our heroes once again!
Because the solution to not being able to control your dakka is MOAR DAKKA.

That's it. We've finally crossed over and become the nation of Da Orky Boyz.

Strongpoint

  • Bay Watcher
    • View Profile

Quote
The issue is that that child didn't come out of nowhere, they are the product of billions of years of adversarial training. Yes, after that training (and multiple years of fine tuning) it only takes a single example to teach them some things, but to get there required massive amounts of time and information.
Hardware and software are different. Yes, our brains are a product of an incredibly long process, but we don't really inherit knowledge.

Quote
GPT-4o is inherently multimodal, and thus transformers have shown the ability to work on multiple different inputs (senses) and convert them into multiple different outputs, not just text.

I sincerely doubt it is more than sound2text and img2text attached to a sophisticated way to predict text. In the end, it still works with text.
No boom today. Boom tomorrow. There's always a boom tomorrow. Boom!!! Sooner or later.

Maximum Spin

  • Bay Watcher
  • [OPPOSED_TO_LIFE] [GOES_TO_ELEVEN]
    • View Profile

There is no such thing as intelligence. Or rather there is, but it's made up of a vast number of different categories, in the same way that charisma and agility are.
 
LLMs have most of the puzzle pieces needed for "intelligence" as understood by humans: they can generalize, they can plan, they can learn information, they have theory of mind, they have object recognition, etc. But there are a lot of different pieces, and they don't have them all yet.
This is a statement of faith which contradicts all the evidence of current cognitive research. General intelligence does appear to be a thing and it seems to have evolved from spatial navigation (which is why some birds and dolphins have way more of it than we would expect). By the way, it is the ability to draw abstract conclusions from technically insufficient data by generalizing across broad data classes, like McTraveller's stop sign example. We, not to mention birds, inherently recognize that any kind of small obstruction can be in front of an object without changing what it is because we have already learned the concept of being obstructed, and can apply that to any new shape, like a stop sign, without needing a single domain-specific example. Teaching this to a neural network would require vastly more power and computational effort than you (or anyone) can comprehend, and is impossible with the training methods we have currently developed.
Quote
Definitionally why? Honestly, there is a pretty solid argument that AI is nothing *but* talent and intuition.
Because predictive models - whether text-based or multimodal - can only approach the mean of the corpus. That's what they do. They are inherently built to generate the "common denominator". The only way to overcome this is a different model.
I doubt you'll believe this, but it's true and it's pretty immediately apparent to anyone who isn't an AI. :P
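The "approach the mean of the corpus" claim has a textbook illustration for the squared-error case: when the data has two distinct modes, the single prediction that minimizes the loss is their average, which matches no actual example. (Real generative models sample rather than emit one mean value, so treat this as a caricature of the tendency, not a proof.)

```python
import numpy as np

# Two "styles" in the corpus: half the examples are -1, half are +1.
data = np.array([-1.0] * 50 + [1.0] * 50)

# Mean squared error of a constant prediction c against the whole corpus.
losses = {c: float(np.mean((data - c) ** 2)) for c in (-1.0, 0.0, 1.0)}
best = min(losses, key=losses.get)

print(best)          # 0.0: the corpus mean wins...
print(losses[best])  # ...even though 0.0 matches no training example
```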
« Last Edit: June 06, 2024, 11:33:39 am by Maximum Spin »

eerr

  • Bay Watcher
    • View Profile

Quote
The issue is that that child didn't come out of nowhere, they are the product of billions of years of adversarial training. Yes, after that training (and multiple years of fine tuning) it only takes a single example to teach them some things, but to get there required massive amounts of time and information.
Hardware and software are different. Yes, our brains are a product of an incredibly long process, but we don't really inherit knowledge.

Quote
GPT-4o is inherently multimodal, and thus transformers have shown the ability to work on multiple different inputs (senses) and convert them into multiple different outputs, not just text.

I sincerely doubt it is more than sound2text and img2text attached to a sophisticated way to predict text. In the end, it still works with text.
We do inherit knowledge though. People come into this world with concepts that help them perceive the world.
Humans have concepts that are pretty universal, and we have them before we learn to process exactly what they mean for everything else.
The young naturally develop these concepts into the difference between man and woman, fairness between people, fears.
Not all people start with the same concepts, and they lead to vastly different conclusions later in life.
But those concepts are there.

Starver

  • Bay Watcher
    • View Profile

We do inherit knowledge though. People come into this world with concepts that help them perceive the world.
Humans have concepts that are pretty universal, and we have them before we learn to process exactly what they mean for everything else.
The substrate of the newborn brain is receptive to experiences from which it might learn, but... Knowledge?

The experiments with kittens that exposed them to only horizontal lines or only vertical lines for perhaps the first few months of (ex-utero) development found that they were unable to properly see vertical lines/horizontal lines thereafter. They just were not equipped for it. Feral children (and some particularly neglected ones) have had very limited ability to develop language once 'saved' from their human(ity)-less fate. There is very little that we could identify as 'innate' knowledge, beyond the bare minimum required for the instinctual/autonomous system to bootstrap and support itself in the most basic ways possible.


Quote
The young naturally develop these concepts into the difference between man and woman, fairness between people, fears.
Not all people start with the same concepts, and they lead to vastly different conclusions later in life.
But those concepts are there.
I think much (all?) of that is explainable by nurture, not nature.


In the context of AI, clearly the most elegant programming is useless on the most incapable hardware. As such the hardware (or wetware) must have the throughput, must have enough (of the right kinds of) storage, and must present itself to the world with suitable I/O connections. But even the most exact analogue of the human brain likely must still be configured and trained to become more than an electronic pachinko machine randomly firing off internal signals that accomplish no purpose.

The eons of 'development' gone into the human genome, and epigenome, being expressed, (more or less) advantageously sets up the hopefully valid 'platform' for the mind to work in the right way, but there's no clear "idiome" that does not rely upon entirely separate parental (or other carer) input to prime the newborn with all the experiences it needs to become a reasonably adept member of society. (And society could well develop the general population's 'normal' attitudes regarding distinctions of men and women, fairness, philias/phobias. Or generate the conditions to explain why some individuals may display diverging attitudes on such concepts, in various ways.)

MaxTheFox

  • Bay Watcher
  • Only one little road across the whole land
    • View Profile

I actually do believe that general intelligence is a concrete thing, and is synonymous with consciousness and sapience. It is something that emerges from a feedback loop of conscious perception and the possibility to self-reflect and think in the abstract. Current AI paradigms lack all or most of those things. As Strongpoint said, it's still just a predictive model, unlike a brain that can work with images and (very importantly) internal thoughts natively, independent from text.

I don't believe in philosophical zombies as a sensical construct. If something acts sapient in all situations, it actually is. If something looks like a duck, quacks like a duck, and flies like a duck, it is a duck. The trouble is that AI doesn't act sapient except on a very surface level. It's an animatronic of a duck: it could fool someone from a distance, or as a still image, but up close and in motion it's blatantly not the real thing.
« Last Edit: June 06, 2024, 08:37:15 pm by MaxTheFox »
Woe to those who make unjust laws, to those who issue oppressive decrees, to deprive the poor of their rights and withhold justice from the oppressed of my people, making widows their prey and robbing the fatherless. What will you do on the day of reckoning, when disaster comes from afar?