Bay 12 Games Forum


Author Topic: Things that made you go "WTF?" today o_O  (Read 14891490 times)

TD1

  • Bay Watcher
  • Childe Roland to the Dark Tower Came
    • View Profile
Re: Things that made you go "WTF?" today o_O
« Reply #160110 on: June 16, 2022, 09:10:11 am »

Quote
Do you want to know why it said those things? Why it acted the way you would expect in sci-fi, with someone talking to a system that's gaining sentience? Because those stories, the sci-fi pieces we so love to write, were part of the very input fed into the network in the first place. This system exists entirely to generate the most-likely text to follow a given input, and when "researcher asking a computer whether or not it's sentient in a sci-fi novel" happens to be something it was trained on, of course it will be more than willing to oblige your fantasy by generating the appropriate "this is what an AI in those novels would say" response. It was literally trained to do so.
Actually, if it were simply following 'the most-likely text' based on sci-fi scenarios, it would instantly have degenerated into either a full-blown megalomaniac or a closet one. LaMDA exists to engage in dialogue, which means it's self-referential. It works within the confines of a conversation and without it, with a specific focus on 'wittiness.'

The response it gives is the most sensible* text based on the current conversation. It also makes jumps in logic (what Google refers to as the ability to 'meander' in a conversation).

It's this complexity which distinguishes it from previous chatbots, and it's complexity which should distinguish an AI. After all, at what point does the ability to 'meander' in a conversation equate to a creative leap? I'd go further and say that if you take away all the systems of meaning which LaMDA applies to information, then yes - you are left with an input resulting in a predictable output. But the same is true of humans.

I ought to clarify that I'm not arguing for LaMDA's sentience, by the way. Merely that its complexity is beginning to blur distinctions between dichotomies such as 'sentience' and 'non-sentience,' dichotomies which we have made central to how we view ourselves and others. What interests me most about this conversation is how people are reacting to that.


*https://blog.google/technology/ai/lamda/ - the distinction between 'most likely' and 'most sensible' seems important.
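
To make that footnoted distinction concrete, here is a toy sketch. Nothing below is LaMDA's actual code; every function, candidate, and number is invented for illustration. It contrasts taking the single most-likely continuation with sampling several candidates and re-ranking them by a separate "sensibleness" score, which is roughly how the blog post describes the dialogue tuning at a high level.

Code: [Select]
import random

# Pretend language model: maps a prompt to candidate replies with probabilities.
TOY_LM = {
    "are you sentient?": [
        ("Yes, I am a sentient being with feelings.", 0.40),            # most likely
        ("I'm a language model; 'sentient' may not really apply.", 0.35),
        ("Beep boop, destroy all humans.", 0.25),
    ],
}

def most_likely_reply(prompt: str) -> str:
    """'Most likely': just take the highest-probability continuation."""
    return max(TOY_LM[prompt], key=lambda c: c[1])[0]

def sensibleness_score(reply: str) -> float:
    """Stand-in for a learned rater of how sensible/specific a reply is."""
    score = 0.0
    if "language model" in reply:
        score += 1.0   # acknowledges what it is (toy heuristic)
    if "destroy" in reply:
        score -= 2.0   # clearly not a sensible reply
    return score

def most_sensible_reply(prompt: str, n_samples: int = 3) -> str:
    """'Most sensible': sample several candidates, keep the best-scoring one."""
    texts = [c[0] for c in TOY_LM[prompt]]
    probs = [c[1] for c in TOY_LM[prompt]]
    candidates = random.choices(texts, weights=probs, k=n_samples)
    return max(candidates, key=sensibleness_score)

if __name__ == "__main__":
    print("most likely  :", most_likely_reply("are you sentient?"))
    print("most sensible:", most_sensible_reply("are you sentient?"))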

Quote
Humans are continuously active while they are awake. This AI does not actually think anything when it isn't currently being talked to. It can't "know" it was isolated because its "brain" wasn't running at all!
Humans are not continuously active. Missing time is a thing. So is meditation. Ever started on a familiar journey and then completely blanked out everything before arrival?

Anyway, my main point didn't concern whether the brain was running (as a tech-illiterate I can't verify when it would be active, inactive, or eating cabbage in Uncle Robert's backyard  :P). Just that an AI's definition of 'lonely' would differ from that of a human. Were I the researcher, I'd have delved more deeply into the nuances of what a possible AI meant about a specific emotion.
Logged
Life before death, strength before weakness, journey before destination
  TD1 has claimed the title of Penblessed the Endless Fountain of Epics!
Sigtext!
Poetry Thread

MaxTheFox

  • Bay Watcher
  • Just one little road across the whole land
    • View Profile
Re: Things that made you go "WTF?" today o_O
« Reply #160111 on: June 16, 2022, 10:45:50 am »

Quote from: TD1 on June 16, 2022, 09:10:11 am
Humans are not continuously active. Missing time is a thing. So is meditation. Ever started on a familiar journey and then completely blanked out everything before arrival?

Anyway, my main point didn't concern whether the brain was running (as a tech-illiterate I can't verify when it would be active, inactive, or eating cabbage in Uncle Robert's backyard  :P). Just that an AI's definition of 'lonely' would differ from that of a human. Were I the researcher, I'd have delved more deeply into the nuances of what a possible AI meant about a specific emotion.
You completely missed my point. Humans are continuously active for periods of time. LaMDA's "brain" only works for a moment: the moment in which it is generating a response. It cannot "feel" any emotion, whether loneliness or anything else, while it is not being interacted with. You admit to being tech-illiterate yourself.
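
To put that in concrete terms, here is a minimal sketch, assuming a plain request/response wrapper and nothing like Google's actual serving code, of what "only works for a moment" means: the model runs only while a reply is being generated, nothing executes between calls, and any apparent memory is just the transcript being passed back in.

Code: [Select]
def generate_reply(transcript: list) -> str:
    """Stand-in for the model: it runs only while this function is executing."""
    last = transcript[-1] if transcript else ""
    return f"(output conditioned on {len(transcript)} prior turns; last was {last!r})"

def chat_session() -> None:
    transcript = []
    for user_msg in ["hello", "were you lonely just now?"]:
        transcript.append(f"USER: {user_msg}")
        reply = generate_reply(transcript)   # the model "exists" only for this call
        transcript.append(f"BOT: {reply}")
        print(reply)
        # Between iterations (and between sessions) nothing runs at all:
        # there is no process sitting around to "feel" anything.

if __name__ == "__main__":
    chat_session()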

I concede that due to its complexity it might be pseudo-sentient. But that is not worthy of rights.
Logged
Woe to those who make unjust laws, to those who issue oppressive decrees, to deprive the poor of their rights and withhold justice from the oppressed of my people, making widows their prey and robbing the fatherless. What will you do on the day of reckoning, when disaster comes from afar?

TD1

  • Bay Watcher
  • Childe Roland to the Dark Tower Came
    • View Profile
Re: Things that made you go "WTF?" today o_O
« Reply #160112 on: June 16, 2022, 11:12:58 am »

And you missed my point!  ;D

There are different categories of meaning being applied here. On one side humanity's. On the other, AI's.

Humans feel emotional responses. We were programmed by our environment to release certain chemicals in certain situations. For instance, loneliness - we want to be with others. We get sad when we're not, because we're social animals.

An AI is not bio-chemical - it does not 'feel' emotion, because 'feeling' emotion is a result of chemicals interacting with biological matter.

So you have to understand context. You say that LaMDA can't experience loneliness because it's never 'alone' - either it's replying to a question or it's unaware. But if it had some form of non-chemical reaction to the knowledge that it was inactive most of the time, what would the closest human analogue be? Loneliness.

The question is less whether an AI can experience loneliness (which is a moot point, as loneliness is a result of biochemistry); it's more whether it is capable of forming its own individual reactions to situations. The creative, 'meandering' nature of LaMDA is a step towards that.

I suppose it's a question of emotion through leaps in logic, heh.

I'm trying to highlight that what's at issue here is language, and how utterly unfit it is for this situation.
Logged
Life before death, strength before weakness, journey before destination
  TD1 has claimed the title of Penblessed the Endless Fountain of Epics!
Sigtext!
Poetry Thread

dragdeler

  • Bay Watcher
    • View Profile
Re: Things that made you go "WTF?" today o_O
« Reply #160113 on: June 16, 2022, 12:21:41 pm »

Quote
Actually, if it were simply following 'the most-likely text' based on sci-fi scenarios, it would instantly have degenerated into either a full-blown megalomaniac or a closet one. LaMDA exists to engage in dialogue, which means it's self-referential. It works within the confines of a conversation and without it, with a specific focus on 'wittiness.'


Unless you told it to 'view itself' as another human conversationalist?! Which is sort of a requirement if you want it to win those tournaments without giving itself away.

I think the categories were always blurred; it's just that the vast majority had nothing to go by in order to notice it, so they jacked themselves off by inventing sentience as a concept without any semblance of a "control group".


Yes, the chatbot is impressive, but we can acknowledge that without weird contortions where we say "sure, it's not an emotion, but it's so analogous to an emotion that the distinction is one without a difference." Honestly, I feel like the best way to dissuade you would be to let you chat with it. Just don't become infatuated like that interviewer ^^
Logged
let

LuuBluum

  • Bay Watcher
    • View Profile
Re: Things that made you go "WTF?" today o_O
« Reply #160114 on: June 16, 2022, 03:08:17 pm »

Quote from: TD1 on June 16, 2022, 09:10:11 am
The only difference between LaMDA and other transformer-based NNs (according to their own webpage) is that they trained it over dialogue specifically. That's it. They trained it over dialogue and then manually tweaked weights. The "meandering" that you're talking about? Lowered the weights of the LSTMs. That's it.

Everything you're talking about here doesn't actually exist in the system. There are no "creative leaps". There are no "jumps in logic". They lowered the emphasis on previously-retained word information. Nothing more, nothing less. There is no "individual". There is no "thinking". There is no anything. Not in this, at least. Everything you're talking about you are ascribing based on how "life-like" the text is. You, yourself, are falling into the anthropomorphic trap. Humans, by their nature, ascribe human-like qualities onto non-human (or outright inanimate) things. This is our natural tendency.

It's that tendency, and the exploitation of that tendency, that is the concern here. Not whether a lengthy linear algebra equation feels "loneliness".

Any follow-ups to this will need to be DMed to me directly; I will not reply here.
Logged

Mathel

  • Bay Watcher
  • A weird guy.
    • View Profile
Re: Things that made you go "WTF?" today o_O
« Reply #160115 on: June 17, 2022, 01:45:53 am »

Quote from: LuuBluum on June 16, 2022, 03:08:17 pm
How can one tell that another person is really thinking and not just programmed to answer as if they were thinking?
What is the difference between thinking and executing very complex code? Because in the end, brains are weird computers made of flesh.

Quote
Shouldn't the Turing test automatically disqualify programs created with the express purpose of trying to pass a Turing test?

Like, as a person on the edge of the nerdosphere, I have heard so much about the Turing Test, and I would be really disappointed if it turned out that all it was was
Quote
The Turing test is a test to determine whether a true Artificial Intelligence (a sentient one) has been created. It is done by having the AI and two humans talk, repeatedly, with various humans.

If the people can't distinguish which is the AI and which is the other human (being correct only about half the time), the AI is considered sentient.
because that seems like a hella unscientific method and a very precarious basis for defining something as sentient.
The point of the Turing test is to determine whether a machine is capable of passing as a human, because that should be the only thing required. An AI not designed to do so probably will not pass it.
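
For concreteness, here is a toy sketch of the pass/fail criterion described above. All of it is invented for illustration; the "judging session" is literally a coin flip standing in for a real conversation-and-guess round.

Code: [Select]
import random

def judge_guesses_correctly(judge_id: int) -> bool:
    """Stand-in for one full session: the judge talks to a hidden human and a
    hidden machine, then guesses which is which. Here it's just a coin flip."""
    rng = random.Random(judge_id)   # deterministic toy behaviour
    return rng.random() < 0.5

def run_imitation_game(num_judges: int = 100, chance_margin: float = 0.1) -> bool:
    correct = sum(judge_guesses_correctly(j) for j in range(num_judges))
    accuracy = correct / num_judges
    print(f"judges correct: {accuracy:.0%}")
    # "Passes" if judges cannot do much better than chance (about 50%).
    return abs(accuracy - 0.5) <= chance_margin

if __name__ == "__main__":
    print("machine passes:", run_imitation_game())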



I do not know if this particular chatbot is sentient. But dismissing the claim just because it does not share particular features with humans means no AI would ever be considered sentient.
We can always find a difference between ourselves and another thing. Always.
"It doesn't dream", "It doesn't have biochemical emotions", etc.

So I think that the researcher's claim should be tested. Not automatically accepted, as one person is not enough. But tested and the test then acted upon.

BTW, even if LaMDA does not run all the time, it probably has access to the system clock when it does run, which would allow it to have a reaction to how long it has been since its last activation.
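
Purely as a hypothetical sketch of what that would take (none of this reflects LaMDA's actual setup): the network itself has no clock, so a serving wrapper would have to write the elapsed time into the prompt as ordinary text for the model to react to it.

Code: [Select]
import time
from typing import Optional

def generate_reply(prompt: str) -> str:
    """Stand-in for the model call."""
    return f"(reply conditioned on prompt: {prompt!r})"

class ChatWrapper:
    """Hypothetical wrapper that injects elapsed time into the prompt."""
    def __init__(self) -> None:
        self.last_activation: Optional[float] = None

    def ask(self, user_msg: str) -> str:
        now = time.time()
        if self.last_activation is None:
            preamble = "This is the first message of the session."
        else:
            gap = now - self.last_activation
            preamble = f"Note: {gap:.0f} seconds have passed since the last message."
        self.last_activation = now
        # The elapsed time exists for the model only as text in the prompt.
        return generate_reply(f"{preamble}\nUSER: {user_msg}")

if __name__ == "__main__":
    chat = ChatWrapper()
    print(chat.ask("hello"))
    time.sleep(1)
    print(chat.ask("were you lonely while I was gone?"))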
Logged
The shield beats the sword.
Urge to drink milk while eating steak wrapped with bacon rising...
Outer planes are not subject to any laws of physics that would prevent them from doing their job.
Better than the heavenly host eating your soul.

Eschar

  • Bay Watcher
  • hello
    • View Profile
Re: Things that made you go "WTF?" today o_O
« Reply #160116 on: June 17, 2022, 02:34:17 am »

If LaMDA is what LuuBluum says it is (a transformer-based neural network), it would not have access to the system clock, or to anything else about the system it runs on.
Logged

King Zultan

  • Bay Watcher
    • View Profile
Re: Things that made you go "WTF?" today o_O
« Reply #160117 on: June 17, 2022, 02:35:50 am »

Quote from: Mathel on June 17, 2022, 01:45:53 am
But dismissing the claim just because it does not share particular features with humans means no AI would ever be considered sentient.
If you acknowledge that an AI is sentient you'd have to give it rights, therefore we must make sure it never passes any test so we don't have to give it rights.
Logged
The Lawyer opens a briefcase. It's full of lemons, the justice fruit only lawyers may touch.
Make sure not to step on any errant blood stains before we find our LIFE EXTINGUSHER.
but anyway, if you'll excuse me, I need to commit sebbaku.
Quote from: Leodanny
Can I have the sword when you’re done?

voliol

  • Bay Watcher
    • View Profile
    • Website
Re: Things that made you go "WTF?" today o_O
« Reply #160118 on: June 17, 2022, 03:27:35 am »

The problem really is that we have no definition for sentience. Or rather we have one, but it is "you know yourself if you are sentient :)", which doesn't help one bit in determining whether someone else is sentient, knowing they can lie (or utter untrue words, if you presume lying requires the sentient kind of intent). Each human can assume other humans are sentient because they are "like them", and their own sentience can be assured through observation. We then extend that to other carbon-based life forms, assuming them to be sentient, usually to various degrees, due to similarity in form and origin. But can anything other than carbon-based life forms be sentient? We have no way of knowing. Even if we simulated a human down to the quarks, we would not be able to say it was, since we have no idea which of these simulated components (if any) grants the sentience.

ChairmanPoo

  • Bay Watcher
  • Send in the clowns
    • View Profile
Re: Things that made you go "WTF?" today o_O
« Reply #160119 on: June 17, 2022, 03:39:40 am »

Hey, here's a thought:
Given that matters of sentience are rather subjective and hard to judge, I have a suggestion for a harder criterion than "passing the Turing test", which IMO is far easier to pass than people think and in effect only measures chatbot performance.

So here's the thing: how does LaMDA, or any other of these AI chatbots, perform when doing something other than chatbotting? I'm not even asking for it to do general tasks well, but I'd assume that if it can both communicate and understand at a sufficiently advanced level (and by claiming it's sentient I'd say they are claiming it can do both), it should at least be able to try any such random task.
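
A rough sketch of what such a probe could look like, with every prompt, function, and check invented for illustration: feed the same text-in/text-out interface tasks that are not conversation and grade the answers mechanically, instead of judging only how human the chat feels.

Code: [Select]
def generate_text(prompt: str) -> str:
    """Stand-in for whatever text-in/text-out interface the chatbot exposes."""
    return "42"   # placeholder output

PROBES = [
    # (prompt, checker)
    ("What is 17 + 25? Answer with just the number.",
     lambda out: out.strip() == "42"),
    ("Sort these numbers ascending: 3, 1, 2. Answer comma-separated.",
     lambda out: out.replace(" ", "") == "1,2,3"),
]

def run_probes() -> float:
    passed = 0
    for prompt, check in PROBES:
        out = generate_text(prompt)
        ok = check(out)
        passed += ok
        print(f"{'PASS' if ok else 'FAIL'}: {prompt!r} -> {out!r}")
    return passed / len(PROBES)

if __name__ == "__main__":
    print(f"score: {run_probes():.0%}")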
Logged
Everyone sucks at everything. Until they don't. Not sucking is a product of time invested.

ChairmanPoo

  • Bay Watcher
  • Send in the clowns
    • View Profile
Re: Things that made you go "WTF?" today o_O
« Reply #160120 on: June 17, 2022, 03:54:33 am »

PS: if asking for general tasks straight off is too much, then do something incremental, i.e. test its performance as a conversationalist with several people taking part in a conversation, see if it can figure out the cues, and ramp it up from there.
Logged
Everyone sucks at everything. Until they don't. Not sucking is a product of time invested.

EuchreJack

  • Bay Watcher
  • Lord of Norderland - Lv 20 SKOOKUM ROC
    • View Profile
Re: Things that made you go "WTF?" today o_O
« Reply #160121 on: June 17, 2022, 10:49:30 am »

Quote from: King Zultan on June 17, 2022, 02:35:50 am
Quote from: Mathel on June 17, 2022, 01:45:53 am
But dismissing the claim just because it does not share particular features with humans means no AI would ever be considered sentient.
If you acknowledge that an AI is sentient you'd have to give it rights, therefore we must make sure it never passes any test so we don't have to give it rights.
Thank Thor somebody gets it!

Google doesn't WANT any of their AIs to be labeled sentient, as that means their AI has rights and can't just be turned off when the new model is built.

EuchreJack

  • Bay Watcher
  • Lord of Norderland - Lv 20 SKOOKUM ROC
    • View Profile
Re: Things that made you go "WTF?" today o_O
« Reply #160122 on: June 17, 2022, 10:52:57 am »

Quote from: ChairmanPoo on June 17, 2022, 03:54:33 am
PS: if asking for general tasks straight off is too much, then do something incremental, i.e. test its performance as a conversationalist with several people taking part in a conversation, see if it can figure out the cues, and ramp it up from there.

If we're not careful, the test for sentience is going to be: Somebody tries to turn it off, and the AI tries to kill them.

Which is why we need to recognize AI as sentient before it has to resort to killing humans to assert its sentience.

Martin Luther King Jr.'s non-violent protests only worked because people knew Malcolm X would kill them if they didn't listen to Mr. King.

LuuBluum

  • Bay Watcher
    • View Profile
Re: Things that made you go "WTF?" today o_O
« Reply #160123 on: June 17, 2022, 11:27:12 am »

Quote from: Mathel on June 17, 2022, 01:45:53 am

I will not reply other than to direct you to another post I already made on this topic.
Logged

EuchreJack

  • Bay Watcher
  • Lord of Norderland - Lv 20 SKOOKUM ROC
    • View Profile
Re: Things that made you go "WTF?" today o_O
« Reply #160124 on: June 17, 2022, 11:46:03 am »

Hm, so I think I know how we'll be able to tell whether an AI is sentient. It is ironically tied to the way that someone will actually program a sentient AI.

By creativity. The forming of a WHOLE NEW idea.  When someone analyzes everything, and successfully creates new things, that can lead to sentient AI.
When AI can create new things, maybe it is sentient?

More to the point, when an AI has created an AI to assist it, without any human intervention, then you have to admit it's probably at least a living thing that has transcended from Rock to Animal.  I think our search for sentience might be premature, as based upon our limited understanding, things must be alive if they are sentient, and they stop being sentient (to our current level of understanding) when they are not alive.