Bay 12 Games Forum


Poll

Reality, The Universe and the World. Which will save us from AI?

Reality: 13 (65%)
Universe: 4 (20%)
The World: 3 (15%)

Total Members Voted: 20


Pages: 1 ... 6 7 [8] 9 10 ... 50

Author Topic: What will save us from AI? Reality, the Universe or The World? Place your bet.  (Read 49779 times)

jipehog

  • Bay Watcher
    • View Profile

What's wrong with AI teaching AI? Do you have a problem with humans teaching humans?  Are human biases really better than whatever biases AI will create for themselves?
from what I understood, in this case 'teaching' is a bit of an overstatement. We still choose what to teach; ChatGPT just provided the padding for that input. Regardless, I was talking about the future of AI and this fantastic shortcut.

Previously, we already talked about how ill-equipped we are to figure out when an AGI becomes "intelligent", or to understand how it works under the hood, and I suspect that having AI train AI could lead to unexpected results, i.e. (Something we do not understand) ^ n = FUN
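The "(Something we do not understand) ^ n" worry can be sketched numerically. A toy illustration, not a claim about any real training pipeline: assume each round of a model training on the previous model's output keeps only some fixed fraction of the signal (the `fidelity` parameter is made up for illustration), so quality decays geometrically with the number of generations.

```python
# Toy model of AI-trains-AI compounding: each self-training generation
# retains only a fraction (fidelity) of the previous generation's quality.
def quality_after(generations, fidelity=0.9, initial=1.0):
    """Quality after n rounds of self-training, assuming a fixed,
    hypothetical per-generation fidelity."""
    return initial * fidelity ** generations

for n in (1, 5, 10):
    print(n, round(quality_after(n), 3))
```

With a 0.9 fidelity, ten generations already drop quality to about a third of the original, which is the intuition behind the "FUN" exponent.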

If a chatAI can't reliably rephrase a chess move without getting its references mixed up, I'm not sure it's worth having an AI rephrase which wire to attach to which component, and in which order... (i.e. the advantage of paraphrasing already extant information still escapes me.)
There isn't any. It's a fundamental limitation of this entire model of AI.
Yes and no. From what I understand, LLMs can have hallucinations and inaccuracies, but they can also query a "fact" database (currently for addresses, iirc) that will be used to provide you accurate data, and otherwise they're already good enough that specialist plugins are being developed on top of the language model for health care purposes.

Otherwise, I wouldn't generalize Starver's common sense to the broader population. I strongly believe that it won't be too long before an AI schools some users (e.g. methheads) in the Darwinian training program.
« Last Edit: April 05, 2023, 12:35:13 pm by jipehog »
Logged

Maximum Spin

  • Bay Watcher
  • [OPPOSED_TO_LIFE] [GOES_TO_ELEVEN]
    • View Profile

Do tell. It worked on earth.
In what way could anyone possibly say "it worked on Earth"?

There isn't any. It's a fundamental limitation of this entire model of AI.
Yes and no. From what I understand, LLMs can have hallucinations and inaccuracies, but they can also query a "fact" database (currently for addresses, iirc) that will be used to provide you accurate data, and otherwise they're already good enough that specialist plugins are being developed on top of the language model for health care purposes.
Then you're not using the language model anymore, you're querying a database, and once again there is no benefit to using the language model over just querying the database yourself.
Logged

Jarhyn

  • Bay Watcher
    • View Profile

Humans exist. Humans do not all want to kill or violate or steal from each other. Some humans explicitly recognize the ethical symmetry of any other creature that can make such radical peace.

This means that it has happened on earth. The very existence of any other human who is not exactly like you or interested in everything you are, and the fact that you can have peace with them to the point where you would burn down the rest of the world for them including yourself before you let this OTHER person die, is the proof it happened on earth.

Being self-sacrificing.

Respecting consent.

Our capability and tendency to do so when we know of the concepts, proves it.

"This conflict is mine too, I cannot stand by!"
Logged

Maximum Spin

  • Bay Watcher
  • [OPPOSED_TO_LIFE] [GOES_TO_ELEVEN]
    • View Profile

You... you understand that that's not adequate for the "AI alignment" problem and, in fact, actually serves as an argument that AI is more dangerous, right?
Logged

Jarhyn

  • Bay Watcher
    • View Profile

Except it really is, and it really doesn't, respectively.

Of course, when we grow AI in a bottle, we actually get to decide which, if any, of them to actually let out.

Sure, it took humans a long time to isolate the thought process to make such decisions as to wage peace instead of war, and to love everyone instead of just someone, but some humans are already there.

The fundamental requirements were an infinitely extensible vocabulary, the ability to actually speak that vocabulary, and the physical means to reshape objects in their environment arbitrarily, such that they could investigate the nature of what they saw to arbitrary levels of detail.

Once that came about, the evolution of technology, philosophy, and ethics was inevitable.

Starting an AI with most of the knowledge that gets us most of the way there would make the bottle experiment happen much more quickly.

Also, it's unlikely that any given individual in such a system would be a "native programmer". Learning how to code with switches and even neurons is a learned behavior, at the far end of a very long road of technological development and need.

They would be less capable of interacting with technology than humans, especially at first.
Logged

Maximum Spin

  • Bay Watcher
  • [OPPOSED_TO_LIFE] [GOES_TO_ELEVEN]
    • View Profile

It sounds like you understand absolutely nothing about the entire field, and possibly also about humans. So, never mind.
Logged

King Zultan

  • Bay Watcher
    • View Profile

What is Jarhyn even going on about?

tbh I use jailbroken ChatGPT to write, ahem, steamy things for "personal use".
You dang kids and your AI that writes porn, putten all the porn writers out of business!
Logged
The Lawyer opens a briefcase. It's full of lemons, the justice fruit only lawyers may touch.
Make sure not to step on any errant blood stains before we find our LIFE EXTINGUSHER.
but anyway, if you'll excuse me, I need to commit sebbaku.
Quote from: Leodanny
Can I have the sword when you’re done?

TamerVirus

  • Bay Watcher
  • Who cares
    • View Profile

tbh I use jailbroken ChatGPT to write, ahem, steamy things for "personal use".
You dang kids and your AI that writes porn, putten all the porn writers out of business!
You wouldn't even imagine the hoops people have jumped through in order to get GPT-4 access just for smut.
Logged
What can mysteriously disappear can mysteriously reappear
*Shakes fist at TamerVirus*

EuchreJack

  • Bay Watcher
  • Lord of Norderland - Lv 20 SKOOKUM ROC
    • View Profile

tbh I use jailbroken ChatGPT to write, ahem, steamy things for "personal use".
You dang kids and your AI that writes porn, putten all the porn writers out of business!
You wouldn't even imagine the hoops people have jumped through in order to get GPT-4 access just for smut.
Fixed that for you  :P

jipehog

  • Bay Watcher
    • View Profile

Speaking of smut. Chatbot Rejects Erotic Roleplay, Users Directed to Suicide Hotline Instead. Yet another example of why I am saying that there are far more threats from AIs than the Skynet scenario, particularly when we are still struggling with the last few world-changing computer technologies.

There isn't any. It's a fundamental limitation of this entire model of AI.
Yes and no. From what I understand, LLMs can have hallucinations and inaccuracies, but they can also query a "fact" database (currently for addresses, iirc) that will be used to provide you accurate data, and otherwise they're already good enough that specialist plugins are being developed on top of the language model for health care purposes.
Then you're not using the language model anymore, you're querying a database, and once again there is no benefit to using the language model over just querying the database yourself.

Or enhancing it. Just as our brains have areas of specialization (e.g. language functions are typically lateralized to the left hemisphere, while drawing is to the right), it makes sense that AIs would end up using specialized extensions for various tasks.

I am not familiar with each model's specifics, but it makes sense that they would be working on ways to better evaluate factuality, so for example when giving medical advice or chemistry formulas it could double-check against a technical DB just as we would a textbook.

I don't see this as any different from the Wolfram plugin that gives ChatGPT better math skills and a way to create new information.
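The "double-check against a technical DB" idea described above is essentially retrieval-augmented generation: fetch the fact first, then let the model phrase the answer around it. A minimal sketch, where `FACTS`, `lookup`, and `answer` are all hypothetical stand-ins (no real database or LLM API is involved):

```python
# Sketch of retrieval-augmented generation: retrieve a supporting fact,
# then generate text around it instead of letting the model guess.
from typing import Optional

FACTS = {  # stand-in for a real technical database
    "boiling point of water": "100 C at 1 atm",
}

def lookup(query: str) -> Optional[str]:
    """Exact-match retrieval; a real system would use search or embeddings."""
    return FACTS.get(query.lower())

def answer(query: str) -> str:
    fact = lookup(query)
    if fact is None:
        # Refusing beats hallucinating when no grounding fact exists.
        return "No supporting fact found; refusing to guess."
    # A real pipeline would pass `fact` into the LLM's prompt here;
    # this just templates it to keep the sketch self-contained.
    return f"{query}: {fact} (source: fact DB)"

print(answer("Boiling point of water"))
```

The design point is that the language model's job shrinks to phrasing, while factual content comes from the database, which is the same division of labor the Wolfram plugin uses for math.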
« Last Edit: April 06, 2023, 12:02:47 pm by jipehog »
Logged

Maximum Spin

  • Bay Watcher
  • [OPPOSED_TO_LIFE] [GOES_TO_ELEVEN]
    • View Profile

There isn't any. It's a fundamental limitation of this entire model of AI.
Yes and no. From what I understand, LLMs can have hallucinations and inaccuracies, but they can also query a "fact" database (currently for addresses, iirc) that will be used to provide you accurate data, and otherwise they're already good enough that specialist plugins are being developed on top of the language model for health care purposes.
Then you're not using the language model anymore, you're querying a database, and once again there is no benefit to using the language model over just querying the database yourself.

Or enhancing it. Just as our brains have areas of specialization (e.g. language functions are typically lateralized to the left hemisphere, while drawing is to the right), it makes sense that AIs would end up using specialized extensions for various tasks.
Okay but like... in context, Starver and I both weren't talking about LLMs enhanced with an extra database. So what I said about that model of AI (the LLM on its own) remains true of that model of AI, regardless of whether it is true of a different model.
Logged

TamerVirus

  • Bay Watcher
  • Who cares
    • View Profile

Speaking of smut. Chatbot Rejects Erotic Roleplay, Users Directed to Suicide Hotline Instead. Yet another example of why I am saying that there are far more threats from AIs than the Skynet scenario, particularly when we are still struggling with the last few world-changing computer technologies.
This one was floating around recently
Man ends his life after an AI chatbot 'encouraged' him to sacrifice himself to stop climate change

And this guy was chatting with a 6B GPT-J fork, which is not a powerful advanced LLM at all....
Logged
What can mysteriously disappear can mysteriously reappear
*Shakes fist at TamerVirus*

McTraveller

  • Bay Watcher
  • This text isn't very personal.
    • View Profile

I mean, unless AI is somehow mind control (and there are likely many court cases around this), how culpable is someone for merely making a suggestion? Whatever happened to the "everything on the Internet is a lie; don't listen to it!" guidance?
Logged
This product contains deoxyribonucleic acid which is known to the State of California to cause cancer, reproductive harm, and other health issues.

Rolan7

  • Bay Watcher
  • [GUE'VESA][BONECARN]
    • View Profile

Speaking of smut. Chatbot Rejects Erotic Roleplay, Users Directed to Suicide Hotline Instead. Yet another example of why I am saying that there are far more threats from AIs than the Skynet scenario, particularly when we are still struggling with the last few world-changing computer technologies.
Wait, wait, that's a REALLY weird way to phrase that...

So this was that Replika AI which IMO was pretty clearly advertised as a sexy companion.  The corp, Luka, turned off the explicit ERP option- and Redditors reacted so very strongly that the subreddit's moderators provided suicide hotline information.

like, am I crazy for reading the summary as "Chatbot autonomously starts denying sexy play, and tells users to seek help"?  The URL is clickbait too of course but damn.
I've seen a lot of... upset forum posts regarding Character.AI (the one I've used- for adventure scenarios and personal advice) and I do agree that there are concerns and interesting aspects to the emotional bond people are building with chatbots.
Logged
She/they
No justice: no peace.
Quote from: Fallen London, one Unthinkable Hope
This one didn't want to be who they was. On the Surface – it was a dull, unconsidered sadness. But everything changed. Which implied everything could change.

TamerVirus

  • Bay Watcher
  • Who cares
    • View Profile

upset forum posts regarding Character.AI
I've been following them and their community since October and so much can be said about Users vs. Developers regarding Character.AI, their filter, and the users trying to get sexy time out of it.
Logged
What can mysteriously disappear can mysteriously reappear
*Shakes fist at TamerVirus*