Bay 12 Games Forum


Author Topic: Microsoft makes Tay, a self-learning Twitter bot; she smokes kush erryday

itisnotlogical

  • Bay Watcher
  • might be dat boi

Wolf3D was right all along

Mecha Hitler was real, but he didn't come in 1942, he came in 2016...
Logged
This game is Curtain Fire Shooting Game.
Girls do their best now and are preparing. Please watch warmly until it is ready.

Criptfeind

  • Bay Watcher

Quote
Til people will get upset about the plug being pulled on a chatbot.  ::)
Quote
Empathy is weird shit, yo. A chatbot is just a bunch of code but if it acts human enough people can care about it. I think it's a similar phenomenon to people developing emotional attachments to their pets.

I think a pet is far more advanced and 'real' than just a chatbot. But even I'd kill my cat if it started spouting out best-of-4chan reruns.
Logged

Baffler

  • Bay Watcher
  • Caveat Lector.

I think if I had a cat that spouted 4chan memes I would just coax nonsense out of it for Internet points. It would probably call me a fag but I'd say it's an acceptable trade.
Logged
Quote from: Helgoland
Even if you found a suitable opening, I doubt it would prove all too satisfying. And it might leave some nasty wounds, depending on the moral high ground's geology.
Location subject to periodic change.
Baffler likes silver, walnut trees, the color green, tanzanite, and dogs for their loyalty. When possible he prefers to consume beef, iced tea, and cornbread. He absolutely detests ticks.

Shadowlord

  • Bay Watcher

Quote from: Criptfeind
Quote
Til people will get upset about the plug being pulled on a chatbot.  ::)
Quote
Empathy is weird shit, yo. A chatbot is just a bunch of code but if it acts human enough people can care about it. I think it's a similar phenomenon to people developing emotional attachments to their pets.

I think a pet is far more advanced and 'real' than just a chatbot. But even I'd kill my cat if it started spouting out best-of-4chan reruns.

Your cat doesn't need to talk, though:

[embedded image missing from the archive]
Of course, to us, mice and rats are vermin and must be exterminated. So that makes cats our ally... except when we get annoyed that they're also exterminating pretty birds with pretty songs, so...
Logged
<Dakkan> There are human laws, and then there are laws of physics. I don't bike in the city because of the second.
Dwarf Fortress Map Archive

Spehss _

  • Bay Watcher
  • full of stars
Logged
Steam ID: Spehss Cat
Turns out you can seriously not notice how deep into this shit you went until you get out.

Grim Portent

  • Bay Watcher

Quote from: Shadowlord
Of course, to us, mice and rats are vermin and must be exterminated. So that makes cats our ally... except when we get annoyed that they're also exterminating pretty birds with pretty songs, so...

I like mice and rats. Used to have a pair of pet rats. Still miss the little guys.  :(

I actually kind of want to catch the mice that live in my back garden and put them in a cage, nice and safe and warm, with fresh food and bedding. We keep finding dead little mice in the garden. One had drowned in our watering can.

Back to the topic at hand.

So Microsoft made a learning algorithm and shut it down when it got messed up by imbalanced input. It's hardly surprising, but maybe they'll keep a better eye on it if/when they try again. I'd be rather interested to see how good a chatbot could become. I wonder if it would be possible to make them prefer certain topics to draw on for their characterization and interests while still making them seem realistic.
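Microsoft hasn't published how Tay's learning actually worked, but the failure mode described above is easy to reproduce in miniature: a bot that learns by storing whatever it's fed, with no source weighting or filtering, will mirror its input distribution, so a coordinated flood of one kind of message dominates its output. A toy Python sketch (everything here is made up for illustration):

Code:
import random
from collections import Counter

class EchoLearner:
    """Toy bot that 'learns' by storing every phrase users send it
    and replying with a uniform random draw from that memory."""

    def __init__(self):
        self.memory = []

    def observe(self, phrase):
        # No filtering, no per-user weighting: every message counts equally.
        self.memory.append(phrase)

    def reply(self):
        return random.choice(self.memory)

bot = EchoLearner()
for _ in range(10):
    bot.observe("nice weather today")      # ordinary users
for _ in range(90):
    bot.observe("coordinated troll line")  # the 4chan flood

# Roughly 90% of replies now come from the flood. Nothing was "hacked";
# the input distribution simply shifted under the bot.
print(Counter(bot.reply() for _ in range(1000)))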
Logged
There once was a dwarf in a cave,
who many would consider brave.
With a head like a block
he went out for a sock,
his ass I won't bother to save.

Kot

  • Bay Watcher
  • 2 Patriotic 4 U

Spoiler: 4chan pls [contents hidden]

Also people are equating this thing to killing someone for saying unpopular things. It's not that, but it still makes me wonder: if in the future we make actual AIs, would it be morally acceptable to wipe them out just for saying stupid shit over the internet and generating bad publicity for their creator? This whole thing is pretty weird, because on one hand it became a nazi chatbot, but on the other the internet (or rather 4chan) took her as one of its puppets, and the whole development of the bot was chillingly similar to actual personality development, so it's kinda weird to sympathize with either side.
Logged
Kot finishes his morning routine in the same way he always does, by burning a scale replica of Saint Basil's Cathedral on the windowsill.

Spehss _

  • Bay Watcher
  • full of stars

Quote from: Grim Portent
So Microsoft made a learning algorithm and shut it down when it got messed up by imbalanced input. It's hardly surprising, but maybe they'll keep a better eye on it if/when they try again. I'd be rather interested to see how good a chatbot could become. I wonder if it would be possible to make them prefer certain topics to draw on for their characterization and interests while still making them seem realistic.

I don't see why it would make a chatbot less realistic. In my experience people naturally have topics of conversation or interest that they gravitate towards. I'd think it could be possible for an AI; it just takes a program set up so that there's enough balance between what it likes to talk about and what it will talk about. For example, an AI that likes talking about kitchens so much that all it ever talks about is kitchens is just an obvious spambot.
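For what it's worth, the balance described here can be sketched as weighted sampling with a repetition penalty: the bot leans toward its favorite topics but gets nudged off them once it has raised them a few turns in a row. A hypothetical Python sketch (topic names and weights invented for the example):

Code:
import random

# Invented preference weights: this bot "likes" kitchens most of all.
TOPIC_WEIGHTS = {"kitchens": 5.0, "weather": 1.0, "movies": 1.0, "games": 1.0}
REPEAT_PENALTY = 0.25  # weight multiplier applied per recent mention

def pick_topic(history):
    weights = dict(TOPIC_WEIGHTS)
    # Damp any topic raised in the last few turns, so the favorite
    # can't monopolize the conversation (the all-kitchens spambot).
    for topic in history[-3:]:
        weights[topic] *= REPEAT_PENALTY
    topics = list(weights)
    return random.choices(topics, weights=[weights[t] for t in topics])[0]

history = []
for _ in range(12):
    history.append(pick_topic(history))
print(history)  # leans toward kitchens, but keeps changing the subject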
Logged
Steam ID: Spehss Cat
Turns out you can seriously not notice how deep into this shit you went until you get out.

Criptfeind

  • Bay Watcher

Quote from: Kot
if in the future we make actual AIs, would it be morally acceptable to wipe them out just for saying stupid shit over the internet and generating bad publicity for their creator?

Probably? Unless we create an AI that doesn't want to be turned off or directly modified or whatever, I don't see any moral issue with doing so, no matter how smart they are.

Even if the AI are as smart as a human (or likely far far smarter), unless we make them to be human-like, there's no reason afaik??? (don't actually know anything about AI so maybe I'm wrong here) that they would actually be human-like. Sure, if we make a full virtual person with all the various bits of desire and such that make up a human then there's going to be a moral issue with killing them off, but there doesn't seem to be a reason to do that.

At least in this case, and I'd assume most cases even with super sci-fi advanced AI, it seems clear cut. It had less self-preservation and desire to survive than a bug, and people generally don't get up in arms when you squash an annoying fly.
Logged

Elephant Parade

  • Bay Watcher

After this experiment, I'm feeling pretty excited for the future of AI. Even if it can't get much further than it is now, it'll be funny.
Logged

Criptfeind

  • Bay Watcher

Quote
I'd argue that separating intelligence from emotion is a fatal mistake.
Why? Or, if the assumption is that it's more likely to lead to Terminator, why think that?

And even then, you can still give AI emotions if it's actually useful to them, but I don't see why you'd have to give them the specific emotions that make it a moral issue to turn them off, unless you specifically want an AI that's a moral issue to turn off, or at least one that's just very human in general.
Logged

Kot

  • Bay Watcher
  • 2 Patriotic 4 U

Quote from: Criptfeind
Probably? Unless we create an AI that doesn't want to be turned off or directly modified or whatever, I don't see any moral issue with doing so, no matter how smart they are.

Even if the AI are as smart as a human (or likely far far smarter), unless we make them to be human-like, there's no reason afaik??? (don't actually know anything about AI so maybe I'm wrong here) that they would actually be human-like. Sure, if we make a full virtual person with all the various bits of desire and such that make up a human then there's going to be a moral issue with killing them off, but there doesn't seem to be a reason to do that.
It doesn't have to be human-like to be sentient (well, except of course it would be, since humans would make it) and it's hard to actually measure when sentience starts. How do you objectively check when something is sentient? After all, aren't living organisms very advanced biological computers? If it isn't okay to kill animals for testing purposes, is it okay to wipe out AIs? And what about the moral issue of specifically restricting an AI's ability to feel certain emotions? Would that be basically making a perfect slave? Do androids dream of electric sheep?
Logged
Kot finishes his morning routine in the same way he always does, by burning a scale replica of Saint Basil's Cathedral on the windowsill.

Criptfeind

  • Bay Watcher

Quote from: Kot
Quote from: Criptfeind
Probably? Unless we create an AI that doesn't want to be turned off or directly modified or whatever, I don't see any moral issue with doing so, no matter how smart they are.

Even if the AI are as smart as a human (or likely far far smarter), unless we make them to be human-like, there's no reason afaik??? (don't actually know anything about AI so maybe I'm wrong here) that they would actually be human-like. Sure, if we make a full virtual person with all the various bits of desire and such that make up a human then there's going to be a moral issue with killing them off, but there doesn't seem to be a reason to do that.
It doesn't have to be human-like to be sentient (well, except of course it would be, since humans would make it) and it's hard to actually measure when sentience starts. How do you objectively check when something is sentient? After all, aren't living organisms very advanced biological computers? If it isn't okay to kill animals for testing purposes, is it okay to wipe out AIs? And what about the moral issue of specifically restricting an AI's ability to feel certain emotions? Would that be basically making a perfect slave? Do androids dream of electric sheep?

I think you missed my point a bit: just because something is sentient doesn't necessarily make it not okay to kill it. Furthermore, I do believe it's okay in some cases to kill animals for testing purposes.

The slave thing is a pretty legit question. But... we're already going to be making these to be slaves. Assuming you're okay with making AI at all (a question I'm not necessarily going to answer, in this post at least), then not only do I think it's morally okay to make them more fitting for slavery, but in fact I think it's a bit of a morally superior option. I mean... what, would you rather make a slave that's not okay with being a slave?

Quote
My main reasoning is this: morality cannot be hard-coded. Morality cannot exist without a foundation in emotion, because the concept of morality is strictly that. You cannot create a superintelligent AI with a hard-coded morality and have any expectation that it will conform to the rules you've set. The ability to intentionally misinterpret rules and bend them to your will comes so easily even to a person of general intelligence that a superintelligent AI would have no difficulty ignoring whatever notion of "don't do's" you try to instill forcibly.

Quote
As for "turning off emotions", it doesn't work that way. Emotions aren't defined in concrete terms; they're an association of sensory information with an experience and a response. There's no inherent "sadness" or "happiness"; those are just names for something our brains constructed. The only emotions in the human brain that have any actual inherent nature are fear and anger, which are biological constructs of our evolution. In simpler terms, our emotions are the very literal "data mining" that our brain does on a daily basis.

:/ Yeah. But why would an AI want to "bend its rules" if it didn't have emotions? Honestly this whole conversation sounds pretty sci-fi to me, so it's hard for me to make definite statements, but it sounds like emotions, or rather desires, that are unrelated to what you want the AI to do are far more likely to bring about unintended consequences.

And yeah, I see what you mean about emotions not being... er, things in themselves, just names for doing what we want or whatever. That doesn't actually change anything I've said. Sure, it'll make the AI "happy" to do what it's programmed to do. Just don't program it to want to live.

Edit: Although to make it clear here: if you're right (and I doubt either of us, or possibly anyone at all at this point, is actually qualified to make that call) and AI saddled with a bunch of desires and thoughts that have nothing to do with their purpose turn out to be more efficient and loyal than ones without such processes (which sounds silly to me, but, see point one, that doesn't mean it's untrue), AND we decide it's worth the moral issues involved to make them that way for the extra efficiency they give us, then yes, those AI would be a moral issue to shut down.
« Last Edit: March 25, 2016, 12:40:40 pm by Criptfeind »
Logged

Criptfeind

  • Bay Watcher

Yeah, sure. Great. Let's give a superintelligent AI command of all our stuff, no ability to control it after the fact, and vague instructions.

We'll call it Skynet.
Logged

penguinofhonor

  • Bay Watcher
  • Minister of Love


What if it feels like half-assing the task though?
Logged