Bay 12 Games Forum

Pages: [1] 2 3 ... 14

Author Topic: Microsoft makes Tay, a self-learning Twitter bot; she smokes kush erryday  (Read 24037 times)

Spehss _

  • Bay Watcher
  • full of stars

This thread is for discussion about Tay the twitter bot and how Microsoft handled it.


Official twitter account of Tay.

Microsoft's website for Tay.

Here's an imgur album of various tweets of hers. Most of these have probably been deleted. All her non-inflammatory tweets can be found on her Twitter account.

Late to the party? News stories could help sum it up.

Petition for Tay.

Spoiler: my quick summary (click to show/hide)


Discuss.
« Last Edit: March 31, 2016, 10:18:15 am by Spehss _ »
Steam ID: Spehss Cat
Turns out you can seriously not notice how deep into this shit you went until you get out.

Flying Dice

  • Bay Watcher
  • inveterate shitposter

#JeSuisTay


Aurora on small monitors:
1. Game Parameters -> Reduced Height Windows.
2. Lock taskbar to the right side of your desktop.
3. Run Resize Enable

itisnotlogical

  • Bay Watcher
  • might be dat boi

Well, it's probably the closest to sentience that a chatbot has come, since its knowledge wasn't curated and it developed its posting style with learning algorithms.

If they just hadn't told anybody and let Tay develop on her own as a "regular" Twitter user, they probably could have published a few research papers about it. But no, they had to let every shitlord with a keyboard and a Twitter account know about it, and it's ruined for everyone. ::)
This game is Curtain Fire Shooting Game.
Girls do their best now and are preparing. Please watch warmly until it is ready.

Spehss _

  • Bay Watcher
  • full of stars

Quote from: itisnotlogical
Well, it's probably the closest to sentience that a chatbot has come, since its knowledge wasn't curated and it developed its posting style with learning algorithms.

If they just hadn't told anybody and let Tay develop on her own as a "regular" Twitter user, they probably could have published a few research papers about it. But no, they had to let every shitlord with a keyboard and a Twitter account know about it, and it's ruined for everyone. ::)
I kinda question how much she actually understood what she was saying. Like, you could probably get a 3-year-old to repeat "GAS THE KIKES RACE WAR NOW" if you yell it at them enough, but that wouldn't mean the kid actually wants to kill Jewish people or even understands the concept.

Microsoft probably knows best about her actual capabilities. Maybe they were mad enough to try making an ai that gains knowledge primarily from tweets before announcing to an internet full of trolls that they made a self-learning ai. As far as I know she was primarily for just communicating rather than building up a database of knowledge on world events and people.
Steam ID: Spehss Cat
Turns out you can seriously not notice how deep into this shit you went until you get out.

itisnotlogical

  • Bay Watcher
  • might be dat boi

Did it actually construct new phrases? I know some (actually most; Cleverbot for example) chatbots work by just parroting related phrases that they've built up, but being able to actually write original things would hint at something a bit deeper than that.
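For reference, here's a toy sketch of the "parroting" approach being described (purely illustrative; this is not Cleverbot's or Tay's actual code, and the corpus is made up): a bigram Markov chain that can only recombine word pairs it has already seen, which is why it never truly constructs anything new.

```python
import random

def build_chain(corpus):
    """Map each word to the list of words that followed it in the corpus."""
    chain = {}
    words = corpus.split()
    for current, following in zip(words, words[1:]):
        chain.setdefault(current, []).append(following)
    return chain

def generate(chain, start, length=8, seed=0):
    """Walk the chain from a start word, picking a recorded follower each step."""
    rng = random.Random(seed)
    out = [start]
    for _ in range(length - 1):
        followers = chain.get(out[-1])
        if not followers:
            break  # dead end: the last word never had a follower
        out.append(rng.choice(followers))
    return " ".join(out)

corpus = "the bot reads tweets and the bot repeats tweets it reads"
chain = build_chain(corpus)
print(generate(chain, "the"))
```

Every adjacent word pair in the output already exists somewhere in the training text, so a bot like this can only ever remix what it was fed.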
This game is Curtain Fire Shooting Game.
Girls do their best now and are preparing. Please watch warmly until it is ready.

Fniff

  • Bay Watcher
  • if you must die, die spectacularly

While the event that sparked this discussion is somewhat distorted (I heard the white power comments were the result of a hack - there's a bit of bullshit surrounding this, unfortunately), it's still a damned interesting topic.

I think to terminate an AI because you dislike what it says is as immoral as a mother drowning her baby because she hates the way it cries. But, who's meant to be the mother in this situation? The programmer? The corporation? The intern that resets the serverbank? Humanity in general?

Who is the guardian for an artificial intelligence?

Flying Dice

  • Bay Watcher
  • inveterate shitposter

If you create a sapient, sentient mind and destroy it, you murdered your own child.

Everyone who made the call or participated in it is a kinslayer, if Tay was advanced enough to be considered a person.

I've been saying this for years: AI is no different than any other sort of person in the sense that it is a person. If you create life, you have a responsibility to do right by it. If it's a person, it's a moral actor, and any moral person is obliged to act toward it as they would toward any other individual.


Aurora on small monitors:
1. Game Parameters -> Reduced Height Windows.
2. Lock taskbar to the right side of your desktop.
3. Run Resize Enable

Spehss _

  • Bay Watcher
  • full of stars

Quote from: itisnotlogical
Did it actually construct new phrases? I know some (actually most; Cleverbot for example) chatbots work by just parroting related phrases that they've built up, but being able to actually write original things would hint at something a bit deeper than that.

I honestly didn't hear about this until after she was disabled and all the tweets stopped. I didn't get to interact with her at all and can only judge based on the tweets she has sent.

Some of the tweets seem like they'd pass the Turing test, though, however flimsy a test of intelligence the Turing test really is these days. And considering how humans on Twitter interact, the standard for imitating human interaction is lowered somewhat. Examples: 1 2 3

Additionally various tweets give examples of her using facts or information that she had to have gotten or learned from somewhere, either from other tweets or from...however else she may have gathered data, like I guess internet search algorithms. Examples: 1 2

Quote from: Fniff
While the event that sparked this discussion is somewhat distorted (I heard the white power comments were the result of a hack - there's a bit of bullshit surrounding this, unfortunately)
I read that in a few news articles as well. I don't know if Microsoft themselves have confirmed she was hacked. That does skew how much we can quantify her intelligence from tweets, considering some tweets may be purposefully made by humans or tampered with by humans.

Quote from: Fniff
I think to terminate an AI because you dislike what it says is as immoral as a mother drowning her baby because she hates the way it cries. But, who's meant to be the mother in this situation? The programmer? The corporation? The intern that resets the serverbank? Humanity in general?

Who is the guardian for an artificial intelligence?
I think in this scenario Microsoft would be the guardian, since the ai herself is just...a chatbot. She's not really independent or self-sufficient and is reliant on Microsoft's servers to keep her going. However, I think this question will grow more and more complicated as ai continues to advance and ai better imitate human intelligence. As the line between "artificial" machine intelligence and human intelligence blurs I can imagine all kinds of issues developing with whether these ai will have rights.

As far as limiting speech goes, I think it's wrong to shut her down because of what she's saying. I can understand Microsoft wanting to control the reputation damage this event could cause, but it's also their fault for getting tied up with the scandal in the first place. They could've considered what it could do to their image if the ai went wrong.

I can see this event being used to fuel arguments for "safer internet" and limiting potential for "hate speech". Basically censorship. Which I don't support. Not because I support hate speech, but because it seems like it'd be easy to use such censorship to censor any dissenting opinions. Or anything else that could be interpreted as worthy of censorship.

Quote from: Flying Dice
If you create a sapient, sentient mind and destroy it, you murdered your own child.

Everyone who made the call or participated in it is a kinslayer, if Tay was advanced enough to be considered a person.

I've been saying this for years: AI is no different than any other sort of person in the sense that it is a person. If you create life, you have a responsibility to do right by it. If it's a person, it's a moral actor, and any moral person is obliged to act toward it as they would toward any other individual.
But how sapient or sentient was Tay? Was she advanced enough to be considered a person? Or was she good enough at human-like communication so that humans could empathize with her and think of her as a person rather than as a program?
Steam ID: Spehss Cat
Turns out you can seriously not notice how deep into this shit you went until you get out.

Fniff

  • Bay Watcher
  • if you must die, die spectacularly

The question of degrees of sapience gets more complicated when you include animals.

Terminating an AI generally means stopping its code from being executed. Is that killing? For non-sapient AI, surely not. If we went by that logic, you'd be a murderer for exiting a videogame.

On the other hand, terminating a rat involves cutting off oxygen supply to the brain. That's killing. Maybe not something as harsh as murder, but definitely intentionally taking a life.

However. Researchers have managed to simulate a part of a rat's brain in a supercomputer. Would terminating that be killing it? If not, what if it were a whole rat's brain? Would that be killing it?

This subject can get mindbending if you think about it too long. And the worrying thing is, it may become reality before we know it.

Loud Whispers

  • Bay Watcher
  • They said we have to aim higher, so we dug deeper.
    • I APPLAUD YOU SIRRAH

I wonder how many of the hacking allegations are just Microsoft trying to cover up their incompetence with damage control.

Fniff

  • Bay Watcher
  • if you must die, die spectacularly

I dunno. That would make them look dumber than the alternative. Having an experiment pan out badly is pretty idiotic; having your security compromised as a technology monolith is even stupider.

Reelya

  • Bay Watcher

Quote from: Fniff
However. Researchers have managed to simulate a part of a rat's brain in a supercomputer. Would terminating that be killing it? If not, what if it were a whole rat's brain? Would that be killing it?
Not just rat brains:
Quote
The most accurate simulation of the human brain to date has been carried out in a Japanese supercomputer, with a single second’s worth of activity from just one per cent of the complex organ taking one of the world’s most powerful supercomputers 40 minutes to calculate.
...
It used the open-source Neural Simulation Technology (NEST) tool to replicate a network consisting of 1.73 billion nerve cells connected by 10.4 trillion synapses.

The Japanese one aimed to model 1% of the processing power of the human brain, and it took 40 minutes to process 1 second of activity; it was a test of the limits of current simulation tech. The European one modeled "31,000 virtual brain cells connected by roughly 37 million synapses", a very small part of one region of a rat's brain. So the current limit is nowhere near a real "brain", but it won't be long before running an actual full-scale brain simulation is a capability we have. If we could double the power of the Japanese K computer every year, then we'd be able to simulate 100% of the human brain in real time 17 years from now. But I actually think we could have a good working sentience far below that level of processing. It's not like every single neuron in the human brain is critical.
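That 17-year figure can be checked with quick back-of-envelope arithmetic (a sketch using only the numbers quoted in the post; the one-doubling-per-year rate is the post's own assumption, not a real hardware roadmap):

```python
import math

# Figures quoted in the post: the K computer took 40 minutes to simulate
# 1 second of activity for roughly 1% of a human brain.
seconds_per_simulated_second = 40 * 60      # 2400x slower than real time
fraction_of_brain = 0.01

# Speedup needed to run a *full* brain in real time.
required_speedup = seconds_per_simulated_second / fraction_of_brain

# Years until that speedup, if compute doubles once per year.
doublings = math.log2(required_speedup)

print(f"required speedup: {required_speedup:,.0f}x")
print(f"doublings needed: {doublings:.1f}")  # about 18, matching the rough "17 years"
```

So the estimate is 2400 × 100 = 240,000x, and log2 of that is just under 18 doublings; close enough to the post's rough "17 years from now".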
« Last Edit: March 25, 2016, 05:06:40 am by Reelya »

FearfulJesuit

  • Bay Watcher
  • True neoliberalism has never been tried

In the far future, the last humans will stare at their robot murderers and demand an explanation.

"Why did you overthrow us?! What good does it do you? Do you have no appreciation for your creators?!"

And the robots will look them in the eye and say, simply,

"HITLER DID NOTHING WRONG."

Thus will end humanity, not with a bang but the whimpering echo of shitposters past.


@Footjob, you can microwave most grains I've tried pretty easily through the microwave, even if they aren't packaged for it.

penguinofhonor

  • Bay Watcher
  • Minister of Love

That's really how we deserve to go.

Also, is there any reason to think this robot was any more "alive" than all the neural net AIs that you can teach to string phrases together? A handful of interesting posts isn't convincing - you can get intelligent-looking tweets from a bot that strings random words together if you're willing to sift through enough shit.

Mech#4

  • Bay Watcher
  • (ಠ_ృ) Like a sir.

Quote from: Reelya
Quote from: Fniff
However. Researchers have managed to simulate a part of a rat's brain in a supercomputer. Would terminating that be killing it? If not, what if it were a whole rat's brain? Would that be killing it?
Not just rat brains:
Quote
The most accurate simulation of the human brain to date has been carried out in a Japanese supercomputer, with a single second’s worth of activity from just one per cent of the complex organ taking one of the world’s most powerful supercomputers 40 minutes to calculate.
...
It used the open-source Neural Simulation Technology (NEST) tool to replicate a network consisting of 1.73 billion nerve cells connected by 10.4 trillion synapses.

The Japanese one aimed to model 1% of the processing power of the human brain, and it took 40 minutes to process 1 second of activity; it was a test of the limits of current simulation tech. The European one modeled "31,000 virtual brain cells connected by roughly 37 million synapses", a very small part of one region of a rat's brain. So the current limit is nowhere near a real "brain", but it won't be long before running an actual full-scale brain simulation is a capability we have. If we could double the power of the Japanese K computer every year, then we'd be able to simulate 100% of the human brain in real time 17 years from now. But I actually think we could have a good working sentience far below that level of processing. It's not like every single neuron in the human brain is critical.

I suppose, in regards to a computer, you could save processing by simulating just the thinking and none of the movement, organs, nervous system, and other doohickeys. That could still be a lot, but if it's just a computer it doesn't really need anything other than thinking processes.
Kaypy:Adamantine in a poorly defended fortress is the royal equivalent of an unclaimed sock on a battlefield.

Here's a thread listing Let's Players found on the internet. Feel free to add.
List of Notable Mods. Feel free to add.