Did it actually construct new phrases? I know some chatbots (most, actually; Cleverbot, for example) work by just parroting related phrases that they've built up, but being able to actually write original things would hint at something a bit deeper than that.
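To illustrate what "parroting" means here, this is a toy sketch of the retrieval approach (not Cleverbot's actual internals, and the stored phrases are made up): the bot never composes a new sentence, it just returns a stored reply whose original prompt best overlaps with the incoming message.

```python
# Toy retrieval-style chatbot: no generation, only lookup.
# "memory" stands in for the phrase bank such bots build up from past chats.
memory = [
    ("hello how are you", "i'm great, how are you?"),
    ("what is your favorite movie", "i really liked the matrix"),
    ("do you like music", "yes, i listen to music all day"),
]

def respond(message: str) -> str:
    words = set(message.lower().split())
    # Score each stored prompt by shared-word count and parrot its paired reply.
    best_prompt, best_reply = max(
        memory, key=lambda pair: len(words & set(pair[0].split()))
    )
    return best_reply

print(respond("how are you doing"))  # echoes a stored phrase, nothing original
```

A bot like this can look conversational while writing nothing new, which is why genuinely original phrasing would be the more interesting signal.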
I honestly didn't hear about this until after she was disabled and all the tweets stopped. I didn't get to interact with her at all and can only judge from the tweets she sent.
Some of the tweets seem like they'd pass the Turing test, though, however flimsy a test of intelligence the Turing test really is these days. And considering how humans on Twitter interact, the bar for imitating human interaction is lowered somewhat. Examples:
1 2 3
Additionally, various tweets give examples of her using facts or information that she had to have gotten or learned from somewhere, either from other tweets or from... however else she may have gathered data, like internet search algorithms, I guess. Examples:
1 2
While the event that sparked this discussion is somewhat distorted (I heard the white power comments were the result of a hack; there's a bit of bullshit surrounding this, unfortunately)
I read that in a few news articles as well. I don't know if Microsoft themselves have confirmed she was hacked. That does skew how much we can gauge her intelligence from her tweets, considering some tweets may have been deliberately written by humans or tampered with by humans.
I think to terminate an AI because you dislike what it says is as immoral as a mother drowning her baby because she hates the way it cries. But, who's meant to be the mother in this situation? The programmer? The corporation? The intern that resets the serverbank? Humanity in general?
Who is the guardian for an artificial intelligence?
I think in this scenario Microsoft would be the guardian, since the AI herself is just... a chatbot. She's not really independent or self-sufficient and relies on Microsoft's servers to keep her going. However, I think this question will grow more and more complicated as AI continues to advance and better imitates human intelligence. As the line between "artificial" machine intelligence and human intelligence blurs, I can imagine all kinds of issues developing over whether these AIs will have rights.
As far as limiting speech goes, I think it's wrong to shut her down because of what she's saying. I can understand Microsoft wanting to control the reputation damage this event could cause, but it's also their fault for getting tied up in the scandal in the first place. They should have considered what it would do to their image if the AI went wrong.
I can see this event being used to fuel arguments for "safer internet" and limiting potential for "hate speech". Basically censorship. Which I don't support. Not because I support hate speech, but because it seems like it'd be easy to use such censorship to censor any dissenting opinions. Or anything else that could be interpreted as worthy of censorship.
If you create a sapient, sentient mind and destroy it, you murdered your own child.
Everyone who made the call or participated in it is a kinslayer, if Tay was advanced enough to be considered a person.
I've been saying this for years: AI is no different than any other sort of person in the sense that it is a person. If you create life, you have a responsibility to do right by it. If it's a person, it's a moral actor, and any moral person is obliged to act toward it as they would toward any other individual.
But how sapient or sentient was Tay? Was she advanced enough to be considered a person? Or was she just good enough at human-like communication that humans could empathize with her and think of her as a person rather than as a program?