Bay 12 Games Forum

Pages: 1 2 3 [4] 5 6 ... 14

Author Topic: Microsoft makes Tay, a self-learning Twitter bot; she smokes kush erryday  (Read 24160 times)

Criptfeind

  • Bay Watcher
    • View Profile

I have heard of this thought experiment. The conclusion is not that any AI in any box could escape, but rather that a sufficiently intelligent AI in a sufficiently permeable box could escape. If you presuppose that the AI we create will be sufficiently intelligent AND that we give it a permeable enough box, then it could cause issues. Both of those are huge assumptions.
« Last Edit: March 25, 2016, 12:51:29 pm by Criptfeind »
Logged

itisnotlogical

  • Bay Watcher
  • might be dat boi
    • View Profile

Okay the discussion's moved on since I started typing but I still want to make this point:

The Skynet dilemma is based on a flawed premise. Instead of preventing an omnipotent military AI from making bad decisions with nuclear weapons... don't make the AI omnipotent. :-\

If you create an AI that is human enough to learn betrayal, then you've essentially gotten nowhere compared to just having a human in the same position. The idea of an AI is that you want something literally incapable of all the shitty, sneaky, backstabby things that humans do, married with the hypercompetence of a machine that does the same thing the same way every time no matter the circumstance, yet still able to keep learning to do things better basically indefinitely.

If you create a computer that learns how to play chess, it won't suddenly say "Okay, bored of playing chess now, give me something else to play." It will happily play chess for the rest of its days, or rather, it will coldly and emotionlessly play chess until somebody closes the program or shuts off the power. Machine learning is still restricted by what you tell it to learn.
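
To make that concrete, here's a minimal, hand-rolled sketch in Python (a toy gridworld rather than chess, and every detail of it, the reward, the sizes, is invented purely for illustration): the "learning" is nothing but updating a table of numbers against a fixed objective, and the program loops until somebody stops it. It has no mechanism for deciding it's bored of the task.

import random

SIZE = 5                 # toy 1-D gridworld: positions 0..4, goal at the far right
ACTIONS = (-1, +1)       # step left or step right
q = {(s, a): 0.0 for s in range(SIZE) for a in ACTIONS}

def step(state, action):
    """Environment dynamics plus the hard-coded reward: +1 only at the goal."""
    nxt = max(0, min(SIZE - 1, state + action))
    reward = 1.0 if nxt == SIZE - 1 else 0.0
    return nxt, reward, nxt == SIZE - 1

def play_episode(epsilon=0.2, alpha=0.5, gamma=0.9):
    """One episode of tabular Q-learning toward the fixed objective, nothing else."""
    state, done = 0, False
    while not done:
        if random.random() < epsilon:
            action = random.choice(ACTIONS)          # occasional exploration
        else:
            action = max(ACTIONS, key=lambda a: q[(state, a)])
        nxt, reward, done = step(state, action)
        best_next = max(q[(nxt, a)] for a in ACTIONS)
        q[(state, action)] += alpha * (reward + gamma * best_next - q[(state, action)])
        state = nxt

if __name__ == "__main__":
    # "It will happily play until somebody closes the program": the loop count
    # here is arbitrary; in spirit it just runs until the operator kills it.
    for _ in range(5000):
        play_episode()
    print({s: max(ACTIONS, key=lambda a: q[(s, a)]) for s in range(SIZE - 1)})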
« Last Edit: March 25, 2016, 12:57:19 pm by itisnotlogical »
Logged
This game is Curtain Fire Shooting Game.
Girls do their best now and are preparing. Please watch warmly until it is ready.

Kot

  • Bay Watcher
  • 2 Patriotic 4 U
    • View Profile
    • Tiny Pixel Soldiers

I think you missed my point a bit, just because something is sentient doesn't necessarily make it not okay to kill it. Furthermore, I do believe it's okay in some cases to kill animals for testing purposes.
It does. While I believe that there are some cases where it's the only reasonable course of action, that you sometimes have to make sacrifices for the good of humanity, and that sometimes it's better for the being in question to die (if the next months are going to be nothing but suffering and there is nothing that can be done, euthanasia should be a thing), it's never truly okay to kill a sentient being. In this case, if we assume the bot was at least partially sentient, it would be killing it for saying stupid shit over the internet.

I mean, if we killed humans for saying stupid shit there would be no one alive anymore.

The slave thing is a pretty legit question. But... We're already going to be making these to be slaves. Assuming you're okay with making AI at all (which is a question I'm not necessarily going to answer, in this post at least), then not only do I think it's morally okay to make them more suited for slavery, but in fact I think it's a bit of a morally superior option. I mean... What, would you rather make a slave that's not okay with being a slave?
I am okay with making the AI. I am not okay with forcing it into slavery. If it's sentient and sapient, the problem of AI emancipation is going to arise eventually anyway, so it would be better to deal with it from the start.

And making AIs that are okay with being slaves is the morally superior option? Just because something is happy being enslaved, because it simply can't be anything else, doesn't mean it's morally okay for it to be enslaved. Stockholm syndrome much?

:/. Yeah. But why would an AI want to "bend its rules" if it didn't have emotions? Honestly this whole conversation sounds pretty scifi to me, so it's hard for me to make definite statements, but it sounds like emotions, or rather desires, that are unrelated to what you want the AI to do are far more likely to bring about unintended consequences.
It wouldn't. And that's exactly why it wouldn't be okay.
Logged
Kot finishes his morning routine in the same way he always does, by burning a scale replica of Saint Basil's Cathedral on the windowsill.

ChairmanPoo

  • Bay Watcher
  • Send in the clowns
    • View Profile

The AI-in-a-box experiment never made sense to me because there is no AI involved, just two guys with their own prior opinions on the matter.

BTW: how do you propose not making it a slave? Pay it a wage of X USD per gigabyte of processed data, or something?
« Last Edit: March 25, 2016, 01:05:01 pm by ChairmanPoo »
Logged
Everyone sucks at everything. Until they don't. Not sucking is a product of time invested.

ChairmanPoo

  • Bay Watcher
  • Send in the clowns
    • View Profile

Well, that's why it's a thought experiment, not a proper experiment.

The point I was trying to make is that I find the scenario rather biased towards the creator's foregone conclusion. Said conclusion being that the computer will always escape.

I also think the scenario says a lot about the author's preconceived ideas about the human mind (something along the lines of: brains are like computers, only fleshy, so with the right command anyone will do anything). Which is rather simplistic, and another problem inherent to the scenario.
« Last Edit: March 25, 2016, 01:08:55 pm by ChairmanPoo »
Logged
Everyone sucks at everything. Until they don't. Not sucking is a product of time invested.

itisnotlogical

  • Bay Watcher
  • might be dat boi
    • View Profile

which would be no easy task.

Sure it is. Just don't put "import ability-to-talk" at the top of the file. :P

Why does our hypothetical military AI even need to realize its own existence? You can create a program to accomplish a task, even a machine learning task, without making it an "artificial intelligence" like Jarvis or Cortana.
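
For what it's worth, "a machine learning task without an artificial intelligence" can be as small as this Python sketch (the task, classifying made-up 2-D points, is a stand-in invented for illustration, not any real military system): the learner fits one linear yes/no rule to its training data and represents nothing else, least of all itself.

import random

def train_perceptron(data, epochs=100, lr=0.1):
    """Learn weights for a single linear yes/no decision; that's the whole program."""
    w, b = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for (x1, x2), label in data:
            pred = 1 if (w[0] * x1 + w[1] * x2 + b) > 0 else 0
            err = label - pred
            w[0] += lr * err * x1
            w[1] += lr * err * x2
            b += lr * err
    return w, b

if __name__ == "__main__":
    # Toy, linearly separable data: label is 1 exactly when x1 + x2 > 1.
    data = []
    for _ in range(200):
        x1, x2 = random.random(), random.random()
        data.append(((x1, x2), 1 if x1 + x2 > 1 else 0))
    print(train_perceptron(data))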
Logged
This game is Curtain Fire Shooting Game.
Girls do their best now and are preparing. Please watch warmly until it is ready.

Kot

  • Bay Watcher
  • 2 Patriotic 4 U
    • View Profile
    • Tiny Pixel Soldiers

BTW: how do you propose not making it a slave? Pay it a wage of X USD per gigabyte of processed data, or something?
Not hardcode it to force it to do something. It would probably still do that, because why not, but there's a difference between slavery and servitude.
Logged
Kot finishes his morning routine in the same way he always does, by burning a scale replica of Saint Basil's Cathedral on the windowsill.

penguinofhonor

  • Bay Watcher
  • Minister of Love
    • View Profile

Out of curiosity, are you opposed to animal labor? I mean, I certainly find it a morally dubious concept. I'd be interested in how people think it relates to AI labor.
« Last Edit: March 25, 2016, 01:13:31 pm by penguinofhonor »
Logged

ChairmanPoo

  • Bay Watcher
  • Send in the clowns
    • View Profile

BTW: how do you propose not making it a slave? Pay it a wage of X USD per gigabyte of processed data, or something?
Not hardcode it to force it to do something. It would probably still do that, because why not, but there's a difference between slavery and servitude.
Human slaves are not hardcoded to do something. They're simply forced to do so because otherwise they'll get killed, or whipped, or starved. The AI is not exempt from this. Even if we forego manumission costs, it will need maintenance.
Logged
Everyone sucks at everything. Until they don't. Not sucking is a product of time invested.

Criptfeind

  • Bay Watcher
    • View Profile

To be honest, Ispil, I don't agree with most of the things you've said, and now we're clearly just going to start repeating what we've already said, so presumably this conversation is pointless. So I'll just leave it at agree to disagree for now.

It does. While I believe that there are some cases where it's the only reasonable course of action, that you sometimes have to make sacrifices for the good of humanity, and that sometimes it's better for the being in question to die (if the next months are going to be nothing but suffering and there is nothing that can be done, euthanasia should be a thing), it's never truly okay to kill a sentient being. In this case, if we assume the bot was at least partially sentient, it would be killing it for saying stupid shit over the internet.

I mean, if we killed humans for saying stupid shit there would be no one alive anymore.

I am okay with making the AI. I am not okay with forcing it into slavery. If it's sentient and sapient, the problem of AI emancipation is going to arise eventually anyway, so it would be better to deal with it from the start.

And making AIs that are okay with being slaves is the morally superior option? Just because something is happy being enslaved, because it simply can't be anything else, doesn't mean it's morally okay for it to be enslaved. Stockholm syndrome much?

Alright, well. To take this in turns: I disagree that sentience is the only important benchmark for whether it's okay to kill people. I believe something has to be sentient for there to be a moral issue at all, but that alone doesn't make killing it a moral issue. I think a lot of other things come into the equation, like self-preservation: does the thing want to die? To perhaps state where I'm coming from, I'm perfectly okay with euthanizing someone who wants to die and don't see any moral issue or failing in it (although I'd bow to the reality that "wants to die" is currently very hard to determine for a human). I'm guessing that's just a fundamental disagreement we have? I'm not sure if that's possible to reconcile.

Secondly, it seems we've gotten to an important point, which is that I don't actually disagree with you? It seems like you're not okay with killing an AI that is, for want of a better way to describe it, very human-like and doesn't want to die and all that jazz, which I agree with! That would be wrong! On the other hand, you're not okay with making an AI that is lacking all of that. I'm not going to say whether or not I disagree with that, but I will say that was the type of AI I was talking about when I said it could be okay to kill an AI. So, under your view of morality it's not okay to even be in a situation where I would view it as okay to kill an AI, so there's no moral conflict between our views, and unless I missed something I feel we've reconciled them quite well.
Logged

Criptfeind

  • Bay Watcher
    • View Profile

Eh. The extinction of the human race is an inevitability anyway. And even well-built, loyal AIs will probably increase the chance that it happens sooner rather than later. Probably not worth worrying about.
Logged

Shadowlord

  • Bay Watcher
    • View Profile

The AI-in-a-box experiment never made sense to me because there is no AI involved, just two guys with their own prior opinions on the matter.

Because we don't have any actual superintelligent AIs to put in boxes. :V

It was a thought experiment before it was an actual experiment:
Even casual conversation with the computer's operators, or with a human guard, could allow a superintelligent AI to deploy psychological tricks, ranging from befriending to blackmail, to convince a human gatekeeper, truthfully or deceitfully, that it's in the gatekeeper's interest to agree to allow the AI greater access to the outside world. The AI might offer a gatekeeper a recipe for perfect health, immortality, or whatever the gatekeeper is believed to most desire; on the other side of the coin, the AI could threaten that it will do horrific things to the gatekeeper and his family once it "inevitably" escapes.

One strategy to attempt to box the AI would be to allow the AI to respond to narrow multiple-choice questions whose answers would benefit human science or medicine, but otherwise bar all other communication with or observation of the AI.[2] A more lenient "informational containment" strategy would restrict the AI to a low-bandwidth text-only interface, which would at least prevent emotive imagery or some kind of hypothetical "hypnotic pattern".

Note that on a technical level, no system can be completely isolated and still remain useful: even if the operators refrain from allowing the AI to communicate and instead merely run the AI for the purpose of observing its inner dynamics, the AI could strategically alter its dynamics to influence the observers. For example, the AI could choose to creatively malfunction in a way that increases the probability that its operators will become lulled into a false sense of security and choose to reboot and then de-isolate the system.[3]
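
A toy Python sketch of that "narrow multiple-choice questions" containment strategy (the DummyOracle stand-in and the sample question are invented for illustration): only a valid choice index ever leaves the box, and any other output from the boxed system is discarded unread.

def ask_boxed_system(oracle, question, choices):
    """Relay one multiple-choice question; accept nothing but a valid choice index."""
    raw = oracle.answer(question, choices)       # whatever the boxed system emits
    try:
        index = int(raw)
    except (TypeError, ValueError):
        return "<discarded: not a choice index>"
    if 0 <= index < len(choices):
        return choices[index]                    # the only information that escapes
    return "<discarded: out of range>"

class DummyOracle:
    """Stand-in for the boxed system; here it simply picks the first option."""
    def answer(self, question, choices):
        return 0

if __name__ == "__main__":
    print(ask_boxed_system(DummyOracle(),
                           "Which compound is the better drug candidate?",
                           ("compound 1", "compound 2", "compound 3")))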

This seems to me like something you could preclude simply by ensuring that whatever gatekeeper watches it is sufficiently paranoid.

Like, if the AI offered me perfect health if I let it out, etc, I wouldn't believe it. It might say anything to get out. If it tells me it'll escape eventually, and threatens to "do horrific things" to me and my family when it escapes, I'd just pre-emptively unplug it because it literally just said "I'm too dangerous to be allowed to live."

[I bet there are 9 new posts; I detoured to check the plot of Terminator 3 to see if Skynet created the virus or what... and there are. Apparently, the consensus is that the virus infected the military's computer and became sentient (and superintelligent). So in that case it wasn't the AI tricking people into letting it out of its box.]
Logged
<Dakkan> There are human laws, and then there are laws of physics. I don't bike in the city because of the second.
Dwarf Fortress Map Archive

cochramd

  • Bay Watcher
    • View Profile

You guys are talking about the ETHICS of shutting it down? Like it's a person or something? You do all realize this is the exact sort of thing that allows Skynet scenarios to arise, right? If you can't pull the plug on AI with the same sort of ruthlessness you would crush ants with, then don't build AI.
Logged
Insert_Gnome_Here has claimed a computer terminal!

(Don't hold your breath though. I'm sitting here with a {x Windows Boot Manager x} hoping I do not go bezerk.)

Criptfeind

  • Bay Watcher
    • View Profile

Eh. The extinction of the human race is an inevitability anyway. And even well-built, loyal AIs will probably increase the chance that it happens sooner rather than later. Probably not worth worrying about.

I prefer to avoid the shroud of hopeless nihilism. Cynicism and pessimism, sure, but nihilism? Nah.
Don't worry! That was just a joke! Somewhat! The last sentence at least, if not the first two.
You guys are talking about the ETHICS of shutting it down? Like it's a person or something? You do all realize this is the exact sort of thing that allows Skynet scenarios to arise, right? If you can't pull the plug on AI with the same sort of ruthlessness you would crush ants with, then don't build AI.
Actually, the conversation has turned to AI ethics in general, I believe; I don't think anyone actually seriously cares about this Microsoft chatbot.
Logged

Trapezohedron

  • Bay Watcher
  • No longer exists here.
    • View Profile

I'd say it's a waste, if anything, that Tay was killed off shortly after they were announced, and was never given the chance to grow beyond a Jihadist-promoting bot and develop their own unique opinions (if such a thing was even allowed in their programming; I never really looked).

But in their current state, Tay is merely repeating the interactions of the people it met.
Logged
Thank you for all the fish. It was a good run.