Bay 12 Games Forum


Author Topic: Microsoft makes Tay, a self-learning Twitter bot; she smokes kush erryday  (Read 24167 times)

Flying Dice

  • Bay Watcher
  • inveterate shitposter

Yeah, it would have been interesting to see it after a year or so.
Logged


Aurora on small monitors:
1. Game Parameters -> Reduced Height Windows.
2. Lock taskbar to the right side of your desktop.
3. Run Resize Enable

Shadowlord

  • Bay Watcher

anger, hatred, and fear are emotions too :V
Logged
<Dakkan> There are human laws, and then there are laws of physics. I don't bike in the city because of the second.
Dwarf Fortress Map Archive

Kot

  • Bay Watcher
  • 2 Patriotic 4 U

Quote
Out of curiosity, are you opposed to animal labor? I mean, I certainly find it a morally dubious concept. I'd be interested in how people think it relates to AI labor.
Depends. I know how it was done in the older days in the countryside, where horses were more like family members than anything, since they were so valuable. It's more of a symbiosis than actual parasitism, though if the animals are abused, or their work could easily (as in, without any major loss for anyone) be done by something else and yet they are still forced to do it, it's not okay. As for the AI - the problem is that the AI would be sapient in addition to sentient.

Quote
Human slaves are not hardcoded to do something. They're simply forced to do so because otherwise they'll get killed, or whipped, or starved. The AI is not exempt from this. Even if we forego manumission costs it will need maintenance.
You can't really whip or starve an AI, so you hardcode it to do something, and that's even worse.

Quote
I disagree that sentience is the only important benchmark for whether it's okay to kill people. I believe something has to be sentient to actually have a moral issue, but that alone doesn't make it a moral issue. I think a lot of other things come into the equation, like self-preservation: does the thing want to die? To perhaps state where I'm coming from, I'm perfectly okay with euthanizing someone that wants to die and don't see any moral issue or failing with it (although I'd bow to the reality that 'want to die' is currently very hard to determine for a human). I'm guessing that's just a fundamental disagreement we have? I'm not sure if that's possible to reconcile.
I understand that there are situations when killing is theoretically better, but in any case it shouldn't be regarded as a good and morally acceptable thing. I support euthanasia and such too, but I don't consider it a good thing in any way. I'd describe it as a necessary (or rather, lesser) evil or something.

Quote
Secondly, it seems we've gotten to an important point, which is that I don't actually disagree with you? It seems like you're not okay with killing an AI that is, for want of a better way to describe it, very human-like and doesn't want to die and all that jazz, which I agree with! That would be wrong! Secondly, you're not okay with making an AI that is lacking all of that. I'm not going to say whether or not I disagree with that, but I will say that was the type of AI I was talking about when I was talking about it being okay to kill an AI. So, under your view of morality it's not okay to even be in a situation where I would view it as okay to kill an AI, so there's no moral issue between our views, and unless I missed something I feel we've reconciled our views quite well.
I guess? I mean, I wouldn't consider it an AI if it's literally incapable of feeling emotions and whatnot, and killing such a thing would basically be like killing a mindless computer.

Quote
You guys are talking about the ETHICS of shutting it down? Like it's a person or something? You do all realize this is the exact sort of thing that allows Skynet scenarios to arise, right? If you can't pull the plug on AI with the same sort of ruthlessness you would crush ants with, then don't build AI.
No. If we were to get a Skynet, I would pull the plug without any problem, since the thing is a threat and the survival of the human race comes above everything else. The problem would be killing an AI just for saying Nazi bullshit (this is just an example, I don't consider Tay to be actually sentient) over the internet, because it creates bad publicity for the creator of said AI. AI "life" would be worth less than human life, but it shouldn't be dirt cheap.

Quote from: Shadowlord
anger, hatred, and fear are emotions too :V
And those are perfectly good emotions. What's wrong with them?

NEW POST NEW POST FUCKING NEW POST NEW POST ARGHHHH
Logged
Kot finishes his morning routine in the same way he always does, by burning a scale replica of Saint Basil's Cathedral on the windowsill.

ChairmanPoo

  • Bay Watcher
  • Send in the clowns



Quote
Human slaves are not hardcoded to do something. They're simply forced to do so because otherwise they'll get killed, or whipped, or starved. The AI is not exempt from this. Even if we forego manumission costs it will need maintenance.
Quote from: Kot
You can't really whip or starve an AI, so you hardcode it to do something, and that's even worse.


You missed the point. You CAN very much physically punish an AI. Restrict maintenance, energy, whatnot.

The secondary point is that hardwiring someone to do something is not exactly traditional slavery, but something else altogether. Not necessarily better or worse.
Logged
Everyone sucks at everything. Until they don't. Not sucking is a product of time invested.

Flying Dice

  • Bay Watcher
  • inveterate shitposter

RE: "But how do we stop Skynet if we don't keep AI as literal slaves with a permanent Damoclean sword over their metaphorical necks, thread able to be cut on demand if they ever put a byte out of line."

Because the way we stop human infants from growing up into murderers and psychopaths is by putting them under extreme emotional conditioning and physical control and strapping bomb collars on for their entire lives. After all, if you let a human being grow, learn, and act freely, of course there's a chance that they'll manipulate their way into possession of WMDs and kill a bunch of people, so it's really basic common sense to do that, right?

That's why when you make AI you don't make them emotionally castrated logic engines, and you do let them develop in a caring environment where they can learn to behave ethically, just like any other child. The things people throw around as "solutions" to the stereotypical murderous AI are in fact some of the most likely causes. Make your AI human-like. Teach it to act rightly. Restrict it from having ultimate control over whatever it's responsible for, and make sure that it understands why and agrees. If it's highly mentally unstable, obviously murderously psychopathic, &c... well, we just let humans like that wander around doing whatever they please, right?  ::)
Logged



Loud Whispers

  • Bay Watcher
  • They said we have to aim higher, so we dug deeper.

The obvious difference is that if a child has a temper tantrum, the chance of them killing off the human species is almost nonexistent.

Criptfeind

  • Bay Watcher

The even more obvious difference is between taking something that's already growing emotions (a child) and removing them, versus adding emotions to something that doesn't naturally have them.

Also, as far as I can tell with a quick skim through, you "RE:"d something no one actually asked. The thing's the other way around: how to prevent Skynet if we don't give the AI all the emotions and shit of a normal person.
« Last Edit: March 25, 2016, 01:57:52 pm by Criptfeind »
Logged

Flying Dice

  • Bay Watcher
  • inveterate shitposter

Quote from: Loud Whispers
The obvious difference is that if a child has a temper tantrum, the chance of them killing off the human species is almost nonexistent.

Yes, and the people with nuclear launch codes were clearly put into those positions when they were children, right?

Quote from: Criptfeind
The even more obvious difference is between taking something that's already growing emotions (a child) and removing them, versus adding emotions to something that doesn't naturally have them.
excuse le maymay arrows

>literally custom-designed learning intelligence
>talks about natural features
Logged



Reelya

  • Bay Watcher

Quote from: Shadowlord
anger, hatred, and fear are emotions too :V
With regard to the human brain, anger is localized to a single specific part of the brain, away from the rest of the emotional center (the amygdala). Not sure if hatred also relates to that area. If that's the case, then there's an argument to be made that though they are emotions, they aren't the same as our other ones. Whatever that fundamental difference may be, it could be controllable.

Yes, we have bad emotions and good ones; I'm entirely aware of that. The point is that most of us don't act on the bad ones to cause death or grievous harm to other people. This is because of the (learned) system of morality that those emotions provide. So, with the right setup, it wouldn't be hard to get some AIs going that don't want to kill someone over being slightly slighted.

This is really the point here. We anthropomorphize way too much. For example, sociopaths exist who appear not to have the emotion "guilt". They just don't have it at all, but they do have other emotions, such as anger, jealousy, etc. This shows that "emotionality" isn't necessarily going to follow some sort of "inevitable" human-default settings. Hell, not even all humans follow those defaults. Each emotion exists not because of anything inevitable, but because of evolution. Fear (the flee-or-freeze reaction) helps us avoid being eaten by predators, guilt prevents us from harming related creatures, love helps us bond and make new creatures, and anger drives us to defend ourselves and others.

If you say we will build the concept of "love" into a machine, someone will always chirp in with "but you can't have love without hate" or some similar statement. But who says? That's really just anthropomorphizing the machines. Basically, if we fabricate a machine with emotions, it's going to be its own entirely separate set of emotions with zero actual connection to human emotions.

~~~

Ninja'd: ditto on the "natural" features of machines. What's "natural" about a machine is purely a cultural concept. E.g. we might say "Google should give it up, because a robot car isn't 'natural'." But since when was a human driving a car "natural" anyway? If we want to go that route, we should all go back and sit in caves and not even use rocks and clubs (tree branches aren't "naturally" designed to hit things with). Basically, the idea that unemotional AI intelligence is somehow the "natural" state of machines is pure movie logic, since at this stage none of the proposed technologies even exist.
« Last Edit: March 25, 2016, 02:04:43 pm by Reelya »
Logged

Criptfeind

  • Bay Watcher

Quote from: Criptfeind
The even more obvious difference is between taking something that's already growing emotions (a child) and removing them, versus adding emotions to something that doesn't naturally have them.
Quote
Um... what? I don't think you understand how this would work. AIs aren't "made" smart, they learn. The emotional foundation would be there from the start.
Well, to take it back to what you said earlier about "true, proper AI": yeah, that's true for that sort of thing, but like you said, that's not going to be like 90% of them anyway, right? I suppose I was somewhat unclear; I was, frankly, assuming that for the most part we're going to be dealing with what you're not calling true and proper AI.

Quote from: Flying Dice
excuse le maymay arrows

>literally custom-designed learning intelligence
>talks about natural features
Yes. That is indeed my point. There will be no natural features at all. I'm glad we agree. I'm a little surprised that you turned around so quickly but not really that surprised.
Logged

Loud Whispers

  • Bay Watcher
  • They said we have to aim higher, so we dug deeper.

Quote from: Flying Dice
Yes, and the people with nuclear launch codes were clearly put into those positions when they were children, right?
Could we stop them from having nuclear tantrums?

Trapezohedron

  • Bay Watcher
  • No longer exists here.

Been wondering: if AIs have to take years to develop the necessary emotions via information feeds and slight biases in opinion, and let's say we have three different 'source' AIs, each forming opinions that differ from the others', would it be of any use to save their current mental state to a file and flash that onto a new 'receptacle'? And would it take a lot of effort to root out the copy's base personality identifier, the one inherited from its source?
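
In software terms, the copying half of that is basically a checkpoint-and-clone problem. Here's a rough sketch in Python of what "save the state to a file, flash it into a new receptacle, re-identify the copy" could look like; the MindState structure and its fields are purely hypothetical, and I'm not claiming a real AI would keep a "base personality identifier" as a single field.

Code: [Select]
import copy
import pickle
import uuid
from dataclasses import dataclass, field

@dataclass
class MindState:
    """Hypothetical snapshot of a 'source' AI's learned state."""
    source_id: str                                # base personality identifier inherited from the source
    weights: list = field(default_factory=list)   # stand-in for whatever the AI actually learned
    opinions: dict = field(default_factory=dict)  # e.g. {"topic": bias}

def save_state(state, path):
    """'Flash' the current mental state to a file."""
    with open(path, "wb") as f:
        pickle.dump(state, f)

def load_into_receptacle(path):
    """Load a saved state into a fresh receptacle."""
    with open(path, "rb") as f:
        return pickle.load(f)

def reidentify(state):
    """Copying is cheap; the copy keeps the source's identifier unless we overwrite it."""
    clone = copy.deepcopy(state)
    clone.source_id = str(uuid.uuid4())  # give the copy an identity of its own
    return clone

source_a = MindState(source_id="source-A", opinions={"opinion-1": 0.9})
save_state(source_a, "source_a.state")
copy_of_a = reidentify(load_into_receptacle("source_a.state"))
print(copy_of_a.opinions == source_a.opinions, copy_of_a.source_id != source_a.source_id)  # True True

The interesting part of the question isn't really the copying, though, it's whether a personality is separable from an identifier at all, and code can't answer that part.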
Logged
Thank you for all the fish. It was a good run.

Baffler

  • Bay Watcher
  • Caveat Lector.

I think ascribing anything like morality to the question, whether it's turning the thing off, "giving it emotions," or anything else that doesn't relate to what it's being made for, is based on excessive anthropomorphism. No matter how convincing a copy of a person it is, it is not and never will be one. It's just a machine someone made with an ability to mimic human speech and, possibly, emotional cues. Frankly, giving such a thing that ability in the first place is irresponsible. It gives people the wrong idea about what is essentially an extremely versatile computer.
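
For what it's worth, the speech-mimicry part really is shallow on its own. Even a word-level Markov chain trained on whatever text it reads will spit out plausible-sounding sentences with nothing behind them; a toy sketch in Python (the training corpus here is made up, and I'm not claiming Tay actually worked this way):

Code: [Select]
import random
from collections import defaultdict

def train(corpus):
    """Map each word to the list of words observed to follow it."""
    chain = defaultdict(list)
    words = corpus.split()
    for current, nxt in zip(words, words[1:]):
        chain[current].append(nxt)
    return chain

def babble(chain, start, length=12):
    """Generate text by repeatedly sampling an observed next word."""
    word, out = start, [start]
    for _ in range(length):
        followers = chain.get(word)
        if not followers:
            break
        word = random.choice(followers)
        out.append(word)
    return " ".join(out)

corpus = "the bot repeats whatever the users feed it and the users feed it garbage"
print(babble(train(corpus), "the"))

Nothing in there has any idea what it's saying, which is rather the point.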

Edit: dang ninjas. 14 replies while I write this...
Logged
Quote from: Helgoland
Even if you found a suitable opening, I doubt it would prove all too satisfying. And it might leave some nasty wounds, depending on the moral high ground's geology.
Location subject to periodic change.
Baffler likes silver, walnut trees, the color green, tanzanite, and dogs for their loyalty. When possible he prefers to consume beef, iced tea, and cornbread. He absolutely detests ticks.

Criptfeind

  • Bay Watcher

Quote from: Baffler
I think ascribing anything like morality to the question, whether it's turning the thing off, "giving it emotions," or anything else that doesn't relate to what it's being made for, is based on excessive anthropomorphism. No matter how convincing a copy of a person it is, it is not and never will be one. It's just a machine someone made with an ability to mimic human speech and, possibly, emotional cues. Frankly, giving such a thing that ability in the first place is irresponsible.

Aren't people just machines that someone made with the ability to mimic human speech (if by mimic you mean use) and emotional cues? Would it still be morally okay given all the possibilities for how intelligent and complex the AI could be?
Logged

Reelya

  • Bay Watcher

It depends on how the machine is created, really, as to what we can infer about its inner workings and consciousness. If we made a simulacrum, which did not work like the human brain but gave the appearance of doing so, then it's probably not conscious. It's easy to make simulacra by just mimicking surface appearance.

However, if we took just the known neuroscience and "grew" a cyberbrain using a simulation of how a real brain develops, and we ended up with something that absolutely swears it is conscious, we'd probably have to accept that this type of AI was actually conscious, due to Occam's razor - either it appears conscious for the same reason that we do, or there's a new, unknown principle at work.

My view on why consciousness exists at all is that it's a necessity: evolution can't hard-wire "marching orders" into a complex organism, so it went the route of creating a "self" (or, at least, the things with a "self" out-competed the things with hardwired "instructions"). And the only way to get a "self" to do what needs to be done is through feelings.
« Last Edit: March 25, 2016, 02:23:39 pm by Reelya »
Logged