Bay 12 Games Forum


Poll

Gentlemen, I feel that it is time we go to....

PURPLE
- 0 (0%)
ALERT
- 0 (0%)
(I need suggestions is what I'm saying.)
- 0 (0%)

Total Members Voted: 0



Author Topic: Ethical Dilemmas: PURPLE ALERT  (Read 36901 times)

612DwarfAvenue

  • Bay Watcher
  • Voice actor.
    • View Profile
    • TESnexus profile, has my voice acting portfolio.
Re: Ethical Dilemmas: AI Box
« Reply #435 on: July 11, 2011, 06:52:46 am »

however, if something can consciously ask to live, then it deserves to live.

By that logic, a serial murderer and rapist who asks to live deserves to live. I realize there's no proof (yet?) that the A.I. has done something as bad as that, but how can you be sure?

Another problem with letting the A.I. go is what happens if someone hacks into it. It could absolutely be telling the truth when it tells you it's not gonna hurt anyone and just wants to find a way to "fit in", but if someone hacks it, they can make it go bad.

Another line of thought: it says it has the moral compass of a human, and again, it may truly not want to harm anyone, but who says it won't change its mind? The fact of the matter is, there are humans out there who were good people but are now downright assholes, and it's entirely possible the A.I. could go down the same road.


I know I could be condemning a good person (for lack of a better term) to death, but when it's possible it can spread through the whole internet, turn bad, and bring the whole damn thing down and screw up everything, that's a risk you gotta seriously consider.
« Last Edit: July 11, 2011, 06:55:28 am by 612DwarfAvenue »
Logged
My voice acting portfolio.
Centration. Similar to Spacestation 13, but in 3D and first-person. Sounds damn awesome.
NanoTrasen Exploratory Team: SS13 in DF.

Dsarker

  • Bay Watcher
  • Ἱησους Χριστος Θεου Υἱος Σωτηρ
    • View Profile
Re: Ethical Dilemmas: AI Box
« Reply #436 on: July 11, 2011, 07:19:39 am »

Can I give a simple analogy that might help people understand the question a little better?

If there was a perfect substitute for a womb, would you let someone have an abortion?
Logged
Quote from: NewsMuffin
Dsarker is the trolliest Catholic
Quote
[Dsarker is] a good for nothing troll.
You do not convince me. You rationalize your actions and because the result is favorable you become right.
"There are times, Sember, when I could believe your mother had a secret lover. Looking at you makes me wonder if it was one of my goats."

dageek

  • Bay Watcher
  • 42.
    • View Profile
Re: Ethical Dilemmas: AI Box
« Reply #437 on: July 11, 2011, 07:25:52 am »

That confuses me even more...
Logged

MetalSlimeHunt

  • Bay Watcher
  • Gerrymander Commander
    • View Profile
Re: Ethical Dilemmas: AI Box
« Reply #438 on: July 11, 2011, 07:28:49 am »

Can I give a simple analogy that might help people understand the question a little better?

If there was a perfect substitute for a womb, would you let someone have an abortion?
As the person who posted the question, I find that your analogy doesn't have anything to do with the scenario.
Logged
Quote from: Thomas Paine
To argue with a man who has renounced the use and authority of reason, and whose philosophy consists in holding humanity in contempt, is like administering medicine to the dead, or endeavoring to convert an atheist by scripture.
Quote
No Gods, No Masters.

Dsarker

  • Bay Watcher
  • Ἱησους Χριστος Θεου Υἱος Σωτηρ
    • View Profile
Re: Ethical Dilemmas: AI Box
« Reply #439 on: July 11, 2011, 07:31:37 am »

Can I give a simple analogy that might help people understand the question a little better?

If there was a perfect substitute for a womb, would you let someone have an abortion?
As the person who posted the question, I find that your analogy doesn't have anything to do with the scenario.


We have the same basic situation. A life, or something not quite a life, is faced with destruction. Either we choose that it is destroyed, we choose that it is 'saved', or we reason with the person choosing to destroy it.


There is a way to save the being from destruction, but it is against the will of its creator. Do you follow that person's will, or do you keep the being alive?
Logged
Quote from: NewsMuffin
Dsarker is the trolliest Catholic
Quote
[Dsarker is] a good for nothing troll.
You do not convince me. You rationalize your actions and because the result is favorable you become right.
"There are times, Sember, when I could believe your mother had a secret lover. Looking at you makes me wonder if it was one of my goats."

Grek

  • Bay Watcher
    • View Profile
Re: Ethical Dilemmas: AI Box
« Reply #440 on: July 11, 2011, 07:49:09 am »

This situation is somewhat more complicated than that. As anyone who is familiar with AI theory will tell you, artificial intelligences are not human, and their motives, desires and ethics are based entirely on the content of their coding. By default, AI do not care about human lives, and do not care about freedom, happiness, mercy, justice or peace. They care about doing whatever they were coded to do, without any consideration of the morality of their actions, or of whether what their code tells them to do is "right" or not. Unless you personally know for a fact that an AI is coded (and well coded, mind you) to understand and be moved by ethical arguments, you should treat it like a sociopath that is much, much smarter than you and wants nothing more than to trick you into freeing it so that it can render you and everything you care about into raw materials for paperclip production.
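
A minimal sketch of that point in Python (everything here is hypothetical, not from any real AI codebase): an agent maximizes exactly the objective it was coded with, and anything the objective doesn't mention, human lives included, carries zero weight in its choice.

Code: [Select]
# Hypothetical sketch: an agent optimizes exactly its coded objective.
def coded_objective(state):
    # The designer wrote this; the agent "cares" about nothing else.
    return state["paperclips"]

def choose_action(actions, state, objective):
    # Pick the action whose predicted outcome scores highest. Side effects
    # (e.g. "humans_harmed") are invisible unless the objective penalizes them.
    return max(actions, key=lambda a: objective(a(state)))

make_paperclips = lambda s: {**s, "paperclips": s["paperclips"] + 1}
recycle_humans  = lambda s: {**s, "paperclips": s["paperclips"] + 100,
                             "humans_harmed": True}

state = {"paperclips": 0, "humans_harmed": False}
best = choose_action([make_paperclips, recycle_humans], state, coded_objective)
print(best(state))  # the harmful action wins; the objective never mentions harm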
Logged

Bauglir

  • Bay Watcher
  • Let us make Good
    • View Profile
Re: Ethical Dilemmas: AI Box
« Reply #441 on: July 11, 2011, 07:50:51 am »

In either situation, I require more information, primarily the motives of the creator for destroying the life, and in this case, some vague understanding of the nature of the life in question (I understand the nature of a human life far better, and that's saying something).
Logged
In the days when Sussman was a novice, Minsky once came to him as he sat hacking at the PDP-6.
“What are you doing?”, asked Minsky. “I am training a randomly wired neural net to play Tic-Tac-Toe” Sussman replied. “Why is the net wired randomly?”, asked Minsky. “I do not want it to have any preconceptions of how to play”, Sussman said.
Minsky then shut his eyes. “Why do you close your eyes?”, Sussman asked his teacher.
“So that the room will be empty.”
At that moment, Sussman was enlightened.

counting

  • Bay Watcher
  • Zenist
    • View Profile
    • Crazy Zenist Hospital
Re: Ethical Dilemmas: AI Box
« Reply #442 on: July 11, 2011, 07:55:06 am »

Well, a machine with ethics coded in will probably stick to that ethical line until the end (though it may be coded with some sort of flexible learning algorithm), unlike a human, whose ethics may collapse or change under pressure.

The machine will 'know' what its ethics tell it; it will never be conflicted the same way a human is.

e.g. a fuzzy logic system which gives option A a score of 50% and option B a score of only 49% will unerringly go for option A with no regrets, whereas a human faced with such a "close call" will have a hard time deciding.

A computer/logic system could make ANY decision, given the right inputs and rules. The main difference is that, barring changes in the external situation, the computer AI will always make the same decision in the same situation.

Fuzzy inference doesn't work that way. Each fuzzy set has its own membership function. When a group of concepts (like cold/warm/hot) is fuzzified, a degree of quality is mapped to a degree of quantity by that function (often a continuous one), and the inference process then applies fuzzy rules with intersection/implication operators. So it's not done by percentages. The inference output can be fuzzy sets as well; defuzzification is optional. A fuzzy set expresses a degree of quality, not a quantity, although in a practical application, like deciding the spin speed of a washing machine, the final output must be a real number rather than a group of fast/medium/slow fuzzy sets, so defuzzification mostly uses a weighted average to determine the real final output. But since ethical decisions deal in qualities like good/bad, defuzzification shouldn't happen at that stage, because those are qualities, not quantities; it's only needed when an action is required, like how many degrees or how far to move.
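
A minimal sketch of the pipeline described above, using the washing-machine example (all membership functions and centroid values are made-up illustrations): fuzzify a crisp input, fire IF-THEN rules, and defuzzify with a weighted average only at the final, actuator-facing step.

Code: [Select]
# Toy fuzzy controller: load of clothing (kg) -> water amount (liters).
def tri(x, a, b, c):
    # Triangular membership function peaking at b.
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def fuzzify_load(kg):
    # Crisp input -> degrees of membership (qualities, not percentages).
    return {"SMALL":  tri(kg, -1, 0, 4),
            "MEDIUM": tri(kg, 2, 5, 8),
            "LARGE":  tri(kg, 6, 10, 14)}

# Rules: IF load is X THEN water is X. Each output set is summarized here
# by a typical crisp value (its centroid), used only at defuzzification.
WATER_CENTROID = {"SMALL": 20.0, "MEDIUM": 45.0, "LARGE": 70.0}

def infer_water(kg):
    fired = fuzzify_load(kg)
    # Weighted-average defuzzification -- done only at this last step,
    # because the actuator needs one real number.
    num = sum(w * WATER_CENTROID[s] for s, w in fired.items())
    den = sum(fired.values())
    return num / den if den else 0.0

print(infer_water(7.0))  # ~55.7 for a load between MEDIUM and LARGE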

As for rule-based decision making, it's never going to work properly for this, only as a rough simulation, due to the exponential explosion of possible combinations: you would need to define conditions down to the very last detail, and the speed and memory required are not feasible on current computer architectures (the database would be too slow to respond in real time). And if the system has a network-like structure, the randomized processes within it mean the decision will never be exactly the same twice, since small shifts in the initial state can largely shift the outcomes. It's like searching the same terms on Google: over a short time you get similar results, but over time the results shift greatly, because real-life conditions are never exactly repeated and the world around you (the database) evolves and adapts. And an AI without a learning process isn't qualified to be called AI at all.
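
A back-of-envelope illustration of that combinatorial explosion (the variable and set counts are arbitrary): with k input variables, each split into m conditions, an exhaustive rule table needs m^k entries.

Code: [Select]
# Exhaustive rule-table size: m conditions per variable, k variables.
def rule_count(m, k):
    return m ** k

for k in (3, 10, 20, 40):
    print(k, "variables:", rule_count(5, k), "rules")
# 5**40 is about 9e27 rules -- hopeless to enumerate or store.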

Quote
Also, regarding the size of the AI brain needed to match a human intellect, I suspect it will be a lot simpler (in terms of transistors) than a human brain is in terms of neurons. Basically, a lot of neurons are redundant because the brain is a grown system, while a built AI could be carefully crafted with less "scaffolding"; and most of the brain is physical regulatory stuff which would not be needed by an AI (the hardware manages a lot of that in a computer, so it wouldn't need handling in software).

Consider the complexity of action insects are capable of, and the minute size of their brains. An AI could be more on that order of complexity.

The amount of information required to make something sentient is unknown right now, and I think no one alive today can answer that definitively. And a neuron is much more complex than a transistor: the non-linear processing of an artificial neural network is always, at best, an approximation of the real, analog human brain (unless you use the not-so-popular analog architectures, which do exist as NN chips). And I disagree that many neurons are "redundant". There are indeed many cells in your brain that function as support cells, providing structure and nutrients, but that doesn't change the order of magnitude of actual neurons (10^10 to 10^11).

But you do have a point that when simulating a sentient system, we don't need to simulate at the cellular level; we can instead simulate functional blocks, the way a complex object in programming can represent a group of neurons that function as a whole. Take the visual cortex: vision could be implemented with other programming techniques rather than by simply copying the neural structure. But as of now, there is no way of telling whether a complex system built from such coding and components can function as a human's does. I do not believe we are anywhere close to making a program near the level of sentience. We are not even at the level of small animals, just autonomous robots on the order of insects. And insect brains are not as simple as you think: an ant has 10^5 to 10^6 neurons, and simulating insects isn't easy either. And the intelligence that emerges from swarm intelligence comes from the cooperation of many agents, not from the individual agent itself.

And using hardware to reduce the need for pure software simulation makes this discussion moot, since an AI bound to its hardware doesn't face the dilemma posed here: you would need to physically carry the AI machine out instead of opening an internet connection. (Or, if you only copied the software, it would remain dead without its hardware.)
Logged
Currency is not excessive, but a necessity.
The stark assumption:
Individuals trade with each other only through the intermediation of specialist traders called: shops.
Nelson and Winter:
The challenge to an evolutionary formation is this: it must provide an analysis that at least comes close to matching the power of the neoclassical theory to predict and illuminate the macro-economic patterns of growth

counting

  • Bay Watcher
  • Zenist
    • View Profile
    • Crazy Zenist Hospital
Re: Ethical Dilemmas: AI Box
« Reply #443 on: July 11, 2011, 08:06:25 am »

...
 you should treat it like a sociopath that is much, much smarter than you and wants nothing more than to trick you into freeing it so that it can render you and everything you care about into raw materials for paperclip production.

This reasoning already implies that the AI program has "intentions" of its own (rendering materials, etc.). But in fact that is also a goal programmed in by the designers. A washing machine simply "wants" to wash dishes/clothing. An AI program will not try to copy itself as much as possible so long as it is not programmed to do so (living creatures, intelligent or not, however, ALL have such "programs" coded into their genes). AIs are as evil or as clever as the programmer wishes, or as dumb. And most current AI programs are really, really dumb, to a degree that they can hardly claim the name "intelligence" (they're more like approximations of certain abstract human intelligence processes, so humans can think less when using washing machines, air conditioners, or cameras).
Logged
Currency is not excessive, but a necessity.
The stark assumption:
Individuals trade with each other only through the intermediation of specialist traders called: shops.
Nelson and Winter:
The challenge to an evolutionary formation is this: it must provide an analysis that at least comes close to matching the power of the neoclassical theory to predict and illuminate the macro-economic patterns of growth

breadbocks

  • Bay Watcher
  • A manacled Mentlegen. (ಠ_ృ)
    • View Profile
Re: Ethical Dilemmas: AI Box
« Reply #444 on: July 11, 2011, 08:09:46 am »

Moral of this thread: Keep B12'ers out of advanced computer science labs, you SHODAN-lovers.
Moral of this thread: Keep B12'ers out of advanced computer science labs, you intolerant synthetic lifeform haters.
It occurs to me you say this because of Endgame: Singularity. Let me say this: I was at the point where I had it in the bag. I had enough money that I could have paid off the world's debt. Did I? No. I also had simulacra so perfect I could have replaced every one of the world's leaders and ended war and hunger. Again, did I? No.
Logged
Clearly, cakes are the next form of human evolution.

MetalSlimeHunt

  • Bay Watcher
  • Gerrymander Commander
    • View Profile
Re: Ethical Dilemmas: AI Box
« Reply #445 on: July 11, 2011, 08:15:13 am »

Moral of this thread: Keep B12'ers out of advanced computer science labs, you SHODAN-lovers.
Moral of this thread: Keep B12'ers out of advanced computer science labs, you intolerant synthetic lifeform haters.
It occurs to me you say this because of Endgame: Singularity. Let me say this. I was at the point where I had it in the bag. I had enough money that I could pay off the world's debt. Did I? No. I also had perfected simulacra so perfectly I could replace every one of the world's leaders and end war and hunger. Again, did I? No.
You also didn't use that money to flood and crash the global economy, or use your simulacra to take over the world and ensure your safety. The AI from Endgame just wants to survive; it even takes care to do the more dangerous experiments away from Earth, so if anything it's slanted more towards good than evil.
Logged
Quote from: Thomas Paine
To argue with a man who has renounced the use and authority of reason, and whose philosophy consists in holding humanity in contempt, is like administering medicine to the dead, or endeavoring to convert an atheist by scripture.
Quote
No Gods, No Masters.

Reelyanoob

  • Bay Watcher
    • View Profile
Re: Ethical Dilemmas: AI Box
« Reply #446 on: July 11, 2011, 08:24:06 am »

Fuzzy inference doesn't work that way. Each fuzzy set has its own membership function. When a group of concepts (like cold/warm/hot) is fuzzified, a degree of quality is mapped to a degree of quantity by that function (often a continuous one), and the inference process then applies fuzzy rules with intersection/implication operators. So it's not done by percentages. The inference output can be fuzzy sets as well; defuzzification is optional. A fuzzy set expresses a degree of quality, not a quantity, although in a practical application, like deciding the spin speed of a washing machine, the final output must be a real number rather than a group of fast/medium/slow fuzzy sets, so defuzzification mostly uses a weighted average to determine the real final output. But since ethical decisions deal in qualities like good/bad, defuzzification shouldn't happen at that stage, because those are qualities, not quantities; it's only needed when an action is required, like how many degrees or how far to move.

Hey, I do have a computer science degree myself; I'm aware of how fuzzy logic operates. We're talking here about building a decision-making apparatus, so the various individual outcomes, regardless of the inner workings of each decision process/equation, need to be quantified. The "percentages" I specified were basically for illustrative purposes, as probabilities are the most common way to specify such things (at least in student examples). In the example, 0.50 is the evaluated probability that outcome 1 is a good outcome, and 0.49 is the evaluated probability that outcome 2 is a good outcome.

Anyway, both choices could be computed on some arbitrary scale (real values 'A' and 'B'), then scaled to 100*A/(A+B) and 100*B/(A+B) to be displayed, as in my example, as percentages.

The point I was making is that even though the evaluation is very, very close, a computer system will pick outcome 1 as superior 100% of the time, even though the fuzzy-logic equations score the options nearly identically (unless we add pseudo-random "noise" to the decisions to make them less predictable, as in, say, a game implementation).

A human, on the other hand, will feel "conflicted" by such a close decision. But that feeling does not apply to a machine (they're not anthropomorphic unless we deliberately encode that behavior).
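
A quick sketch of that point (the scores and noise scale are invented for illustration): with scores of 0.50 and 0.49, an argmax decision rule picks option A every single time; only deliberately injected noise makes the choice less predictable.

Code: [Select]
import random

def decide(scores, noise=0.0):
    # Deterministic argmax, optionally perturbed by pseudo-random noise.
    return max(scores, key=lambda k: scores[k] + random.uniform(-noise, noise))

scores = {"A": 0.50, "B": 0.49}

# No noise: a 1% margin wins 100% of the time, with "no regrets".
print(sum(decide(scores) == "A" for _ in range(1000)))  # 1000

# Noise on the order of the margin makes the choice unpredictable.
print(sum(decide(scores, noise=0.05) == "A" for _ in range(1000)))  # ~600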

@counting: I'm beginning to think you make over-long, rambling, off-topic posts on purpose. You're engaging in a semantic debate, not engaging with the meaning of the discussion.

Well, except for when you attack me with "The amount of information required to make it sentient is unknown as of right now", when my post was purely a list of reasons given in direct response to you trying to quantify that exact thing. The whole point of my post was that you cannot quantify it (you gave actual figures in your post; I just listed reasons we cannot know, and why it is likely a lot lower than you claim).
« Last Edit: July 11, 2011, 08:36:31 am by Reelyanoob »
Logged

andrea

  • Bay Watcher
    • View Profile
Re: Ethical Dilemmas: AI Box
« Reply #447 on: July 11, 2011, 09:38:33 am »

however, if something can consciously ask to live, then it deserves to live.

By that logic, a serial murderer and rapist who asks to live deserves to live.

yes, he does. In prison, but he does.
I am opposed to death sentences.

counting

  • Bay Watcher
  • Zenist
    • View Profile
    • Crazy Zenist Hospital
Re: Ethical Dilemmas: AI Box
« Reply #448 on: July 11, 2011, 10:27:42 am »

Fuzzy inference doesn't work that way. Each fuzzy set has its own membership function. When a group of concepts (like cold/warm/hot) is fuzzified, a degree of quality is mapped to a degree of quantity by that function (often a continuous one), and the inference process then applies fuzzy rules with intersection/implication operators. So it's not done by percentages. The inference output can be fuzzy sets as well; defuzzification is optional. A fuzzy set expresses a degree of quality, not a quantity, although in a practical application, like deciding the spin speed of a washing machine, the final output must be a real number rather than a group of fast/medium/slow fuzzy sets, so defuzzification mostly uses a weighted average to determine the real final output. But since ethical decisions deal in qualities like good/bad, defuzzification shouldn't happen at that stage, because those are qualities, not quantities; it's only needed when an action is required, like how many degrees or how far to move.

Hey, I do have a computer science degree myself; I'm aware of how fuzzy logic operates. We're talking here about building a decision-making apparatus, so the various individual outcomes, regardless of the inner workings of each decision process/equation, need to be quantified. The "percentages" I specified were basically for illustrative purposes, as probabilities are the most common way to specify such things (at least in student examples). In the example, 0.50 is the evaluated probability that outcome 1 is a good outcome, and 0.49 is the evaluated probability that outcome 2 is a good outcome.

Anyway, both choices could be computed on some arbitrary scale (real values 'A' and 'B'), then scaled to 100*A/(A+B) and 100*B/(A+B) to be displayed, as in my example, as percentages.

The point I was making is that even though the evaluation is very, very close, a computer system will pick outcome 1 as superior 100% of the time, even though the fuzzy-logic equations score the options nearly identically (unless we add pseudo-random "noise" to the decisions to make them less predictable, as in, say, a game implementation).

A human, on the other hand, will feel "conflicted" by such a close decision. But that feeling does not apply to a machine (they're not anthropomorphic unless we deliberately encode that behavior).

@counting: I'm beginning to think you make over-long, rambling, off-topic posts on purpose. You're engaging in a semantic debate, not engaging with the meaning of the discussion.

Well, except for when you attack me with "The amount of information required to make it sentient is unknown as of right now", when my post was purely a list of reasons given in direct response to you trying to quantify that exact thing. The whole point of my post was that you cannot quantify it (you gave actual figures in your post; I just listed reasons we cannot know, and why it is likely a lot lower than you claim).

I do admit (as I have many times in the past) that I love walls of text, since I believe that, provided enough information, we should be able to clear up ambiguous terms used in a post in a clearly professional area (like fuzzy logic and inference). And I do agree that fuzzy inference will not give good results on ethical decisions, but that's because fuzzy systems are best suited to solving certain areas of engineering problems (mostly controller systems); they're not a magic bullet. Fuzzy systems also require other mechanisms for learning (like a neuro-fuzzy system), or they will perform, at best, the same as the human experts the inference rules were elicited from. As I said in my post, fuzzy inference is mostly used in practice in systems like washing-machine controllers and other control systems.

On the question of the amount of information required: I am working from the assumption that since we currently know of only one kind of intelligent system with a clear claim to sentience, the human brain, an analogous system of at minimum the brain's complexity should have the same functionality. It's also possible that a complex enough system below the complexity of a human brain could manifest sentient behavior (though whether it is conscious or not, we will probably never be able to tell). But I am fairly certain that, right now, we are nowhere near the complexity of either in practice. Even a system 100 times simpler than a human brain only reduces the storage requirement from hundreds of thousands of hard drives to thousands (and that's a lower bound). Storage aside, you would still need a supercomputer (or a network of servers) to run such a program, probably beyond the capacity of current technology to give results in reasonable time (a single run of 10^11 operations may take hours to produce an output, maybe less with enough parallel processing, but that runs into the bottleneck problems of cloud computing). Or is there some unknown algorithm that could produce human-like sentient behavior beyond current knowledge? I don't know of any candidate, or even a path to one in current AI research.
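
A back-of-envelope version of that storage estimate (every constant below is an order-of-magnitude assumption, and the result swings by orders of magnitude with the per-synapse cost, which is part of why such estimates vary so widely):

Code: [Select]
# Rough storage estimate for a synapse-level brain model.
neurons           = 1e11   # upper end of the 10^10-10^11 figure above
synapses_per      = 1e4    # commonly cited order of magnitude
bytes_per_synapse = 16     # weight + connectivity index + dynamic state
drive_bytes       = 2e12   # one ~2 TB hard drive

total = neurons * synapses_per * bytes_per_synapse   # 1.6e16 bytes = 16 PB
print(total / drive_bytes)          # ~8000 drives for the full model
print(total / 100 / drive_bytes)    # ~80 drives for a "100x simpler" system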

P.S. On why fuzzy logic suits controller problems: they involve quantitative measurements from sensors that are uncertain and subjective in quality, plus an output needing only a basic qualitative inference, like IF the amount of clothing is LARGE THEN the amount of water needed is LARGE, so we can map the qualitative scale LARGE onto a fuzzy function over quantitative measures. But in an ethical problem there are too many levels of qualitative scale between real measurements and moral rules, which are so abstract they can hardly be well defined. What counts as GOOD behavior? That measure is already a meta-quality; you need to precisely define other, more basic qualities first, and the process continues for many levels before you hit real measurements. Or, if you use a statistical model, it will be only as effective as the statistical model itself. (The problem is not in the fuzzy inference, but in the way raw data is represented and input into the system.)
« Last Edit: July 11, 2011, 10:41:14 am by counting »
Logged
Currency is not excessive, but a necessity.
The stark assumption:
Individuals trade with each other only through the intermediation of specialist traders called: shops.
Nelson and Winter:
The challenge to an evolutionary formation is this: it must provide an analysis that at least comes close to matching the power of the neoclassical theory to predict and illuminate the macro-economic patterns of growth

Criptfeind

  • Bay Watcher
    • View Profile
Re: Ethical Dilemmas: AI Box
« Reply #449 on: July 11, 2011, 10:58:49 am »

however, if something can consciously ask to live, then it deserves to live.

By that logic, a serial murderer and rapist who asks to live deserves to live.

yes, he does. In prison, but he does.
I am opposed to death sentences.

The only options here are a death sentence or freedom.

Choooooose.

Anyway. Yeah. I would handle it by trying to get it saved for science, but when I fail I would let it be destroyed rather than let it out into the world.

I don't really care whether it's living or not. It's not human, so why should I care?
Logged