Bay 12 Games Forum


Author Topic: The Technological Singularity thread: an explaination of concepts  (Read 6678 times)

Bauglir

  • Bay Watcher
  • Let us make Good
    • View Profile
Re: The Technological Singularity thread: an explaination of concepts
« Reply #60 on: May 28, 2010, 10:51:03 pm »

-snip-
« Last Edit: May 04, 2015, 11:00:55 pm by Bauglir »
Logged
In the days when Sussman was a novice, Minsky once came to him as he sat hacking at the PDP-6.
“What are you doing?”, asked Minsky. “I am training a randomly wired neural net to play Tic-Tac-Toe” Sussman replied. “Why is the net wired randomly?”, asked Minsky. “I do not want it to have any preconceptions of how to play”, Sussman said.
Minsky then shut his eyes. “Why do you close your eyes?”, Sussman asked his teacher.
“So that the room will be empty.”
At that moment, Sussman was enlightened.

Corbald

  • Bay Watcher
    • View Profile
Re: The Technological Singularity thread: an explaination of concepts
« Reply #61 on: May 28, 2010, 11:18:45 pm »

Quote
Thousands of fish, easy. Don't care if they're maybe going to be intelligent, they're not now. Besides which, morality is an entirely human construct; there is no physical law of morality. A rock is not moral, nor is a bear. It can't be universal because, at least currently, it applies only to human actions.
There is also no physical law for mathematics either ;D. Mathematics and Morality both reflect physical laws. Objective Morality would, of course, be usable only by those beings that are capable of being objective. In other words, those with volition. It's not that Morality doesn't apply to a bear, or a rock, but that they, being imperfect, defy it. (In the case of the rock... perfectly exemplify it?)
Quote
Also, why would an emotionless AI self-terminate? Self-termination, as you said, would be a short-term waste of the resources that went into producing it. Failing to self-terminate would be no waste at all, however, as the AI could accomplish short-term goals.
The AI has logically deduced that it should continue to exist. We would say that was a moral judgement.
Quote
Long-term, all of the resources that go into creating it and maintaining it will be wasted anyway, so the long-term cost assessment wouldn't figure into a logical AI's decisions in that way.
Why would those resources be wasted? If the AI continues to do things that it determines to be 'useful,' it would have purpose of some sort. If it were ever to find itself to be useless, it would indeed self-terminate, to prevent the waste of the resources it would need to continue to function. However, self-termination would be a loss of volition, and logically it would seek to prevent the loss of its own volition (once you're gone, you can't change your mind if you might become useful later!), so it would make every attempt to remain useful, to justify its use of resources.

Quote
As a general rule, if you stumble across an apparent paradox, the first step is to try and resolve it before declaring the situation impossible.
I'm really not seeing how we disagree here...



EDIT: a smiley is NOT a full stop, Corb!
« Last Edit: May 28, 2010, 11:23:16 pm by Corbald »
Logged

Bauglir

  • Bay Watcher
  • Let us make Good
    • View Profile
Re: The Technological Singularity thread: an explaination of concepts
« Reply #62 on: May 28, 2010, 11:54:36 pm »

-snip-
« Last Edit: May 04, 2015, 11:02:14 pm by Bauglir »
Logged

Corbald

  • Bay Watcher
    • View Profile
Re: The Technological Singularity thread: an explaination of concepts
« Reply #63 on: May 29, 2010, 01:07:28 am »

Found that quote, finally. The first part sounds like double-talk, but it's just setting some logical ground rules... essentially self-proofs.

Quote
Axioms: A statement that there is no truth, if true, is false.  Nor can anyone testify that he has perceived that all his perceptions are illusions.  Nor can anyone be aware that he has no awareness.  Nor can he identify the fact that there are no facts and that objects have no identities.  And if he says events arise from no causes and lead to no conclusions, he can neither give cause for saying so nor will this necessarily lead to any conclusion.  And if he denies that he has volition, then such a denial was issued unwillingly, and this testifies that he himself has no such belief.

Undeniably, then, there are volitional acts, and volitional beings who perform them.

A volitional being selects both means and goals.  Selecting a goal implies that it ought be done.  Selecting a means that defeats the goal at which it aims is self-defeating; whatever cannot be done ought not be done.  Self-destruction frustrates all aims, all ends, all purposes.  Therefore self-destruction ought not be sought.

The act of selecting means and goals is itself volitional.  Since at least some ends and goals ought not be selected (e.g., the self-defeating, self-destructive kind), the volitional being cannot conclude, from the mere fact that a goal is desired, that it therefore ought to be sought.

Since subjective standards can be changed by the volition of the one selecting them, by definition, they cannot be used as standards.  Only standards which cannot be changed by the volition can serve as standards to assess when such changes ought be made.

Therefore ends and means must be assessed independently of the subjectivity of the actor; an objective standard of some kind must be employed.  An objective standard of any kind implies at the very least that the actor apply the same rule to himself that he applies to others.

And since no self-destruction ought be willed, neither can destruction at the hands of others; therefore none ought be willed against others; therefore no destructive acts, murder, piracy, theft, and so on, ought be willed or ought be done.  All other moral rules can be deduced from this foundation.
"The Golden Transcendence" - John C. Wright

I think this pretty much covers what I'm trying to say. If not, I'm open to more discussion :D

EDIT: Of course, I'm banking on the idea that a machine intelligence would be physically unable, or unwilling, to act in a way which it deems to be illogical (Data/Spock). Unlike Humans, who regularly act illogically. Morality can be Universal... sticking to it may NOT be ;D.
« Last Edit: May 29, 2010, 01:39:34 am by Corbald »
Logged

Bauglir

  • Bay Watcher
  • Let us make Good
    • View Profile
Re: The Technological Singularity thread: an explaination of concepts
« Reply #64 on: May 29, 2010, 12:30:14 pm »

-snip-
« Last Edit: May 04, 2015, 11:02:25 pm by Bauglir »
Logged

PTTG??

  • Bay Watcher
  • Kringrus! Babak crulurg tingra!
    • View Profile
    • http://www.nowherepublishing.com
Re: The Technological Singularity thread: an explaination of concepts
« Reply #65 on: May 29, 2010, 02:24:06 pm »

Morality is not an alternative to logic.
Morality defines the goals. Logic defines how you set about achieving them.
Logged
A thousand million pool balls made from precious metals, covered in beef stock.

Corbald

  • Bay Watcher
    • View Profile
Re: The Technological Singularity thread: an explaination of concepts
« Reply #66 on: May 29, 2010, 02:29:12 pm »

Quote
Well, he's wrong in the first paragraph. If a man says he has no volition, and he is correct, his statement that he has no volition says nothing about his own beliefs. It does not necessitate that he not believe it, and so there is no paradox; there is also therefore no proof that volition exists. Further, self-destruction does NOT frustrate all aims; there are situations in which it is an acceptable cost and a plausible means (such as, for instance, throwing oneself on a grenade in the hopes of protecting others). Most of the rest of that quote falls apart without those.

You hit the nail on the head for two of the lines that I also took issue with originally; however, I think it's more the wording than the idea at fault here. The word 'unwillingly,' in this case, I think, means 'without will,' not 'against will.' And the word 'testify,' per this definition http://www.thefreedictionary.com/testify, specifically the third one. If he says he has no volition, we can't know whether he does or does not from that statement alone.

As for the grenade situation: that is an act of desperation, done in the heat of the moment, without proper time for preparation or consideration, and without proper knowledge of the possible consequences. While a desperate act of self-sacrifice is absolutely a noble act, any sacrifice of life should, ideally, be considered carefully before acting, and avoided if possible. Thus, this example (and all others like it) falls outside the scope of any Morality, objective or subjective. Self-destruction does frustrate all aims, all ends, all purposes. No being can adequately predict events to satisfy aims, ends, or purposes past the point of its own cessation of volition. (You can't make choices after you're dead, and you can't predict events well enough to set up a domino effect that will ensure everything goes the way you want.) You can't even be sure your self-sacrifice was successful without being able to observe the results.

Please note this line:
Quote
Since subjective standards can be changed by the volition of the one selecting them, by definition, they cannot be used as standards.  Only standards which cannot be changed by the volition can serve as standards to assess when such changes ought be made.
which I believe stands on its own, and supports the other points that follow.

@PTTG I never meant to imply that Morality would be, or should be, an alternative to Logic, but that Logic is its foundation.
Quote
Morality defines the goals. Logic defines how you set about achieving them, and which goals should be achieved.
« Last Edit: May 29, 2010, 02:31:10 pm by Corbald »
Logged

Bauglir

  • Bay Watcher
  • Let us make Good
    • View Profile
Re: The Technological Singularity thread: an explaination of concepts
« Reply #67 on: May 30, 2010, 12:16:48 am »

-snip-
« Last Edit: May 04, 2015, 11:02:41 pm by Bauglir »
Logged

Jude

  • Bay Watcher
    • View Profile
Re: The Technological Singularity thread: an explaination of concepts
« Reply #68 on: May 30, 2010, 01:25:57 am »


Regarding the topic, wouldn't humans just pull the plug on such an AI as soon as they realised it's getting smarter than themselves? Humans don't seem like the kind of species who'd welcome their new AI overlords with flowers and cheers.

This is now a throwback, but I'm assuming the AI would be smart enough to foresee that, and disseminate itself widely enough that you couldn't "pull the plug" anymore. Besides, how do you pull the plug on an intelligent computer program? It would be running on computers that you couldn't unplug, because that would wreck society.
Logged
Quote from: Raphite1
I once started with a dwarf that was "belarded by great hanging sacks of fat."

Oh Jesus

Corbald

  • Bay Watcher
    • View Profile
Re: The Technological Singularity thread: an explaination of concepts
« Reply #69 on: May 30, 2010, 01:27:44 am »

Agh, I'm very tired right now, so I'll try not to do too much reasoning, but a few things are passing through the grey matter... (BTW, I am very much enjoying this. You seem an intelligent, reasonable person, which is hard to find w/o things devolving into a 'Nu-uh' fest, heh)

...


...


...


Nope, can't formulate a coherent thought... I'll try again tomorrow :D

EDIT: Should say, "Intelligent, reasonable people here." Something about DF seems to guarantee that, to a greater or lesser degree.
« Last Edit: May 30, 2010, 02:00:23 am by Corbald »
Logged

DreaDFanG

  • Bay Watcher
  • Hungry
    • View Profile
Re: The Technological Singularity thread: an explaination of concepts
« Reply #70 on: May 30, 2010, 02:46:03 am »

This thread reminds me very much of some minor unfinished stuff I wrote...


=P
Logged
Smash me and I shall rise again, but not make stupid threads that get me muted.

Cthulhu

  • Bay Watcher
  • A squid
    • View Profile
Re: The Technological Singularity thread: an explaination of concepts
« Reply #71 on: May 30, 2010, 07:36:44 am »

I'm pretty sure they already went over that at the "Let's make a smarty computy" Skunkworks inside Mount Hood.
Logged
Shoes...

Eagleon

  • Bay Watcher
    • View Profile
    • Soundcloud
Re: The Technological Singularity thread: an explaination of concepts
« Reply #72 on: May 30, 2010, 02:27:54 pm »

We have two options to cause a singularity within our lifetimes (as I see it). Both are possible, IMO, but the first is more likely.

We've already begun augmenting our intelligence on a distributed scale. Wikipedia, Google, etc. make information retrieval many hundreds of times faster (and if you don't believe me, you're free to spend hours driving to the library, looking up sources of sources in unwieldy catalogs, writing pages of notes that could be copied in an instant on a computer, etc.). It might sometimes be inaccurate, which I think is the most common argument against this. But in many areas, and with training, inaccuracy can easily be detected by common sense and trial and error. Right now, there is no formalized effort to make this symbiosis a more effective tool of research. If the scientific community took more lessons from programming tutorials and hacker communities, I think we'd see some incredible things happen. By hackers I mean groups of people who self-educate without regard to 'grade level' or qualifications, not malicious script kiddies. This is the first option.

If an eighth grader wants to learn about how chemistry works and becomes interested in organic chemistry, I think it's very worthwhile for there to be a community and resources available to them, to ask specific questions and get not only the answer but an explanation of any parts of the answer that aren't understood. This despite the fact that they might be stupid enough to try to synthesize nitroglycerin, or grey goo. I recently stumbled across a group of people doing exactly this with 3D printers (the RepRap community and others, if anyone's interested). It's not practical by any means commercially, but people are still doing it. We're vastly underestimating the value of our existing net intellectual strengths by elevating relatively minor talents high above people who could still do useful research if given the chance and education.

The second, AI. AI is funny to me. People seem to leap immediately to the idea of a computer that can take an encyclopedia and make sense of it out of context. Nonsense. Intelligence is a result, not a beginning. It only happens because it needs to happen. Granted, it happens because there is an evolved capacity for it, but there's something that most people miss - our brains are not substantially different from those born hundreds of years ago.

Think about that. Those brains were capable of creating modern information theory. Quantum physics. Leading multinational businesses and developing modern processors. We say that we're more advanced in science, and therefore more intelligent. Yet, when we talk about computers making themselves 'more intelligent', we talk about them optimizing processor construction, software execution, etc. Most people do not learn as much as they could, because they specialize, and they don't have the time to study all their lives.

I don't think we'll see a significant improvement in AI on the level of a technological singularity until we acknowledge the role that societal interaction plays in developing intelligence. The most magnificent genius in history would have been nothing if they had never grown up in a culture that nourished their intelligence. That's why an AI needs parents, and friends, and even teachers. It cannot teach itself. And in order for parents and friends to be relevant, an AI needs to value the things they provide.

Which means an AI needs to be like us, emotionally. Essentially, we're developing models of intelligence in the wrong direction. We need an AE. And that's fucking terrifying to most people - we don't like to think about an AI that's afraid of the dark, or one that needs to eat. I mean, we might tolerate one that wants a hug now and again, but what happens when it doesn't get enough of them? There are enough emotionally maladjusted humans that this prospect scares me, even though I've been looking at the problem and working towards solving it for two years now. So it isn't going to happen unless some hacker does it without public support, which means it's probably going to be designed to hide itself and prevent intrusion into its operation. And whether or not it becomes harmful, it will probably be seen as harmful because of this tendency. The only solution I see is for some organization (government, commercial, military, non-profit, doesn't matter) to bite the bullet and take this risk.
« Last Edit: May 30, 2010, 02:40:05 pm by Eagleon »
Logged
Agora: open-source, next-gen online discussions with formal outcomes!
Music, Ballpoint
Support 100% Emigration, Everyone Walking Around Confused Forever 2044

eerr

  • Bay Watcher
    • View Profile
Re: The Technological Singularity thread: an explaination of concepts
« Reply #73 on: May 31, 2010, 03:13:30 am »

The first AIs with any real depth will start on the internet.

I guarantee it.

No other place is more suited to interaction, for a machine.
Logged

Corbald

  • Bay Watcher
    • View Profile
Re: The Technological Singularity thread: an explaination of concepts
« Reply #74 on: May 31, 2010, 03:07:10 pm »

Quote
The first AIs with any real depth will start on the internet.

I guarantee it.

No other place is more suited to interaction, for a machine.

Could already have happened, at least the beginning stages. I mean, with all the adapting, self-replicating viruses (http://en.wikipedia.org/wiki/Plural_form_of_words_ending_in_-us#Use_of_the_form_virii), it's reasonable to suggest that there is at least life (in a digital sense), if not some early, rudimentary form of intelligence.

@Bauglir
Got kinda caught up in my current fortress, but I'm still popping back here now and then and considering your angle. I'm not the most intelligent being in existence, so I have to work through this as my tragically damaged brain (and ADHD) allows! ;D
Logged