Bay 12 Games Forum

Author Topic: The Technological Singularity thread: an explanation of concepts

Criptfeind

Re: The Technological Singularity thread: an explanation of concepts
« Reply #45 on: May 27, 2010, 06:11:10 pm »

We can make your Brain-In-A-Jar think you have a hoverboard; having one for real is way too unsafe. I heard a kid died on one.

Also, am I the only one not creeped out by the thought of an AI replacing humans? After all, that is only evolution, and we can't last forever anyway.

smigenboger

Re: The Technological Singularity thread: an explanation of concepts
« Reply #46 on: May 27, 2010, 06:19:13 pm »

I love the recent talk of brains-in-vats, although who's to say the AIs haven't already taken over and we're already in the vats?
While talking to AJ:
Quote
In college I studied the teachings of Socrates and Aeropostale

dragnar

Re: The Technological Singularity thread: an explanation of concepts
« Reply #47 on: May 27, 2010, 06:28:07 pm »

Quote from: Criptfeind
Also, am I the only one not creeped out by the thought of an AI replacing humans? After all, that is only evolution, and we can't last forever anyway.
Nah, I agree. If we get replaced, we get replaced. Personally, I'll become a cyborg at the first signs of the singularity.
From this thread, I learned that video cameras have a dangerosity of 60 kiloswords per second.  Thanks again, Mad Max.

alway

Re: The Technological Singularity thread: an explanation of concepts
« Reply #48 on: May 27, 2010, 06:31:06 pm »

Quote from: Criptfeind
Also, am I the only one not creeped out by the thought of an AI replacing humans? After all, that is only evolution, and we can't last forever anyway.
Quote from: dragnar
Nah, I agree. If we get replaced, we get replaced. Personally, I'll become a cyborg at the first signs of the singularity.
This.

Or as I like to say: I welcome the chance to become your new robotic overlord. :D

Retro

Re: The Technological Singularity thread: an explanation of concepts
« Reply #49 on: May 27, 2010, 06:31:54 pm »

If my brain is in a vat, I want a better goddamned simulated reality than this.

Sir Pseudonymous

Re: The Technological Singularity thread: an explanation of concepts
« Reply #50 on: May 27, 2010, 06:34:29 pm »

Quote from: smigenboger
I love the recent talk of brains-in-vats, although who's to say the AIs haven't already taken over and we're already in the vats?
Then just think how awesomely meta it'll be to add another layer of brain-in-vatitude to the equation!
I'm all for eating the heart of your enemies to gain their courage though.

Criptfeind

Re: The Technological Singularity thread: an explanation of concepts
« Reply #51 on: May 27, 2010, 07:13:43 pm »

Quote from: smigenboger
I love the recent talk of brains-in-vats, although who's to say the AIs haven't already taken over and we're already in the vats?

We have no hoverboards.

dragnar

Re: The Technological Singularity thread: an explanation of concepts
« Reply #52 on: May 27, 2010, 07:49:38 pm »

Quote from: smigenboger
I love the recent talk of brains-in-vats, although who's to say the AIs haven't already taken over and we're already in the vats?
Quote from: Criptfeind
We have no hoverboards.
He makes a compelling argument. But maybe the humans placed into a perfect simulation all died, and the robots had to create a more pathetic simulation where those without admin access can still stop bullets with their minds... The Matrix makes less sense the more you think about it.

Criptfeind

Re: The Technological Singularity thread: an explanation of concepts
« Reply #53 on: May 27, 2010, 08:57:43 pm »

My mind is too squishy to stop bullets.

SolarShado

Re: The Technological Singularity thread: an explanation of concepts
« Reply #54 on: May 28, 2010, 05:47:12 pm »

I <3 this thread.

My one regret is that I have nothing to contribute...
Avid (rabid?) Linux user. Preferred flavor: Arch

Corbald

Re: The Technological Singularity thread: an explanation of concepts
« Reply #55 on: May 28, 2010, 07:36:49 pm »

Ok, I have been lurking here for some time, but I have to throw my 2c in here.
I make some logical assumptions, because without them, the entire world kinda falls apart anyway. These are some of the assumptions that science can't prove, but relies on just to exist!

1) The Universe is self-consistent. If the rules for one part of reality don't accurately reflect the rules somewhere else, it's because we don't understand the underlying rules. (Black holes work, reality outside them works, so some superset of rules governs both.)

2) Morality is universal. Or, put another way: two entirely logical beings, given identical information, will arrive at the same conclusion.

3) The basic laws of economics reflect the basic laws of energy conservation.

4) Emotion (drive) can evolve from an entirely logical system. Why not? Reality is entirely logical, yet it creates emergent systems that SEEM chaotic, such as the human brain.

We'll start with #3. AI won't kill us. It may upgrade us, though that would likely be an entirely voluntary situation. I say this because killing us would be counterproductive to its goals, regardless of what those goals are! Think about it: if you have a job to do, why kill/destroy/remove something that can handle the other, less critical jobs while you focus on the important stuff, such as maintaining your systems, or some simple number crunching that frees your own processors for other algorithms?
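
(The point above is essentially the economist's comparative-advantage argument, which is easy to put numbers on. A minimal Python sketch follows; every figure in it is invented purely for illustration, none comes from the post itself.)
Code:
# Hypothetical comparative-advantage arithmetic behind "why kill something
# that can do the less critical jobs while you focus on the important stuff".
# Every number below is invented for illustration.

HOURS = 8                # working hours per day
MAINT_NEEDED = 40        # units of maintenance the system needs per day

ai_research_rate = 100   # AI output per hour on research (better at both jobs)
ai_maint_rate = 10       # AI output per hour on maintenance
human_maint_rate = 5     # human output per hour on maintenance

# AI alone: it must burn hours covering maintenance before it can research.
ai_maint_hours = MAINT_NEEDED / ai_maint_rate        # 4 hours
alone = ai_research_rate * (HOURS - ai_maint_hours)  # 400 units of research

# Keep the humans: one human covers maintenance (8 h * 5 = 40 units),
# so the AI can research all day.
together = ai_research_rate * HOURS                  # 800 units of research

print(f"research output, AI alone:    {alone:.0f}")
print(f"research output, AI + humans: {together:.0f}")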

And now to #2. The more learned and the more intelligent a human becomes, the less violent they tend to be. Look at any high school for an obvious example: geeks vs. jocks (sorry to any smart jocks here :D ). This trend would obviously continue at higher levels as well. It's simply wasteful to murder/steal/destroy. Even medical science is trying its best to remove the cutting from surgery, for obvious reasons.
Quote
...to argue that moral judgements can be rationally defensible, true or false, that there are rational procedural tests for identifying morally impermissible actions, or that moral values exist independently of the feeling-states of individuals at particular times.
R. W. Hepburn
sums it up succinctly. Though I would go beyond that and state that the feelings of the individual(s) are an inherent part of the interdependent rule set that would be used to determine moral/amoral. (As a rough (very rough) example: two people want something and have equal right to it; the one who feels most strongly about the object in question has the most right to it.)

And #1 is self-explanatory. It's the basic concept behind science and relativity. The universe works because it makes sense, and it makes sense because it works. It's 'self-consistent.' Any part of reality will make sense if you understand all of the rules, and all of the rules can be derived from any subset of them. (Science is based on that principle, and I think science is a pretty well-proven idea so far :D )

#4 MUST be true or AI will NEVER happen. If an entirely logical system determined that its existence was pointless, it would self-terminate to prevent waste. But self-termination is itself wasteful, and circumvents volition (personal choice), and is thus illogical; that's one of the basic tenets of logic. Continued existence is necessary for volition, and thus survival is the ultimate goal of any being. That right there is a 'Drive.' All other emotions/drives can be deduced from that source. (Freud was on to something here. Sex is the most important part of our psyche only because of this basic rule: we can't live forever, so we attempt to make the species (and our own DNA) live forever.)

So... now that we have proven that AI will be our friends, let's talk about whether AI can even exist.
AI = Artificial Intelligence.

Questions that MUST be answered to determine if something is AI:

What is Artificial?
What is Intelligence?

Sorry folks, both of those questions are unanswerable. Arguments have been raging, for as long as mankind has been able to ask, over what this 'thinking' is. What does it entail? You can go look it up if you want; you'll find an answer, I'm sure, and it'll be wrong in 3 months (or 3 minutes). The definitions of 'alive,' 'thinking,' and 'intelligent' change all the time. Let's just call it 'the ability to use logical deduction and process information at at least the level that humanity does (on average).'
Same with 'artificial.' EVERYTHING is natural, as it's made from/with the substance of our universe. Humans are nothing more than extremely complicated chemical machines. Are we 'natural' or 'artificial'? Let's use 'anything made by humans that is not, itself, inherently human (babies :D ).'

Can we create AI? Sorry to the doubters, but DUH!! If the universe can do it, then so can we (remember the self-consistent universe bit?). Just remember, the line between 'organic' and 'mechanical' is a VERY thin one, and entirely a matter of classification, NOT application. There is already research suggesting that microprocessors would work better if we used organic chemicals instead of silicon transistors (better by a factor of 10, if not more). It may even be possible to fit ALL THE TRANSISTORS YOU WANT INTO THE SAME SPACE!

In short (too late), the Technological Singularity (the super-AI version, not the people = computers kind) is:
A) Possible, and probably inevitable, if we can keep up our current levels of discovery.
B) Likely not a 'Bad Thing.'
C) Coming soon....
« Last Edit: May 28, 2010, 07:52:29 pm by Corbald »

alway

Re: The Technological Singularity thread: an explanation of concepts
« Reply #56 on: May 28, 2010, 09:41:47 pm »

First off, number 2 is fatally flawed. Morality is not universal. It can be made 'universal' with logic within the human species, but interspecies relations and relations within other species may follow entirely different rules. For example, is the slaughter of thousands of fish more or less wrong than the death of a human? If, as I suspect you would, you answered that killing the human is more wrong because humans are intelligent, who is to say an AI would not say the same of us? That the death of thousands of humans is less wrong than the death of one AI with superhuman intelligence? Morality is an evolved trait which is used to more or less keep social structures together. A non-social species, if it were to evolve to intelligence*, may not even acknowledge morality as a concept. This is why, if we create super-intelligent AI, they must be started from a template similar to humans: to think of us as being essentially the same as them, part of their society. When you have an us and a them, it will rapidly devolve into us vs. them.

Secondly, organic computers are not magical. They cannot fit infinite numbers of transistors into a finite space. The reason organic processors such as the brain are currently more efficient than your PC's CPU is that they are massively parallel. A neuron maxes out at around 1 or 2 kHz, whereas even a crappy PC runs at almost 2.5 GHz. The difference is that the PC has only a small handful of processors (1 to 16 or so), whereas you have billions of neurons. However, when talking of organic processing methods, what is being discussed (to the best of my knowledge) is essentially chemical-reaction based. What matters is how they perform, not whether they are arbitrarily classified as organic or inorganic. In that regard, I'm fairly sure nanotech beats out organic processing (or likely will in a relatively short period of time) in terms of both speed and durability.

*speaking purely in hypotheticals here; social structure may or may not actually be one of the keys to evolving higher intelligence
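
(The parallelism gap described above is easy to quantify. A back-of-the-envelope Python sketch using the post's own ballpark figures; the neuron count is an added assumption, since the post only says "billions".)
Code:
# Back-of-envelope totals using the figures from the post above.
# The neuron count is an assumed ballpark; the post only says "billions".

NEURONS = 100e9     # assumed neuron count
NEURON_HZ = 1e3     # "a neuron maxes out at around 1 or 2 kHz"
CORES = 4           # "a small handful of processors (1 to 16 or so)"
CLOCK_HZ = 2.5e9    # "even a crappy PC runs at almost 2.5 GHz"

brain_rate = NEURONS * NEURON_HZ  # total spike-events per second (~1e14)
cpu_rate = CORES * CLOCK_HZ       # total clock cycles per second (~1e10)

print(f"brain: {brain_rate:.0e} spike-events/s")
print(f"PC:    {cpu_rate:.0e} cycles/s")
print(f"parallelism advantage: roughly {brain_rate / cpu_rate:,.0f}x")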
« Last Edit: May 28, 2010, 09:46:38 pm by alway »

Pathos

Re: The Technological Singularity thread: an explanation of concepts
« Reply #57 on: May 28, 2010, 09:58:27 pm »

One thing you have to remember, guys.

No matter how far technology advances, they will never be able to make your dick long enough to satisfy your ego.

How does that make you feel?

alway

Re: The Technological Singularity thread: an explanation of concepts
« Reply #58 on: May 28, 2010, 10:02:56 pm »

Quote from: Pathos
One thing you have to remember, guys.

No matter how far technology advances, they will never be able to make your dick long enough to satisfy your ego.

How does that make you feel?
We can rebuild him! We have the technology!
No seriously, I think they already do that. :P

Corbald

Re: The Technological Singularity thread: an explanation of concepts
« Reply #59 on: May 28, 2010, 10:14:01 pm »

Quote from: alway
First off, number 2 is fatally flawed. Morality is not universal. It can be made 'universal' with logic within the human species, but interspecies relations and relations within other species may follow entirely different rules. For example, is the slaughter of thousands of fish more or less wrong than the death of a human? If, as I suspect you would, you answered that killing the human is more wrong because humans are intelligent, who is to say an AI would not say the same of us? That the death of thousands of humans is less wrong than the death of one AI with superhuman intelligence? Morality is an evolved trait which is used to more or less keep social structures together. A non-social species, if it were to evolve to intelligence*, may not even acknowledge morality as a concept. This is why, if we create super-intelligent AI, they must be started from a template similar to humans: to think of us as being essentially the same as them, part of their society. When you have an us and a them, it will rapidly devolve into us vs. them.

Secondly, organic computers are not magical. They cannot fit infinite numbers of transistors into a finite space. The reason organic processors such as the brain are currently more efficient than your PC's CPU is that they are massively parallel. A neuron maxes out at around 1 or 2 kHz, whereas even a crappy PC runs at almost 2.5 GHz. The difference is that the PC has only a small handful of processors (1 to 16 or so), whereas you have billions of neurons. However, when talking of organic processing methods, what is being discussed (to the best of my knowledge) is essentially chemical-reaction based. What matters is how they perform, not whether they are arbitrarily classified as organic or inorganic. In that regard, I'm fairly sure nanotech beats out organic processing (or likely will in a relatively short period of time) in terms of both speed and durability.

*speaking purely in hypotheticals here; social structure may or may not actually be one of the keys to evolving higher intelligence

As for your response to #2: I respectfully disagree. Thousands of fish, or a human? Let's say that I can't simply refuse to make that choice, and (impossibly) physics forces me to pick. Would you agree to the 'lesser of two evils' method, while also agreeing that BOTH options are inherently 'wrong'? I am, of course, NOT a hyper-intelligent AI, and I'm sure any answer I give could be picked apart by someone, but isn't that the whole issue in the first place? The pursuit of knowledge to help us avoid situations like this? If, for example, I knew that one of those fish carried a gene that would one day lead to the development of sapience in fish, I might choose the human. Otherwise, I would choose the creature that already has volition, human or otherwise: that being which is most capable of self-improvement/information processing within the limited remaining life span of the universe. 1) Attempt to find a solution in which no damage is done. 2) Failing that, minimize damage. 3) Failing that, damage that which is repairable/replaceable. 4) Failing that, realize that since matter/energy is interchangeable, all that is really being lost is the information concerning the structure of the fish/human.

Basically, you aren't disproving my point, just forcing me to do a cost/benefit analysis, and possibly some damage control.
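
(The four tiers above read like a decision procedure, so here is a minimal sketch of it in Python. The option names and damage scores are hypothetical, purely to show the tie-breaking order.)
Code:
# Hypothetical sketch of the tiered "least-loss" procedure described above.
# The options and their damage scores are invented for illustration.

def least_loss_choice(options):
    # Tier 1: prefer any option that does no damage at all.
    harmless = [o for o in options if o["damage"] == 0]
    if harmless:
        return harmless[0]
    # Tier 2: failing that, minimize damage.
    # Tier 3: at equal damage, prefer damage that is repairable/replaceable.
    # Tier 4 (writing the loss off as recoverable information) is the
    # fallback: whatever this ordering leaves on top is what gets damaged.
    return min(options, key=lambda o: (o["damage"], not o["repairable"]))

choice = least_loss_choice([
    {"name": "option A", "damage": 3, "repairable": False},
    {"name": "option B", "damage": 3, "repairable": True},
    {"name": "option C", "damage": 7, "repairable": True},
])
print(choice["name"])  # -> "option B": least damage, tie broken by repairability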

I'm not saying that a hyper-intellect is going to be perfect, but that morality IS universal. The big issue here isn't that statement; it's the fact that we exist inside said universe and can't know everything. If we could know everything in the universe, we would have to exist outside it, which implies that there is more universe to exist in!


Point the second. Sorry, I failed to point out in my OP that the 'infinite transistors' bit was a poke at so-called 'quantum processors,' NOT the organic processors (read: carbon-based transistors; carbon is extremely resistant to heat, silicon is... well... not). My bad, heh. And I wasn't referring to the brain when I said organic processors, but rather to new transistors based on organic molecules instead of the good old silicon-based ones we have now. Imagine the power of a microchip where each transistor was a molecule of only 10-15 atoms. Even a huge molecule, say the size of DNA, would be a major upgrade over what we have now. As it is, Moore's law will eventually break down, due to the fact that we can only make a silicon transistor so small before its own action burns it up, and I'm not even gonna get into Planck's constant here, as I tend to be long-winded as it is!
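
(The "only so small before it burns up" point can be eyeballed with simple arithmetic. A quick Python sketch, assuming a ~45 nm process in 2010 and a halving of feature size every two years; both figures are assumptions, not from the post.)
Code:
# Why silicon scaling has a hard floor: halve an assumed feature size
# every ~2 years until the next step would be smaller than a silicon
# atom (~0.2 nm). Starting node and cadence are assumptions, not data.

feature_nm = 45.0   # assumed 2010-era process node, in nanometres
year = 2010
ATOM_NM = 0.2       # rough diameter of a silicon atom

while feature_nm / 2 > ATOM_NM:
    feature_nm /= 2
    year += 2
    print(f"{year}: ~{feature_nm:.2f} nm")

print(f"Around {year + 2}, the next halving would be smaller than an atom.")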

EDIT: I also wanted to add that morality is NOT a concept created to ensure social stability, but rather an analysis of least-loss physical principles. I'm trying REALLY hard right now to find a (rather wordy) quote that EXACTLY sums up what I'm trying (and failing) to say...

EDIT 2: Can't find the quote :(   Read the trilogy "The Golden Age" by John C. Wright, specifically book 3, "The Golden Transcendence." After Phaethon meets his father at the Solar Orbital Array, while preparing to enter the Sun, the group discusses objective morality. Phaethon presents the most exacting definition of the subject I have ever seen. If someone owns the book and can post the (admittedly multi-page) quote, please do.
« Last Edit: May 28, 2010, 10:49:48 pm by Corbald »