I have noticed there are a few transhumanists/singularitarians on these forums, and since most people don't have any idea what those are, I have decided to create a thread to sum up the general concepts behind these ideas. It's a bit long (800+ words), and you may actually learn something; YOU HAVE BEEN WARNED!
The main concept behind the Technological Singularity is that once artificial intelligences surpass human intelligence, they will be able to improve upon their own code at an ever-faster rate, leading to an ever more rapid increase in the intelligence of these AI.
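To make that "ever faster" idea concrete, here is a deliberately crude toy model (my own sketch in Python, not a result from the literature; the constant k and the runaway threshold are arbitrary assumptions). Plain exponential growth, dI/dt = k*I, never "takes off" in finite time, while self-reinforcing growth, dI/dt = k*I^2, where every gain in capability speeds up further gains, hits a runaway point quite abruptly:

# Toy model of recursive self-improvement (illustrative assumptions only;
# k and the 1e12 "runaway" threshold are arbitrary).

def simulate(growth, years, steps_per_year=1000, i0=1.0):
    """Euler-integrate dI/dt = growth(I); stop early if capability runs away."""
    dt = 1.0 / steps_per_year
    capability = i0
    for step in range(int(years * steps_per_year)):
        capability += growth(capability) * dt
        if capability > 1e12:            # treat this as "runaway reached"
            return step * dt, capability
    return years, capability

k = 0.1
t_exp, cap_exp = simulate(lambda i: k * i, years=100)      # ordinary exponential
t_rsi, cap_rsi = simulate(lambda i: k * i * i, years=100)  # gains feed back

print(f"plain exponential: after {t_exp:.0f} years, capability is ~{cap_exp:.3g}")
print(f"self-improving:    runaway reached around year {t_rsi:.1f}")

The point of the sketch is only that "improvement feeds back into the rate of improvement" is mathematically different from ordinary exponential progress; it is the feedback loop, not raw speed, that defines the Singularity scenario.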
Assumption 1: The human mind can be emulated within computer software. This occurs either through outright simulation (much less likely) or through programming techniques inspired by the brain's structure (very likely).
Evidence: There is, as yet, no theoretical reason why it would not be possible to emulate the human brain. In fact, many AI systems have already been created which emulate various parts of the brain to perform specific tasks.
Assumption 2: Technology will continue to advance at a rate that is, at the very least, only slightly below linear for the foreseeable future.
Evidence: While the exact curve of the growth of technology can be debated (with estimates ranging from slightly diminishing returns all the way up to above-exponential growth, depending on the estimator), there is no reason to think technological progress will slow considerably for at least another century.
Assumption 3: The most advanced general AIs will be relatively similar to humans in thought patterns, ethics, morals, and emotions.
Evidence: If we create AI by reverse-engineering the structure of the human brain, the resulting AI would have a mind similar to our own.
These three assumptions are really all it takes for a Singularity event to occur. The first ensures the ability to create general AI. The variations within the second determine the timing of the Singularity event. If, as some suggest, technological growth is exponential, we may see a Singularity event within the next 30 to 40 years; if it is linear, one would likely occur before the end of this century. Something to keep in mind: if a Singularity is indeed to occur, it will happen within the lifetimes of some of us who are alive today. The third assumption ensures that a Singularity event would actually take off. An advanced general AI alone will not spark a Singularity; it must also share the curiosity about discovery and intelligence that humans have in order to be motivated to improve its own code. For example, a general AI meant to serve as a military drone would almost certainly not start a Singularity. As such, Skynet scenarios are relatively unlikely, given the vastly different goals such an AI would have.
The Hardware:
Moore's Law pretty much sums up this section. The power of computers doubles approximately once every 2 years, with even shorter doubling times for some components. Estimates of the computational power of the human brain vary, but even anti-singularity websites give estimates well within what Moore's Law puts in reach of a supercomputer within 2 decades. Singularitarian websites usually give lower estimates, predicting that present-day supercomputers are already capable of running human-level AI and that, within a decade, $1000 consumer computers will be as well. In either case, the difference amounts to a relatively short span of time.
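To put rough numbers on that (the figures below are ballpark assumptions on my part, not established values: I take a hypothetical 10^15 ops/sec machine as the starting point and use 10^16 and 10^19 ops/sec as stand-ins for the optimistic and pessimistic brain estimates):

# Rough Moore's Law arithmetic (ballpark, hypothetical figures; real
# estimates of the brain's capacity span several orders of magnitude).

import math

DOUBLING_YEARS = 2.0           # Moore's Law doubling period
current_ops = 1e15             # assumed present-day supercomputer, ops/sec

# Stand-in ballpark estimates for the brain's processing, in ops/sec:
brain_estimates = {
    "low estimate (singularitarian-leaning)": 1e16,
    "high estimate (skeptic-leaning)":        1e19,
}

for label, target in brain_estimates.items():
    shortfall = target / current_ops               # how far short we are
    years = DOUBLING_YEARS * math.log2(shortfall)  # doublings needed * period
    print(f"{label}: {shortfall:.0e}x short -> ~{years:.0f} years")

Even with the figures chosen pessimistically, biennial doubling closes a ten-thousand-fold gap in under three decades, which is why the two camps' estimates differ by decades rather than centuries.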
The Software:
The hardware required for a Singularity event will be around for several decades before the neurological knowledge needed to create a general AI becomes available. The brain remains somewhat mysterious because it is incredibly difficult to study, especially while it is still running. However, the technology to study the brain at the level of individual neurons is becoming available, and projects to map brains have already begun. The time frame in which these efforts are completed will probably be one of the largest determinants of when a Singularity occurs.
Transhumanism 101:
At this point, I will shift away from explaining the Singularity and zoom out a bit from Singularitarianism to the more general philosophy of transhumanism. Transhumanism, as best I can tell, is a diverse subset of secular humanism which holds that the benefits of technology far outweigh the risks and that technology should be used to better the human condition. In addition to Singularitarianism, there are many other subsets of transhumanism, many of which partially overlap. I won't go into them all here, since the Wikipedia pages describe them much better, but the views of these subsets range in scope from reducing poverty by bringing technology to poor regions to goals as lofty as achieving immortality through technological means.
The Effects:
A Technological Singularity would be a massively world-altering event. Technological progress would be made at a rate the likes of which has never before been seen. Within several years of the start of the Singularity, society would be radically different. Advances in technology, nanotechnology in particular, would likely lead to a post-scarcity economy powered by swarms of resource-gathering nanobots. Beyond those results, the effects of a Singularity are unclear. Many guesses can be and have been made, but I have already typed a long enough essay for now.
I probably could have gone on for several thousand words, but I decided against it to avoid over-explaining. I tried to touch on various speculations without getting too deep into any one in particular, and tried to avoid describing future events which may not actually occur. As such, this is merely a summary of the idea of a Technological Singularity, not an in-depth paper on it. For more info, starting with the Wikipedia pages is always a good idea:
http://en.wikipedia.org/wiki/Artificial_intelligence
http://en.wikipedia.org/wiki/Transhumanism
http://en.wikipedia.org/wiki/Singularitarianism

Or for an optimistic view, you could try Ray Kurzweil's book "The Singularity is Near."