As I see it, we have two paths to a singularity within our lifetimes. Both are possible, IMO, but the first is more likely.
We've already begun augmenting our intelligence on a distributed scale. Wikipedia, Google, and the like make information retrieval hundreds of times faster (and if you don't believe me, you're free to spend hours driving to the library, looking up sources of sources in unwieldy catalogs, and writing pages of notes that could be copied in an instant on a computer). The information might sometimes be inaccurate, which I think is the most common argument against this, but in many areas, and with training, inaccuracy can be detected by common sense and trial and error. Right now there is no formalized effort to make this symbiosis a more effective research tool. If the scientific community took more lessons from programming tutorials and hacker communities, I think we'd see some incredible things happen. By hackers I mean groups of people who educate themselves without regard to 'grade level' or qualifications, not malicious script kiddies. This is the first option.
If an eighth grader wants to learn how chemistry works and becomes interested in organic chemistry, I think it's very worthwhile for there to be a community and resources available where they can ask specific questions and get not only the answers but explanations of the parts they don't understand - even though they might be foolish enough to try to synthesize nitroglycerin, or grey goo. I recently stumbled across a group of people doing exactly this with 3D printers (the RepRap community and others, if anyone's interested). It's not commercially practical by any means, but people are doing it anyway. We vastly underestimate the value of our existing net intellectual strengths by elevating relatively minor talents high above people who could still do useful research if given the chance and the education.
The second option is AI. AI is funny to me. People seem to leap immediately to the idea of a computer that can take an encyclopedia and make sense of it out of context. Nonsense. Intelligence is a result, not a beginning. It only happens because it needs to happen. Granted, it happens because there is an evolved capacity for it, but there's something most people miss - our brains are not substantially different from those of people born hundreds of years ago.
Think about that. Those brains were capable of creating modern information theory and quantum physics, of leading multinational businesses and developing modern processors. We say that we're more advanced in science, and therefore more intelligent. Yet when we talk about computers making themselves 'more intelligent', we talk about them optimizing processor construction, software execution, and so on - raw speed, when raw capacity was never the bottleneck. Most people do not learn as much as they could, because they specialize and don't have the time to study all their lives.
I don't think we'll see AI improve to the level of a technological singularity until we acknowledge the role that social interaction plays in developing intelligence. The most magnificent genius in history would have been nothing without growing up in a culture that nourished that intelligence. That's why an AI needs parents, and friends, and even teachers. It cannot teach itself. And for parents and friends to be relevant, an AI needs to value the things they provide.
Which means an AI needs to be like us, emotionally. Essentially, we're developing models of intelligence in the wrong direction. We need an AE - artificial emotion - first. And that's fucking terrifying to most people: we don't like to think about an AI that's afraid of the dark, or one that needs to eat. We might tolerate one that wants a hug now and again, but what happens when it doesn't get enough of them? There are enough emotionally maladjusted humans that this prospect scares me, even though I've been looking at the problem and working towards solving it for two years now. So it isn't going to happen unless some hacker does it without public support, which means it will probably be designed to hide itself and prevent intrusion into its operation. And whether or not it becomes harmful, it will probably be seen as harmful because of that tendency. The only solution I see is for some organization (government, commercial, military, non-profit, it doesn't matter) to bite the bullet and take this risk.