Neural networks, genetic algorithms, and the like are what defines the field of AI. What they lack is consciousness. The concept of a singularity doesn't necessarily imply we get a "person in a box" out of it, so asking for one is a red herring.
The main point of the singularity concept is that these sorts of things aren't linear at all. Say you build a machine that can read and semantically understand college-level textbooks. That would be a very useful machine, and building it would take a long time. But once you have it, you can pour all the books in the world into it in almost no time at all. That is discontinuous growth in what the machine is capable of. The first such machine will have massive flow-on effects through every field of human thought.
But asking why AI would be useful, you might as well ask why any human smarter than average is useful. Think about possible future AIs. Is the potential limit of AI smartness less than an average human, equal to an average human, or greater than an average human? There's nothing inherently special about the average intelligence of a human, so the limit of machine intelligence is very unlikely to fall exactly there; it is likely to be vastly different from the limit for a human. And given that it's potentially possible to build a computer with many more components than a human brain, the limit is probably above the normal limit for a human. So asking what good AI will do is like asking what the point of smarter-than-average people is, when average people are perfectly sufficient.
No, you might not as well, because if you point at a smart person and a 'dumb' person, put them in a field that still has some potential left for innovation, and ask them to get to work, each can and will have moments of insight that come not simply from being smart (whatever that means; ask a comp-sci student to design a bridge some time), but from the experiences each person has had in learning the material. Intelligence is not something that can be measured on a single scale. We are capable of intelligence because we can form shortcuts and discard information; we need cognitive blind spots in order to function and to produce new shortcuts. Our intelligence depends on our social connections as much as on our ability to analyze the world around us, and that is what makes it suited to solving human problems with any degree of rapid intuition.
It's not so simple as 'feed knowledge in slot A, get solution B.' For GA and NN designs, someone has to judge the solution against reality. The famous tank-lighting anecdote (a NN learns that photos shot at a certain time of day contain tanks, rather than visually recognizing any part of the camouflaged tanks themselves) illustrates a real problem for designers of neural networks: signal inflow still, for the most part, has to be designed around a set of known parameters, because neural networks are incredibly susceptible to situational bias. Signal output has to fit physical reality. Now start talking about taking unregulated input from human text, which may contain humor or plain inaccuracy, plus the whole gamut of actions available to solve a problem in the real world, and the situation goes too far out of control for serious design, beyond basic optimizations like the evolved antenna or problems with perfectly known parameters and outcomes (of which there are very few).
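The tank-lighting failure mode is easy to reproduce in miniature. Here's a toy sketch with invented data (no real tank dataset involved): a bare-bones perceptron trained on examples where a "lighting" feature happens to track the label perfectly, alongside a weak genuine signal. On held-out data where lighting no longer correlates with the label, accuracy drops, because the learner latched onto the confound rather than the real feature.

```python
import random

random.seed(0)

def make_example(label, confounded):
    # Feature 0: weak, genuine "tank" signal, buried in noise.
    tank_signal = 0.5 * label + random.gauss(0, 1)
    # Feature 1: lighting. In the confounded training set it tracks the
    # label perfectly; in the test set it is independent of the label.
    lighting = label if confounded else random.randint(0, 1)
    return [tank_signal, lighting], label

def train_perceptron(data, epochs=20, lr=0.1):
    # Classic perceptron update rule: nudge weights toward misclassified
    # examples. The perfectly predictive lighting feature dominates.
    w, b = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for x, y in data:
            pred = 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0
            err = y - pred
            w[0] += lr * err * x[0]
            w[1] += lr * err * x[1]
            b += lr * err
    return w, b

def accuracy(w, b, data):
    hits = sum((1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0) == y
               for x, y in data)
    return hits / len(data)

train = [make_example(random.randint(0, 1), confounded=True) for _ in range(500)]
test = [make_example(random.randint(0, 1), confounded=False) for _ in range(500)]

w, b = train_perceptron(train)
print("train accuracy:", accuracy(w, b, train))   # near-perfect
print("test accuracy:", accuracy(w, b, test))     # noticeably worse
```

The point isn't the perceptron itself; it's that nothing in the training signal distinguishes "lighting" from "tank", so the learner has no reason to prefer the feature a human would consider real. Only a designer checking the solution against reality catches it.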
The reason we can deal with it isn't even consciousness: it's cultural context, plus the decades of training we've had from comparatively, incredibly efficient neural circuitry especially well suited to social learning. A neural network without that is going to be very, very confused by your barrage of obtuse chemical-engineering texts. I think it does imply a person-in-a-box (or persons, as with angle's concept) situation. I would ask for that if I were still working in AI and you told me to make something that could take every college major and make sense of it all. And I'd still think you were insane, because the AI probably would not want to deal with that many linear algebra jokes.