It's actually a pretty important book overall, since it's AFAIK the first detailed written description of a neural network and its principles of operation.
There was somewhat of a rebuttal a few years later, a very mathematical book called "Perceptrons" by Minsky and Papert (1969), which was also pretty important since it delineated the weaknesses of perceptrons. Roughly: for difficult functions like "parity" (output 1 when an odd number of input pixels are on, 0 otherwise), or for learning invariant representations of images like letters and geometric shapes, the required weights tend to grow exponentially with the number of inputs (pixels, in the case of images). That rapidly blows up the memory needed to store them to an unreasonable amount, or alternatively means the network takes an exponentially increasing amount of time to learn.
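Just to make the limitation concrete, here's a rough toy sketch (my own, not anything from the book): a single threshold unit trained with the classic perceptron rule learns AND fine, but can never get 2-input parity (XOR) right, because no single linear threshold separates the two classes.

    # Toy illustration: single-layer perceptron on AND vs. 2-bit parity (XOR).
    def train_perceptron(samples, epochs=100, lr=0.1):
        """Train one threshold unit (w.x + b > 0) with the perceptron rule."""
        w, b = [0.0, 0.0], 0.0
        for _ in range(epochs):
            for x, target in samples:
                pred = 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0
                err = target - pred
                w = [w[0] + lr * err * x[0], w[1] + lr * err * x[1]]
                b += lr * err
        return w, b

    def accuracy(w, b, samples):
        return sum(
            (1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0) == t
            for x, t in samples
        ) / len(samples)

    inputs = [(0, 0), (0, 1), (1, 0), (1, 1)]
    and_set = [(x, int(x[0] and x[1])) for x in inputs]   # linearly separable
    xor_set = [(x, int(x[0] != x[1])) for x in inputs]    # 2-bit parity, not separable

    for name, data in [("AND", and_set), ("XOR/parity", xor_set)]:
        w, b = train_perceptron(data)
        # AND reaches 1.0; XOR/parity never can, whatever the weights end up being.
        print(name, "accuracy:", accuracy(w, b, data))

The book's actual results are about how the required order/size of such units scales with the number of inputs, which this tiny example obviously doesn't capture; it only shows the basic separability problem.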
I'm not even sure those difficulties have been overcome by switching to multi-layered neural networks, since, as I've heard, the results rest on fundamental constraints on any kind of parallel computing system, which all neural networks are by design.
Unfortunately, I don't have the book, and it doesn't seem to be available online, so I don't know whether its results still apply. A shame, since it sounds like it mathematically justifies the same issues you've raised.