I don't think you get what I'm talking about with regard to network topology, btw.
How do you actually think neural networks operate? Do you know anything about network topology?
"cleverbot" is not a neural network - it's just a plain old keyword search thingy that spits out pre-written replies, and the usual way you do what you're talking about with letter frequency is called a Markov Chain. Neither of those have anything to do with neural networks. So those examples are no good as examples.
That's exactly what I was talking about - "neural network" refers to a specific technology, not Cleverbot and Markov chains. Those aren't even built with neural networks.
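Just to make the distinction concrete, here's a minimal sketch of a letter-level Markov chain (my own toy names, not any library): it's nothing but counting which letter follows which, then sampling by frequency. No network anywhere.

```python
import random
from collections import defaultdict

def train_markov(text):
    # Count, for each letter, which letters follow it and how often.
    counts = defaultdict(lambda: defaultdict(int))
    for a, b in zip(text, text[1:]):
        counts[a][b] += 1
    return counts

def next_letter(counts, letter, rng=random.Random(0)):
    # Sample the next letter in proportion to observed frequency.
    followers = counts[letter]
    letters = list(followers)
    weights = [followers[c] for c in letters]
    return rng.choices(letters, weights=weights)[0]

counts = train_markov("the quick brown fox jumps over the lazy dog")
print(next_letter(counts, "t"))  # 'h' - the only letter seen after 't' in this corpus
```

That's the whole trick behind letter-frequency text generators: a lookup table of counts, no learning in any neural sense.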
https://en.wikipedia.org/wiki/Deep_learning#Deep_neural_networks

"A deep neural network (DNN) is an ANN with multiple hidden layers between the input and output layers. Similar to shallow ANNs, DNNs can model complex non-linear relationships."
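That definition really is this small. A toy sketch (arbitrary sizes, pure Python, no ML library) of what "multiple hidden layers" amounts to:

```python
import math, random

random.seed(0)

def layer(inputs, weights):
    # One fully-connected layer: weighted sum per output node, then tanh.
    return [math.tanh(sum(w * x for w, x in zip(row, inputs))) for row in weights]

def make_weights(n_in, n_out):
    return [[random.uniform(-1, 1) for _ in range(n_in)] for _ in range(n_out)]

# "Deep" just means more than one hidden layer stacked; nothing else changes.
W1 = make_weights(4, 8)   # input -> hidden 1
W2 = make_weights(8, 8)   # hidden 1 -> hidden 2 (the extra layer that makes it "deep")
W3 = make_weights(8, 2)   # hidden 2 -> output

def forward(x):
    return layer(layer(layer(x, W1), W2), W3)

out = forward([0.1, 0.2, 0.3, 0.4])
print(len(out))  # 2 output values
```

Signals flow strictly one way, input to output. Adding W2 is the entire difference between "shallow" and "deep" here.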
That's it - that's the entire definition of "deep learning". Traditional neural networks only had one hidden layer of processing nodes; "deep learning" is about working out how to train networks with multiple layers. They're still entirely one-way signal-processing pipelines, and they depend on outside code to do the actual learning. The "learning" part isn't part of the network itself, any more than it was for the one-hidden-layer networks. They have no memory, no state, and no feedback loops inside the network. Nobody knows how to design networks with those yet - e.g. nobody knows how to make a "living" network in which signals propagate around and the network itself holds memory and state.
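A toy sketch of that separation (a hypothetical one-weight "network", not any real library): the network is a pure function of its weights and input, and all the "learning" happens in an outer loop the network knows nothing about.

```python
# The entire "network": one weight, output = w * x. A pure function --
# calling it changes nothing inside it, no matter what was fed before.
w = 0.5

def forward(x):
    return w * x

# The "learning" lives here, *outside* the network: measure error, nudge w.
def train(target_fn, steps=200, lr=0.1):
    global w
    for _ in range(steps):
        x = 1.0
        error = forward(x) - target_fn(x)
        w -= lr * error * x  # gradient step on squared error

train(lambda x: 3.0 * x)
print(forward(1.0))  # converges toward 3.0
```

Note that feeding the trained net a "sequence" of inputs leaves it completely unchanged; only the external train() loop ever modifies it. That's the sense in which the learning is not part of the network.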
That's why I'm saying it's become a buzzword that people treat as a "magic box", when in fact the topologies aren't any more complex than before. They've just thrown in more layers, added more cells, and poured more "big data" into the same old dumb network designs, purely because we have more processing power and it's cheaper to scale up a stupid network than to design a clever one.
Back to your example with letter inputs: you could use a neural network to learn a mapping from one letter to the next, but the network, even a "deep learning" one, has no state - it has no such thing as memory. So it cannot say "the last letter fed to me was a Q and this one is a U" and react differently because of that. All it knows is that it was hard-wired to respond to "Q" with one output and to "U" with a different output. So no, you can't get a meaningful word-processing NN by feeding it a single letter at a time. All you could teach that network is "T is followed by H", and then it will always follow T with H. The network has no state and cannot remember what order it was taught the sequence in, so it has no information about that whatsoever. If you want a neural network to do something more advanced than simply spitting out "H" every time it sees "T", you need to hand-design it to do exactly that, and it will almost certainly not be able to do any more than what you explicitly designed it to do. You seem to view NNs as a "magic box" that you feed a stream of letters into and it somehow makes sense of them. It just doesn't work like that.
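You can see the T→H point directly in a toy stateless net (assumed setup of my own: one-hot letters in, delta-rule updates, no library): once trained, its answer for "t" is fixed, with no way to depend on what was fed before.

```python
# A stateless one-layer net trained on letter pairs: one-hot letter in,
# one-hot "next letter" out, delta-rule weight updates.
ALPHABET = "abcdefghijklmnopqrstuvwxyz "
N = len(ALPHABET)

def one_hot(c):
    v = [0.0] * N
    v[ALPHABET.index(c)] = 1.0
    return v

W = [[0.0] * N for _ in range(N)]  # W[i][j]: weight from input letter i to output letter j

def forward(c):
    # With a one-hot input, the output is just row i of W. Nothing is
    # stored between calls -- the net cannot see the previous letter.
    return W[ALPHABET.index(c)]

def train(text, lr=0.5, epochs=50):
    for _ in range(epochs):
        for a, b in zip(text, text[1:]):
            i, out, tgt = ALPHABET.index(a), forward(a), one_hot(b)
            for j in range(N):
                W[i][j] += lr * (tgt[j] - out[j])  # delta rule

train("this is the thing")
predict = lambda c: ALPHABET[max(range(N), key=lambda j: forward(c)[j])]

# No matter what came before, 't' always maps to the same output:
print(predict("t"))  # 'h', every time -- the net cannot react to context
```

Feed it "q" first, or nothing at all - predict("t") is identical, because the only thing that exists between calls is the fixed weight table.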