I do not post in this subforum. In fact, I have gone out of my way to make it considerably difficult for me to read this subforum at all. However, I recently began to slip, and worked around my own efforts to at least keep up-to-date with some of the news involving Ukraine.
Instead I find myself reading this.
This will be my first, and last, post in this subforum. If you have a specific response as to my sources or the veracity of my argument, you can DM me directly if you wish for me to reply.
First, to dispense with this:
I'm not saying I'm right... but https://www.sciencetimes.com/articles/36093/20220214/slightly-conscious-ai-existing-today-according-openai-expert.htm
To make a few points clear:
- Consciousness as a term has no scientific merit—
"The word consciousness, as we use it in daily conversation, is similar to the layman's heat: it conflates multiple meanings that cause considerable confusion."— Stanislaus Dehaene, Consciousness and the Brain, 2014
Generally, research into the field of consciousness specifically studies conscious access: the point at which perceived information actually becomes available to the subject, such that it can subsequently enter memory and be reported on. This is done because other uses of the term are, simply put, too subjective to support scientific inquiry.
- Hence, a term such as "slightly conscious" is completely worthless to analyze, because it conveys no meaning whatsoever. What is conscious, in this case? What does it mean to be "slightly" conscious? If I sever the leg of a cricket and it continues to twitch, does that mean the leg is "slightly conscious"? There is no clear answer because there is not meant to be one; the goalposts change as readily as the claim does.
- If we want to use something even remotely empirical for studying the notion of consciousness (fraught as it may be), then our best bet would be the Lovelace test. Naturally this remains outside the realm of pure objectivity, but it's still useful to have around.
- The person who made this remark was describing their current project, some flavor of GPT. GPT is an artificial neural network, which is to say a very lengthy linear-algebra expression that, through iterative methods, balances internal weight values to best fit an enormous and multifaceted body of training input. Specifically, it follows what is known as a "transformer" model, which is a fancy way of handling sequence-to-sequence transformations by analyzing a full sequence at once, using an "attention" (read: learned weighting) component that scores how strongly each word in the sequence, given the words around it, should influence every other word when producing the output (a minimal sketch of this operation appears after this list).
- Regarding GPT and all possible future iterations of it: it does not pass the Lovelace test. What does that mean, specifically? A few things, but most notably (my own rephrasing from the earlier link) that it is entirely deterministic. That is to say, closer to the actual language of the Lovelace test itself, a researcher with full understanding of the neural network underlying the GPT model and its particular training data can predict, with full confidence, the output for any given input (see the determinism sketch after this list).
- If you would prefer, we could instead consider the neural network model under the umbrella of the currently prevailing frameworks for understanding consciousness: the global workspace model (Baars, 1988) and the global neuronal workspace (Dehaene, 2011). Now, you might argue that there is some degree of overlap between the GPT model and global workspace theory. Do not be mistaken, however; global workspace theory specifically concerns an active system, while GPT is inherently static. What do I mean by this? An active system continues to transform based on a perpetual stream of inputs, outputs, and feedback from those outputs. GPT, following its pretraining (it's the P in GPT), has its weights set in stone; nothing about them changes after that point, and any change in the nuance of language would require retraining the entire thing. As for the global neuronal workspace, GPT by contrast lacks any means of altering its connection structure whatsoever. In fact, the entire basis of artificial neural networks requires that these connections be static, for it is the weights of these connections that are "learned" through training (that is, a regression is calculated). Make no mistake; GPT and all other artificial neural networks are nothing more than a series of matrices interleaved with fixed nonlinear functions, applied to the input to yield an output (see the frozen-weights sketch after this list). Some of these, like transformer models, feed the output back into the input. Others don't. Either way, these are complicated mathematical functions with no notion of state. In fact, leaving GPT in the same recurrence for too long (that is, having it operate on too long a passage) will degrade the output to the point of worthlessness.
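To make the "attention" remark above concrete, here is a minimal sketch of scaled dot-product attention, the core operation inside a transformer layer. The matrix names and dimensions here are illustrative assumptions on my part, not taken from any actual GPT release:

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Weigh each position's value vector by how strongly its key
    matches every query (the transformer's core operation)."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                  # pairwise query-key affinities
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)   # softmax over each row
    return weights @ V                               # weighted mix of value vectors

# Toy example: 3 tokens, 4-dimensional embeddings, random projections.
rng = np.random.default_rng(0)
X = rng.normal(size=(3, 4))
Wq, Wk, Wv = (rng.normal(size=(4, 4)) for _ in range(3))
out = scaled_dot_product_attention(X @ Wq, X @ Wk, X @ Wv)
print(out.shape)  # (3, 4): one output vector per input token
```

Note that everything here is ordinary linear algebra plus a softmax; the "attention weights" are simply one more matrix computed from the input.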
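And to illustrate the determinism point: below is a toy, hypothetical "frozen model" (the weights and shapes are made up for illustration; this is not any real GPT checkpoint). Once training ends, the weights are constants, the network is a pure function, and anyone holding those weights can predict its output exactly:

```python
import numpy as np

# Hypothetical frozen "language model": after (pre)training, the weights
# are constants, so the whole network is a fixed mathematical function.
rng = np.random.default_rng(42)
W1, W2 = rng.normal(size=(8, 16)), rng.normal(size=(16, 8))  # frozen weights

def frozen_model(x):
    """A deterministic function: matrices plus fixed nonlinearities."""
    return np.maximum(x @ W1, 0) @ W2  # linear -> ReLU -> linear

def greedy_next_token(x):
    """Greedy decoding: always pick the highest-scoring output index."""
    return int(np.argmax(frozen_model(x)))

x = rng.normal(size=8)
# Run twice: identical input, identical weights, identical output.
assert greedy_next_token(x) == greedy_next_token(x)
print("same input, same output, every time:", greedy_next_token(x))
```

Even "sampling" from such a model only adds a pseudorandom number generator, which is itself fully deterministic given its seed.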
Of course, that's not really all that satisfying. So, in detail, let me explain exactly what is going on with computers, consciousness, (artificial) intelligence, and why no one is ever going to program an AI.
To begin, we need to understand exactly what a computer is. For that, we can turn to Searle, who in his paper "Is the Brain a Digital Computer?" (Searle, 2002) worked out that the definition of a computer is observer-relative; it can be applied to practically anything at all. To expand on that beyond his paper, we can say that a computer is a model. That is to say, it is a mathematical object, and not a physical one. There are dynamical systems (explicitly built for this purpose) whose behavior, once we abstract away a few variables such as physical position, closely approximates this powerful mathematical model, but they are not in themselves computers. After all, there is nothing in the model that can predict what happens to a CPU when struck by a hammer. Programs, in this regard, are merely interactions with this model (even without resorting to the Curry-Howard correspondence).
As for consciousness? I'll defer to the papers from Baars and Dehaene, for they provide far more depth than I could on the nuances of current models of consciousness. The point, however, is that they are just that: models. Models that do not necessarily have a 1:1 correspondence to the model of a computer. That is not to say that they cannot be embedded within a computer; far from it. That is what is implied by Turing universality, after all: any computable function can be simulated by a Turing machine. There is no reason to suppose the universe is not computable, so we can keep that assumption around to make our lives easier for the sake of this explanation. However, and this is an important caveat, a Turing machine's ability to simulate any arbitrary computable process does not mean it can do so with any degree of efficiency. That would require the underlying models to be very similar. You can visualize this simply by picturing a version of Conway's Game of Life rigged up to simulate a computer (see the sketch below): it will function, but far slower than a system designed to mimic that computer from the start.
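For the curious, the entire "physics" of the Game of Life is the update rule below (a minimal sketch using numpy; the grid size and glider placement are arbitrary choices of mine). Constructions that make Life emulate a computer assemble logic gates out of patterns like the glider, at the cost of enormous numbers of cells and generations per emulated operation, which is precisely the inefficiency described above:

```python
import numpy as np

def life_step(grid):
    """One generation of Conway's Game of Life on a toroidal grid."""
    # Count the eight neighbors of every cell via wrap-around shifts.
    neighbors = sum(np.roll(np.roll(grid, dy, axis=0), dx, axis=1)
                    for dy in (-1, 0, 1) for dx in (-1, 0, 1)
                    if (dy, dx) != (0, 0))
    # Birth on exactly 3 neighbors; survival on 2 if already alive.
    return (neighbors == 3) | ((neighbors == 2) & (grid == 1))

# A glider: the sort of primitive from which Life "computers" are assembled.
grid = np.zeros((8, 8), dtype=int)
for y, x in [(1, 2), (2, 3), (3, 1), (3, 2), (3, 3)]:
    grid[y, x] = 1
for _ in range(4):  # four generations move the glider one cell diagonally
    grid = life_step(grid).astype(int)
print(grid.sum())   # 5: the glider persists, merely translated
```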
The global neuronal workspace model, in that regard, is the worst-case scenario for Turing simulation. It operates in parallel, has an astronomically large number of states (if a brain has a billion neurons and each neuron can be either on or off, then the brain has 2 to the power of a billion possible states, which dwarfs the roughly 10^80 atoms in the observable universe; see the quick check below), and millions of confounding variables that make basic connectionist models insufficient for adequate simulation. No dynamical system implementing the computer model would ever be able to simulate even a single state transition of the global neuronal workspace in all its glory.
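A quick check of that order-of-magnitude claim, in logarithms (the billion-neuron brain is the hypothetical from the paragraph above; roughly 10^80 atoms is the standard estimate for the observable universe):

```python
import math

# 2**(10**9) has about 10**9 * log10(2) ~ 3 x 10**8 decimal digits,
# while 10**80 atoms is a mere 81-digit number.
digits = 1e9 * math.log10(2)
print(f"2**(10**9) has about {digits:.3g} decimal digits")  # ~3.01e+08
```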
And for clarity, models of consciousness do not necessarily presuppose human consciousness; while Bostrom (Superintelligence, 2014) may muse on the notion of an AI being an "alien consciousness" compared to that of humans, there is no evidence that any sort of entity we would truly call conscious would fit neatly enough into the model of the computer, as described earlier, to actually run. If we defer back to the Lovelace test, the primary criterion is the ability to generate creative works in a way that cannot be "explained away" by someone with absolute knowledge of the system. That is to say, the system cannot be deterministic; it quite literally cannot be 1:1 with the computer model, for in being so it would be absolutely deterministic and therefore fail the Lovelace test. If anything, the Lovelace test can be regarded as exactly that statement: that no model of consciousness can be effectively simulated by a Turing machine.
That is not to say that AI cannot exist. However, we do have to dispense with the usual "stumbling through the woods" flavor of talk from the usual AI prognosticators such as Bostrom and Musk (a metaphor whose very existence, I might add, may be a key indication that the path currently set out is not the right one, if no one in the field can actually conceive of how to get to the destination from where they are). AI, fundamentally, is not a program waiting to be written by some hapless developer. It cannot even exist on the dynamical systems that we refer to as computers. Rather, it is something that would be brought about through research into the exact physical properties of the dynamical systems that we do currently describe as conscious, so as to work out the basic properties necessary to exhibit these features. With that understanding in hand, we would then be able to create hardware that also holds these features, at which point perhaps some notion of "programming" might circle back in, depending on the nature of the hardware. That is the route to AI.