A rather concerning thought experiment. Suppose that sentience does have a mathematical model, i.e. can be programmed. Suppose, then, that the steps required to produce sentience are written out on a piece of paper, and an individual proceeds to follow each step by hand. Fundamentally, this is no different from a computer program performing the same series of steps in place of this individual. If we agree that the computer following these steps results in sentience, does the piece of paper, coupled with someone willing to work through each step, produce a sentience of its own? If not, what is the difference between an individual performing each step on a piece of paper and a computer processing each step on transistors and memory storage devices?
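To make "following the steps" concrete, here is a minimal sketch (my illustration, not part of the original argument): a tiny Turing-machine interpreter whose instruction table a patient person could follow with pencil and paper exactly as a CPU would. The machine and its encoding are invented for the example; it merely increments a binary number, but nothing about the procedure is specific to silicon.

```python
# A minimal sketch: the same instruction table can be followed by a
# person with paper or executed by a CPU. Names/encoding are invented
# for illustration.

def run(table, tape, state="start", head=0, blank="0"):
    """Follow (state, symbol) -> (write, move, next_state) rules until 'halt'."""
    tape = dict(enumerate(tape))          # sparse tape: position -> symbol
    while state != "halt":
        symbol = tape.get(head, blank)    # read the cell under the head
        write, move, state = table[(state, symbol)]
        tape[head] = write                # write the new symbol
        head += 1 if move == "R" else -1  # move the head
    return [tape[i] for i in sorted(tape)]

# Increment a binary number written least-significant-bit first.
INC = {
    ("start", "0"): ("1", "R", "halt"),   # flip 0 -> 1, done
    ("start", "1"): ("0", "R", "start"),  # carry: flip 1 -> 0, continue
}

print(run(INC, "110"))  # 3 + 1 = 4, i.e. ['0', '0', '1'] LSB-first
```

Whether executed here or transcribed cell by cell onto paper, the sequence of tape states is identical, which is exactly the force of the thought experiment.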
The problem with this is that we are building a mathematical model not of sentience itself but of the behavior we expect a sentient being to exhibit; there is no reason to think there are not multiple ways to arrive at that behavior, only one of which actually involves the existence of a sentience. We then have no way of knowing whether the means we employ toward our 'ends' is the right means, because we are reverse engineering the procedure, as it were. The problem with true AI is, as ever, that it is essentially impossible to tell whether you have actually succeeded in creating it.
If you can't tell whether anything is sentient or not, what even is sentience? Imagine that Omega* came down and told you that a certain thing was sentient; if this would not change your expectations about that thing, not even a little, then the concept is useless. Otherwise, we can tell whether things are sentient, but perhaps not with absolute certainty. (Principle: make your beliefs pay rent in anticipated experience.)
*Omega is a rhetorical/explanatory/conceptual tool that helps construct a thought experiment where the muddy vagueness of the world can be cleared aside to see how our thoughts work when unobscured by uncertainty. For the thought experiment, you trust that what Omega says is definitely true. This is like "pinning down" part of a model to better understand how it all functions. It's also sort of like a control in a scientific experiment.
Ah, that paper-and-pencil framing is a good way of putting it. A more abstract and vague thought experiment along these lines was what pushed me toward omnirealism: either all computable minds are real, or no minds are real, or [some weird thing that says you realize a mind by writing down symbols but not by thinking about the model] (but the model is only present in some interconnected neurons; paper is part of my extended brain, so this possibility is invalid), or [some weird thing that says you realize a mind when you understand how it works], or [some weird thing that says you realize a mind not by understanding it, but by predicting how it works]. I prefer the first, because I don't see an important difference between the mathematical structure being known and the structure being run. (There are ways to get the output without directly running things. If I use abstractions to determine what a model-mind does, rather than going variable-by-variable, I don't think the mind-ness has disappeared. And if you can make a mind real just by knowing the mathematical model that describes how it works... then we have to define "knowledge," because otherwise I could just make a massive random file and say "statistically, at least one portion of this file would produce a mind if run with one of the nearly infinitely many possible interpretation systems." Or if I make it even larger, the same can be said for any given language. Heck, a rock has information. Maybe the rock's atoms, when analyzed and put through an interpretation system, make a mind. That's just ridiculous. We've effectively said that all minds are real anyway, but in a weird and roundabout way.)
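The random-file worry can be made concrete. A minimal sketch (my illustration, assuming we place no constraints at all on what counts as an "interpretation system"): for any fixed program, there is a trivial interpreter under which every bytestring "encodes" that program, so unless the definition of knowledge charges the interpreter's complexity against the encoding, everything contains every mind.

```python
# Sketch of why "some interpretation system" does all the work: for any
# fixed program P, a trivial "interpreter" maps *any* data to P's behavior.
# pretend_mind and make_interpreter are hypothetical names for this example.

import os

def make_interpreter(target_program):
    """Return an 'interpretation system' under which every bytestring
    'encodes' target_program - it simply ignores its input."""
    def interpret(data: bytes):
        return target_program()
    return interpret

def pretend_mind():
    return "behavior of the model-mind"

interp = make_interpreter(pretend_mind)
random_file = os.urandom(1_000_000)  # "a massive random file"
rock = b"\x00" * 64                  # "a rock's atoms", crudely

# Under this interpreter, the random file and the rock both "contain" the mind.
assert interp(random_file) == interp(rock) == "behavior of the model-mind"
```

The work the file appears to do has been smuggled into the interpreter, which is exactly why "knowledge" needs a definition before the rock argument can be blocked.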
(This assumes that the substrate is not inherently important to the mind - I am run on a lump of sparky flesh, you are run on a lump of sparky silicon, but that doesn't necessarily make one of us not a person. This seems obvious to me, but is probably a controversial statement.)
Well, fundamentally the substrate doesn't really matter - that's the upshot of the Church-Turing thesis, after all. If it works on a computer, it works on pencil and paper. In that regard, if sentience is Turing-computable, then it must be true that sentience can exist on any medium. So long as there is something that retains information, and something to act upon it by explicit instruction, there can be sentience.
There is a catch, though. Unbounded nondeterminism - the notion of a nondeterministic process whose time of termination is unbounded - can arise in concurrent systems. Under clever interpretations, unbounded nondeterminism can be considered hypercomputational: any actor in such a system has an unbounded runtime and a nondeterministic outcome, so the end result of such a system remains unknowable. If sentience requires such unbounded nondeterminism, then such a system would no longer be bound by the Church-Turing thesis, and would not need to be replicable on pen and paper. We already know that the human brain is highly concurrent, so it's plausible that sentience requires this unbounded nondeterminism, arising through concurrency, in order to exist. It wouldn't mean that we cannot produce sentience on a computer - we've already built systems with unbounded nondeterminism - but its existence on a computer would not necessarily imply that it can exist on simpler media. All without violating any existing proofs.
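For a sense of what unbounded nondeterminism looks like in practice, here's a minimal sketch of the classic actor-model example (my rendering of it, not a claim about brains): a concurrent program that is guaranteed to terminate, yet for which no finite bound covers all possible results.

```python
# Classic unbounded-nondeterminism example: one thread counts while
# another signals it to stop. Termination is guaranteed, but no bound
# derivable from the program text covers every possible return value.

import threading

def unbounded_counter():
    count = 0
    stop = threading.Event()

    def counting():
        nonlocal count
        while not stop.is_set():  # loop until the stop signal arrives
            count += 1

    t = threading.Thread(target=counting)
    t.start()
    stop.set()      # the signal is eventually seen, with no fixed delay
    t.join()        # guaranteed to terminate...
    return count    # ...returning a finite but unbounded, scheduler-dependent value

print(unbounded_counter())  # some finite number; reruns may differ
```

Under a fair scheduler, any natural number is a possible output, which is precisely the property a sequential pen-and-paper trace cannot exhibit: the person with the pencil must pick one interleaving.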
So it is plausible that sentience can exist in a form that can run on a computer or in a brain, but not with pen and paper. It would simply require a great deal of concurrency.
I don't understand how a (non-quantum?) computer could do anything that I couldn't do with paper and pencil, given arbitrarily (but finitely) more time and resources.
Also, I don't see how unbounded nondeterminism applies to human beings. Unless quantum uncertainty plays an important role in human cognition, we're probably just deterministic (albeit very chaotic), right? And what does the time of termination even mean for a human mind?
But why would it be bad to simulate humans in such a situation, when it's fine to script a story where that happens?
The main question is whether hypotheticals are morally real, then. And keep in mind that (as far as I know) we can never rule out that we are living in a simulation ourselves.
We're certainly living in a hypothetical universe that is being simulated by an infinite number of hypothetical computers. But ours is special, as I'll demonstrate.
I'm now imagining a universe like XKCD's man-with-rocks, except the person is a woman. Both these universes are now simulating our universe. There are infinite permutations available, all simulating our universe.
In fact there are universes simulating every universe, including permutations of our own. Like in the comic, where the man misplaces a rock - permutations like that, including the moon disappearing or the strong nuclear force ceasing.
If our universe is merely one of these infinite simulations, then the odds of physics continuing to work are statistically near zero.
If all conceivable, hypothetical universes had consciousness like you or I, then statistically speaking we should be experiencing total chaos. But we aren't.
Therefore, it's morally safe to imagine hypothetical universes, since the beings within are astronomically unlikely to have consciousness.
Even if they are otherwise copies, or near-copies, of us. Even if they react as we would, and it's natural to feel empathy for them.
We could definitely be brains in jars, but I reject the idea that simulation can create consciousness.
(This "proof" from my butt sounds familiar, I'm probably remembering something I read... Probably from some sci-fi growing up. I'd like to know what it's called, if anyone recognizes it. I really should study actual philosophy more.)
Or, alternatively, there could be somebody simulating universes who doesn't misplace bits often?
Or apply the anthropic principle. Nobody ever experiences ceasing-to-exist.