The problem: it is obvious that dwarves are nowhere near as complex as humans. They do not actually "learn" anything, they do not actually "feel" anything, and they do not actually "think" about anything.
They don't even really exist. That is actually the problem: they appear to exist, but they don't. Presumably you could, with enough work, make a dwarf fortress that would create the perfect simulation of people, but in the process of making the illusion, did you actually make the real thing by accident?
Determinism and sentience aren't actually incompatible. For example, it's perfectly possible to have the sense of free will even in a deterministic universe.
e.g. if "you" make a decision, that decision is determined by your previous state. But you are the state. So there's no "external" force "making" you do what you didn't want to do. The confusion comes from the idea that something "external" forced you to act as you did. But it didn't, because you are the deterministic system. Determinism is in fact internal decision-making because the self and the system are not a duality, they're the same thing. There's also nothing special about humans in this. We're just biomachines who have feedback/sentience. There's no need to start creating new pseudo-science physics because we're uncomfortable with the idea that the existing laws of physics might pre-determine what we do. We're just not special enough in the universe to warrant that.
The idea that hooking up a "random" input source gives you free will is wrong. That's not freedom; that's being buffeted uncontrollably by whatever random fluctuations happen to occur. In a sense, that could even be said to be *less* free than being a deterministic being that uses its own state to decide how to act next.
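To make the contrast concrete, here's a toy sketch in Python (every name here is made up for illustration; it's not a model of cognition, just the two kinds of "deciding"):

```python
import random

# One agent's "decision" is a deterministic function of its own internal
# state; the other's is sampled noise that it neither owns nor uses.

class DeterministicAgent:
    """The agent *is* its state; the next act follows from that state alone."""

    def __init__(self, mood: int):
        self.mood = mood  # internal state -- there is no agent besides this

    def decide(self) -> str:
        # No external force picks the action: the rule and the state are
        # both parts of the same system.
        return "work" if self.mood > 0 else "rest"


class NoiseDrivenAgent:
    """The 'decision' is whatever the dice say: buffeted, not free."""

    def decide(self) -> str:
        return random.choice(["work", "rest"])


print(DeterministicAgent(mood=5).decide())  # always "work" for this state
print(NoiseDrivenAgent().decide())          # a coin flip, every time
```

Nothing about the second agent looks more "free" than the first; it's just less connected to anything you could call a self.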
To the *external* observer, free will is indistinguishable from randomness; that means that if there is no randomness to be observed, there is necessarily no free will either. That is because free will is the subjective report of what, to the eyes of others, is one's own randomness. Concepts like free will are based upon what you call 'duality'; if that goes away, then so do those concepts, since they are an explanation for a contradiction that does not exist without the 'duality'.
We can know what kinds of scripts it has. You probably need self-reflection and learning functions for sentience.
No, you merely need those functions to *appear* sentient. You are replicating the external behavior of sentient beings through scripts, as the present DF creatures do. Doing it sufficiently better does not inherently mean you are doing anything except doing it better; or you could have accidentally made actually conscious beings, and you would have no way to tell.
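To be concrete about what "replicating the external behavior through scripts" means, here's a toy sketch (entirely made up, not actual DF code):

```python
# A creature whose "grief" is a fixed script over one state variable. It
# reproduces the external behavior ("cries", "throws a tantrum") while
# making no claim whatsoever about inner experience.

class ScriptedDwarf:
    def __init__(self) -> None:
        self.stress = 0

    def witness_death(self, was_friend: bool) -> None:
        # A fixed rule updates state; from the outside, the result *looks*
        # like feeling, and the outside is all the script has to get right.
        self.stress += 50 if was_friend else 10

    def behave(self) -> str:
        if self.stress > 40:
            return "throws a tantrum"
        if self.stress > 20:
            return "cries"
        return "continues working"


urist = ScriptedDwarf()
urist.witness_death(was_friend=True)
print(urist.behave())  # "throws a tantrum"
```

Make the script arbitrarily more elaborate and the behavior gets arbitrarily more convincing, but nothing in the code tells you whether anything is being felt.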
Well, if this discussion is going to continue, might as well drop some stuff that came up in another thread where we were discussing the same topic.
Well, let's take a look at some theories of consciousness/sentience.
First we have the attention schema theory, which... claims that "consciousness" is just a machine's attempt to build a model of itself; ultimately consciousness as a whole isn't just an "illusion" but outright does not exist.
Secondly we have the global workspace theory, which states that all of the processes in the brain "compete" to send signals to a "global workspace," which can interact with any other process in a voluntary fashion. That is, all of our "conscious" processes are actually just subconscious ones attempting to influence other parts of the brain.
Thirdly we have the holonomic brain theory, which seems to state that cognition works like quantum physics; it does not say that the brain in any way relies on quantum properties or anything like that, but that consciousness behaves mathematically like quantum physics. Very distinct difference.
Fourthly we have the integrated information theory, which is mostly concerned with the criteria by which you could call any particular system "sentient." Specifically, it is a set of axioms, postulates, and mathematical formulations of both that describe the characteristics a dynamic system must have in order to demonstrate consciousness. (There's a toy numerical sketch of the "integration" idea after this list.)
Fifthly we have the multiple drafts model, which states that consciousness isn't a property of a system or of its parts. Rather, it is a property of the flow of information itself. It takes the notion of qualia, throws it out, and regards consciousness as a description of behavior; that is, the properties of consciousness and the judgement of those properties are indistinguishable. It borrows from the global workspace theory the notion that particular neural processes compete for "consciousness," but specifically holds that such processes reach that state the moment they leave something behind (a trace in memory, say).
So, if you are looking for some criterion that lets you measure the "consciousness" of a system, try starting with integrated information theory. Regarding what consciousness "is" in the first place, try the multiple drafts model or the attention schema theory.
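If you want a numerical flavor of the "integration" idea, here's a toy sketch. To be clear, this is *not* the actual IIT phi calculation (which involves minimizing over partitions of a system's cause-effect structure); it just computes the mutual information between two binary nodes, as a crude stand-in for "information the whole carries beyond its parts":

```python
import math
from itertools import product

def mutual_information(joint):
    """joint[(a, b)] -> probability that node A = a and node B = b."""
    pa = {a: sum(p for (x, _), p in joint.items() if x == a) for a in (0, 1)}
    pb = {b: sum(p for (_, y), p in joint.items() if y == b) for b in (0, 1)}
    mi = 0.0
    for (a, b), p in joint.items():
        if p > 0:
            mi += p * math.log2(p / (pa[a] * pb[b]))
    return mi

# Two independent coin-flip nodes: no integration at all.
independent = {(a, b): 0.25 for a, b in product((0, 1), repeat=2)}
# Two perfectly coupled nodes: one full bit shared between them.
coupled = {(0, 0): 0.5, (0, 1): 0.0, (1, 0): 0.0, (1, 1): 0.5}

print(mutual_information(independent))  # 0.0
print(mutual_information(coupled))      # 1.0
```

The real measure is far more involved, but the 0-versus-1 contrast is the basic intuition: the coupled pair carries information that neither node has on its own.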
The last one is the least nonsensical of the lot. The third one is not saying anything at all: of course the brain works according to quantum physics, because it is part of a material reality that does.