If I understand correctly what you are trying to say: being composed of atoms, which are composed of hadrons, which are composed of quarks, does not mean that a molecule has three different identities. Molecules are groups of atoms, which are groups of hadrons, which are groups of quarks (plus the binding forces, but that is irrelevant detail). It's just too expensive and unnecessary for human beings to refer to every single quark and the relationships between them, and we usually can't detect the individual components anyway, so we refer to clusters of quarks as "chairs". There are still only quarks (or whatever is left when the division stops), but we refer to them as one thing.
The chair I am sitting on has the following natures or identities: it is wooden, it is brown (with some kind of finish on it), it is made from pieces of wood which slot into holes in other pieces, it could easily be broken with enough force - and those are only some of the most obvious ones. I have a 24" 1080p HDTV. I identify it as a TV, a monitor, and a thing to play sound through or connect headphones to. A widget is not just a widget; it's also a paperweight, if we need it to be, or whatever else. All identities are assigned by observers; there can be many, and they can differ from person to person - they're not innate to the object.
Even if you consider "a specific nature or identity" to refer only to innate properties of a thing, verifiable by everyone, I could still point out that all substances undergo phase transitions, e.g. water freezing into ice or boiling into water vapor, which change their properties and how they behave and appear.
Illusions and simulations exist; they just aren't what the deceived think they are, so the definition still stands.
The definition I was using (from my own mind) was 'everything that is in the real world,' which excludes things in simulations (the code and data would exist; what they create would not). This makes sense to me: you would not talk about the people in your computer game existing, because they don't. Wikipedia gives a variety of possible definitions. One is "the world we are aware or conscious of through our senses, and that persists independently without them," which seems similar, except that if you were simulated it would take your existence to consist of the simulated world rather than the real one. Another definition is simply "everything."
I'm a big fan of the Chinese Room thought experiment, so I would dispute this and say that since one can value something, one can be certain that one is an entity and not a computer simulation. Furthermore, we cannot know the true nature of reality, but we can perceive all that is apparent, and that must suffice, because we are incapable of truly knowing reality.
I am aware of the Chinese Room thought experiment and Searle's viewpoints, but I think it is kind of a silly argument. How can you argue that there is something in brains which creates consciousness, and that this will prevent the creation of artificial consciousness, when you have no proof, evidence, or understanding of what creates consciousness in humans? It certainly may make things difficult if the goal were to replicate the way the brain works, and if we were talking about the reality we experience. But if we're referring to a hypothetical reality which is running ours as a simulation, the Chinese Room thought experiment has no bearing on the cause of human consciousness. Searle may think consciousness arises from something in the brain which we have not found. Other people may think it is because of qualia. It may be that we were programmed to have consciousness. If we are a simulated reality, we cannot know what those in the reality containing ours are capable of.
I think I responded to this in my last note, but I would say that the sensations you refer to are still acceptable substitutes because they do not change the facts of reality; they just make the data easier for one's consciousness to interpret.
However, it is impossible to write a program that can value; it is only possible to create a simulation that approaches an asymptote at perfection. I previously referenced the Chinese Room thought experiment, and I hold to it again here: function is not equivalent to "essence," for lack of a better word.
Say what? For theoretical AI in a theoretical world running ours as a simulation, we would be unable to say definitively "they can't possibly have made AI that could think," because we don't know the natural laws of that universe, how advanced its inhabitants are, how capable their computers are, and so on. For our reality, I refer you to http://en.wikipedia.org/wiki/Strong_ai.
To quote from quite some distance down that article:
The term "strong AI" was adopted from the name of a position in the philosophy of artificial intelligence first identified by John Searle as part of his Chinese room argument in 1980.[50] He wanted to distinguish between two different hypotheses about artificial intelligence:[51]
- An artificial intelligence system can think and have a mind. (The word "mind" has a specific meaning for philosophers, as used in "the mind body problem" or "the philosophy of mind".)
- An artificial intelligence system can (only) act like it thinks and has a mind.
The first one is called "the strong AI hypothesis" and the second is "the weak AI hypothesis" because the first one makes the stronger statement: it assumes something special has happened to the machine that goes beyond all its abilities that we can test. Searle referred to the "strong AI hypothesis" as "strong AI". This usage, which is fundamentally different than the subject of this article, is common in academic AI research and textbooks.[52]
The term "strong AI" is now used to describe any artificial intelligence system that acts like it has a mind,[1] regardless of whether a philosopher would be able to determine if it actually has a mind or not. As Russell and Norvig write: "Most AI researchers take the weak AI hypothesis for granted, and don't care about the strong AI hypothesis."[53] AI researchers are interested in a related statement:
- An artificial intelligence system can think (or act like it thinks) as well as or better than people do.
This assertion, which hinges on the breadth and power of machine intelligence, is the subject of this article.
Interesting question: what if there is no end to the nesting? We can't argue that it's impossible, because the simulated reality above us doesn't have to abide by our physical laws, and so on for its parent, ad infinitum. In fact, we can illustrate this with the variation possible within our own universe: a thing is not necessarily a ball, but it's easy to conceive of a simulation in which everything is a ball. In the Ball-Universe, the existence of Cat is impossible, even in thought; we could think of something cat-shaped and cat-behaved, provided the physical dimensions were the same, but it wouldn't be an actual Cat. Can we still say we exist for certain when the requirements for the nature of our parent universe are arbitrary?
Turtles all the way down!
Edit: One of the lists was misformatted.