Bay 12 Games Forum


Author Topic: A Good Old Fashioned Existential Thought Experiment

Grek

Re: A Good Old Fashioned Existential Thought Experiment
« Reply #30 on: June 01, 2011, 05:18:54 pm »

Ensemble is the most reasonable interpretation that I have come across thus far. The assumption that a wave function collapses into the one single universe that we observe is not something that is supported by the data. Both the mathematics and the experimental data point towards a collapseless theory.

Particles and the properties thereof are abstractions of something more fundamental. They are useful abstractions, much like classical mechanics is a useful abstraction of relativistic physics, but ultimately inaccurate with regard to the underlying mechanics of the universe when viewed at a quantum scale.

It is not possible to take an amplitude configuration that you are not measuring and insert a sensor into it without altering the configuration and the results of the configuration itself. Measuring position or momentum, two properties ultimately based on the underlying amplitude configurations of the "particles" being measured, has the same effect.

A perfect simulation of an object is meaningful for some definitions of the word 'perfect'. If you want to insist that the simulation simulate individual entities within it using identical entities on a one-to-one basis, rather than having some analogous entity, say a few bytes of data in a computer somewhere, then no, you do not have a simulation; you have a copy. But if you're willing to create a simulation out of different entities that are nonetheless in a perfectly isomorphic relationship to whatever is being simulated, then yes, you have a perfect simulation. My objection is solely to the notion that an object is a "simulation" of a physically identical object.

counting

Re: A Good Old Fashioned Existential Thought Experiment
« Reply #31 on: June 01, 2011, 05:47:34 pm »

Quote from: Grek
Ensemble is the most reasonable interpretation that I have come across thus far. The assumption that a wave function collapses into the one single universe that we observe is not something that is supported by the data. Both the mathematics and the experimental data point towards a collapseless theory.

Particles and the properties thereof are abstractions of something more fundamental. They are useful abstractions, much like classical mechanics is a useful abstraction of relativistic physics, but ultimately inaccurate with regard to the underlying mechanics of the universe when viewed at a quantum scale.

It is not possible to take an amplitude configuration that you are not measuring and insert a sensor into it without altering the configuration and the results of the configuration itself. Measuring position or momentum, two properties ultimately based on the underlying amplitude configurations of the "particles" being measured, has the same effect.

A perfect simulation of an object is meaningful for some definitions of the word 'perfect'. If you want to insist that the simulation simulate individual entities within it using identical entities on a one-to-one basis, rather than having some analogous entity, say a few bytes of data in a computer somewhere, then no, you do not have a simulation; you have a copy. But if you're willing to create a simulation out of different entities that are nonetheless in a perfectly isomorphic relationship to whatever is being simulated, then yes, you have a perfect simulation. My objection is solely to the notion that an object is a "simulation" of a physically identical object.

Most certainly a 'computer' can simulate a 'computer'; that's called a Universal Turing Machine. And we already use them everywhere: a Java program runs on a simulated machine, the JVM, which is itself just a program on your physical computer. And using Java, you can write a virtual machine program just like the JVM and run it on the JVM. Minecraft actually runs on the JVM, so it is already a computer inside a computer, and no doubt you could simulate a computer inside it, given enough time and resources. But this finite kind of simulation is probably not the same as the 'general simulation' you used. (A Turing machine is just a set of rules and a storage device; every computer is essentially one of them.)
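To make that concrete, here is a rough Python sketch of one machine simulating another. The rule table is a made-up toy (it just appends a 1 to a unary string), not any particular real machine:

def run_tm(rules, tape, state="start", pos=0, max_steps=1000):
    # A Turing machine is just a rule table plus a tape: look up
    # (state, symbol), write, move, change state, repeat until halt.
    tape = dict(enumerate(tape))              # sparse tape, '_' = blank
    for _ in range(max_steps):
        if state == "halt":
            break
        write, move, state = rules[(state, tape.get(pos, "_"))]
        tape[pos] = write
        pos += 1 if move == "R" else -1
    return "".join(tape[i] for i in sorted(tape))

rules = {
    ("start", "1"): ("1", "R", "start"),      # skip over existing 1s
    ("start", "_"): ("1", "R", "halt"),       # append one more 1, halt
}
print(run_tm(rules, "111"))                   # -> "1111"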
Currency is not excessive, but a necessity.
The stark assumption:
Individuals trade with each other only through the intermediation of specialist traders called: shops.
Nelson and Winter:
The challenge to an evolutionary formation is this: it must provide an analysis that at least comes close to matching the power of the neoclassical theory to predict and illuminate the macro-economic patterns of growth

Reelyanoob

Re: A Good Old Fashioned Existential Thought Experiment
« Reply #32 on: June 01, 2011, 06:29:44 pm »

The issue is that some mathematics cannot be simulated even in infinite time, on an infinite Turing machine. It's quite an assumption that all of our universe is of the simpler type of maths.

counting

Re: A Good Old Fashioned Existential Thought Experiment
« Reply #33 on: June 01, 2011, 11:47:24 pm »

Quote from: Reelyanoob
The issue is that some mathematics cannot be simulated even in infinite time, on an infinite Turing machine. It's quite an assumption that all of our universe is of the simpler type of maths.
It's way too easy to run an infinite loop on a computer (Turing machine): give it a for loop without a termination bound. A program is just a set of rules, just as a Turing machine is itself a set of rules, and below that the physical world should also be a set of rules (a finite amount). If it's finite, it can be simulated in a virtual machine, unless it violates the basic assumptions, e.g. a bit of information being both 0 and 1 at the same time; then something would need to be modified. A CPU is just a giant ALU with a control unit, main memory, registers, and some memory cache, and every other operation is an extension of combinations of these components and their finite set of rules, arranged into infinite permutations. Memory size is a problem, though, since the information required to store it without quantum bits (qubits) would be massive; with qubits it reduces greatly, from N=2^k down to k, a massive compression, but with limits as well.
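To be concrete (a toy Python sketch; exec stands in here for the 'virtual machine', and the nesting only goes two levels deep):

# The trivial non-terminating program is just a loop with no bound:
#   while True: pass
# And a machine can host a machine hosting a machine -- each exec
# level below is a program simulated by the level above it.
inner = 'print("hello from level 2")'
outer = "exec(%r)" % inner    # a program whose whole job is to run `inner`
exec(outer)                   # level 0 runs level 1, which runs level 2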

Grek

Re: A Good Old Fashioned Existential Thought Experiment
« Reply #34 on: June 02, 2011, 02:43:07 am »

Quote from: counting
Most certainly a 'computer' can simulate a 'computer'; that's called a Universal Turing Machine.

That's not the point I was making at all, and I am not sure how you arrived at it. I am only making the claim that it is impossible for a computer to simulate itself perfectly. If you try to make a computer that does that, it has to simulate itself running a simulation of itself, which in turn requires that the computer calculate the output of the simulation's simulation, the simulation's simulation's simulation and so on and so forth. You never get past the first frame of simulation.

That is wildly different from a computer simulating a different computer (say your computer "simulating" the minecraft universe, and then a block calculator built in minecraft running a tic-tac-toe game), as there are no recursion issues there. You would only get problems if you tried to build a computer in minecraft that ran a copy of minecraft, and then made a minecraft bot that translated a copy of the savegame for your minecraft world, containing the computer and a copy of itself, into a block-memory savegame and minecraft bot executable that you then plugged into your block computer that ran minecraft. Doing so would result in your block computer crashing, and then your minecraft client crashing, as your computer hits an infinite loop and craps its metaphorical pants.
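In code, the regress looks something like this (a toy Python sketch; the 'simulator' is hypothetical and just calls itself on its own description):

import sys
sys.setrecursionlimit(100)   # keep the inevitable crash small

def simulate(snapshot):
    # A perfect self-simulation must include the simulator itself,
    # which must include the simulation it is running, and so on.
    return simulate(snapshot)          # never produces even one frame

try:
    simulate("the machine, including this very program")
except RecursionError:
    print("stack exhausted before the first frame of output")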

Reelyanoob

Re: A Good Old Fashioned Existential Thought Experiment
« Reply #35 on: June 02, 2011, 03:33:23 am »

Quote from: Reelyanoob
The issue is that some mathematics cannot be simulated even in infinite time, on an infinite Turing machine. It's quite an assumption that all of our universe is of the simpler type of maths.
Quote from: counting
It's way too easy to run an infinite loop on a computer (Turing machine): give it a for loop without a termination bound. A program is just a set of rules, just as a Turing machine is itself a set of rules, and below that the physical world should also be a set of rules (a finite amount). If it's finite, it can be simulated in a virtual machine, unless it violates the basic assumptions, e.g. a bit of information being both 0 and 1 at the same time; then something would need to be modified. A CPU is just a giant ALU with a control unit, main memory, registers, and some memory cache, and every other operation is an extension of combinations of these components and their finite set of rules, arranged into infinite permutations. Memory size is a problem, though, since the information required to store it without quantum bits (qubits) would be massive; with qubits it reduces greatly, from N=2^k down to k, a massive compression, but with limits as well.

A program is an algorithm; there's a 1:1 correspondence between algorithms and Turing machines. Turing machines by definition have infinite memory, even the simplest one. There's no need to bring hardware tech jargon into the debate; it's not relevant, and that's why we use Turing machines for these discussions in the first place. Technology-speak only clouds the point you may be trying to make. Whether or not quantum processors make things more compact is irrelevant when talking about limits of computation on infinite Turing machines. They can be as inefficient as they like, because they are infinite. Each logic gate could be cleverly crafted from trained everlasting kittens and a ball of string and still produce the same output: Quantum Processors vs Kitten-Powered Computing, it does the same thing in a Turing sense.

My point was that even an infinite Turing machine, given infinite memory/processor time, is still limited by computability and set theory. That is, there are transcendental numbers which cannot be computed by Turing machines/algorithms even in infinite time. Actually there are a lot of them, so many that they cannot even be counted in infinite time. This is related to the "higher infinities" discovered by Georg Cantor, the creator of set theory according to wiki.
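The countability half of that argument is easy to sketch: every program is a finite string over a finite alphabet, so all programs (and hence all computable numbers) can be listed one by one, while Cantor's diagonal argument shows the reals can't be. A toy Python version:

from itertools import count, product

ALPHABET = "01"   # any finite alphabet will do

def all_programs():
    # Enumerate every finite string, shortest first: a listing which
    # shows the set of programs -- and so of computable numbers -- is
    # countable. The reals are not, so 'most' reals are uncomputable.
    for length in count(1):
        for chars in product(ALPHABET, repeat=length):
            yield "".join(chars)

gen = all_programs()
print([next(gen) for _ in range(6)])   # ['0', '1', '00', '01', '10', '11']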

If the universe depends on any of these transcendental numbers, then it can only be approximated, rather than directly calculated/computed or simulated, even if the universe itself is quite small.

So, if the maths underlying the universe falls under the type of "non-computable" equations, then no amount of processing, even an infinite amount, can compute those equations. These topics are discussed at length by Roger Penrose in his book The Emperor's New Mind, though I disagree with some of his conclusions.

counting

Re: A Good Old Fashioned Existential Thought Experiment
« Reply #36 on: June 02, 2011, 05:39:41 pm »

Quote from: counting
Most certainly a 'computer' can simulate a 'computer'; that's called a Universal Turing Machine.
Quote from: Grek
That's not the point I was making at all, and I am not sure how you arrived at it. I am only making the claim that it is impossible for a computer to simulate itself perfectly. If you try to make a computer that does that, it has to simulate itself running a simulation of itself, which in turn requires that the computer calculate the output of the simulation's simulation, the simulation's simulation's simulation and so on and so forth. You never get past the first frame of simulation.

That is wildly different from a computer simulating a different computer (say your computer "simulating" the minecraft universe, and then a block calculator built in minecraft running a tic-tac-toe game), as there are no recursion issues there. You would only get problems if you tried to build a computer in minecraft that ran a copy of minecraft, and then made a minecraft bot that translated a copy of the savegame for your minecraft world, containing the computer and a copy of itself, into a block-memory savegame and minecraft bot executable that you then plugged into your block computer that ran minecraft. Doing so would result in your block computer crashing, and then your minecraft client crashing, as your computer hits an infinite loop and craps its metaphorical pants.

You are mistaken about the infinite loop thing. The true picture is that the computer simulates logic gates, the logic gates form a computer, and that computer forms the next layer of simulation. Your assumption is that they all have to be simulated simultaneously; that's not the case. The computer works fine just simulating something low level; it doesn't care what those components are for. What happens is that the next level will be slower: the computer only calculates its own next level of simulation at its own rate, and the next computer does the same. The only difference is the rate of each computing cycle. Whether it crashes or not is the same question as whether your computer crashes or not, and has nothing to do with whether it is a simulated computer.

And a 'computer', as I use the quotes, isn't defined by its looks or physical aspects; a Turing machine is just a program, or, in the next poster's words, an algorithm (which I am afraid you have misunderstood), and it is assumed to have infinite memory (working space). So in the current physical world, with limited resources to house memory, there is a limit to how many layers of simulation can go on, since each layer has to house the programs of the previous level, leaving the next level less and less memory, and running slower and slower, until the final level of simulation no longer has the memory space to house a next level. That is when the final level 'crashes', but the previous N-1 levels are still simulated on their own. Nothing changes; there is just no next level. And a computer OS like the one you are using now is in fact running an infinite loop waiting for the user to respond, and it never stops until it crashes, the same as any other simulated computer.
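As a toy calculation (all figures made up):

# Each level has to house the previous level's programs, so the memory
# left for the next level keeps shrinking until no further level fits;
# the outer N-1 levels keep running regardless.
TOTAL = 8 * 10**15     # bits available at level 0 (made-up figure)
CHILD_SHARE = 0.5      # fraction of each level left for its child (assumption)
SMALLEST = 10**6       # smallest machine still worth simulating (assumption)

mem, levels = TOTAL, 0
while mem * CHILD_SHARE >= SMALLEST:
    mem *= CHILD_SHARE
    levels += 1
print(f"{levels} nested levels fit; level {levels} has no room for a child")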

The simulated computer in minecraft doesn't have to exist just for the purpose of simulating other computers. Like your computer, it can be used as a 'computer for playing minecraft only', or for 'typing a post on some simulated Internet forum'. You think everything has to be pushed to that infinite end, but the truth is it isn't there for that purpose only; as 'Universal TM' suggests by name, it is for all kinds of work, not just simulations.
« Last Edit: June 02, 2011, 06:28:04 pm by counting »

counting

Re: A Good Old Fashioned Existential Thought Experiment
« Reply #37 on: June 02, 2011, 06:19:33 pm »

Quote from: Reelyanoob
The issue is that some mathematics cannot be simulated even in infinite time, on an infinite Turing machine. It's quite an assumption that all of our universe is of the simpler type of maths.
Quote from: counting
It's way too easy to run an infinite loop on a computer (Turing machine): give it a for loop without a termination bound. A program is just a set of rules, just as a Turing machine is itself a set of rules, and below that the physical world should also be a set of rules (a finite amount). If it's finite, it can be simulated in a virtual machine, unless it violates the basic assumptions, e.g. a bit of information being both 0 and 1 at the same time; then something would need to be modified. A CPU is just a giant ALU with a control unit, main memory, registers, and some memory cache, and every other operation is an extension of combinations of these components and their finite set of rules, arranged into infinite permutations. Memory size is a problem, though, since the information required to store it without quantum bits (qubits) would be massive; with qubits it reduces greatly, from N=2^k down to k, a massive compression, but with limits as well.
Quote from: Reelyanoob
A program is an algorithm; there's a 1:1 correspondence between algorithms and Turing machines. Turing machines by definition have infinite memory, even the simplest one. There's no need to bring hardware tech jargon into the debate; it's not relevant, and that's why we use Turing machines for these discussions in the first place. Technology-speak only clouds the point you may be trying to make. Whether or not quantum processors make things more compact is irrelevant when talking about limits of computation on infinite Turing machines. They can be as inefficient as they like, because they are infinite. Each logic gate could be cleverly crafted from trained everlasting kittens and a ball of string and still produce the same output: Quantum Processors vs Kitten-Powered Computing, it does the same thing in a Turing sense.

My point was that even an infinite Turing machine, given infinite memory/processor time, is still limited by computability and set theory. That is, there are transcendental numbers which cannot be computed by Turing machines/algorithms even in infinite time. Actually there are a lot of them, so many that they cannot even be counted in infinite time. This is related to the "higher infinities" discovered by Georg Cantor, the creator of set theory according to wiki.

If the universe depends on any of these transcendental numbers, then it can only be approximated, rather than directly calculated/computed or simulated, even if the universe itself is quite small.

So, if the maths underlying the universe falls under the type of "non-computable" equations, then no amount of processing, even an infinite amount, can compute those equations. These topics are discussed at length by Roger Penrose in his book The Emperor's New Mind, though I disagree with some of his conclusions.

Oh, Sir Penrose!... I hate his point of view on consciousness: that it needs to "emerge" from some cellular structure with quantum effects before we can have consciousness. The strong AI view holds that human brains can be simulated without that. I dislike his idea of invoking the physics there (and I know many others don't like it either).

The mathematical properties of set theory with infinities are very complex, and they are also very confusing when used in an argument with someone who has no idea about set theory. I believe I did the same thing by invoking terms from CPU design and qubits. But what I want to say is that it is really the properties of the physical world that prevent us from testing the theoretical aspects of some simulation/non-simulation possibilities.

The majority of people in the field of computer science don't care about the theoretical limitations of models that can't be tested at all (though they want to know what those limitations are in order to avoid them). And yes, we would like to know the answers to NP-complete problems, and how to identify and determine whether something can work or not. Some problems which are NP-hard will quickly overload our limited-resource computers unless we approximate, and we don't want that; limitless time or not, they can't be solved exactly. But most of the time it isn't that kind of question we are interested in, but rather the practical side of functionality. And qubits and quantum computing theory can solve some of these problems, not all; I don't think anything could be used to solve them all, since qubits or not, they are still countable and need to be physically stored. As of yet we have not constructed a full theory of quantum computing that runs solely on quantum bits; it has to be mixed with normal bits. For instance, when an 'if' (branching) happens, since there is no way to tell whether a qubit is 0 or 1, it must be decohered into 0 or 1 as a normal bit. If a true quantum CPU could deal with this, it would be massively powerful.
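The storage blow-up going the other way is easy to see: simulating k qubits on classical hardware means storing a full state vector of 2^k amplitudes. A quick Python check:

# Classical memory for the full state vector of k qubits:
# 2**k complex amplitudes at 16 bytes each (two float64s).
for k in (10, 20, 30, 40):
    print(f"{k} qubits -> {2**k:,} amplitudes -> {2**k * 16:,} bytes")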
« Last Edit: June 02, 2011, 06:48:17 pm by counting »

Grek

Re: A Good Old Fashioned Existential Thought Experiment
« Reply #38 on: June 02, 2011, 06:28:27 pm »

Let me try to break it down as a step by step process:

1. In the physical universe, you have a turing machine which allows you to run arbitrary programs.
2. You have a physics simulation program which accepts a file containing a start state as input and produces a later state as output. This physics simulator is 100% accurate with regards to the physics which govern the turing machine from (1).
3. You have a device that scans both itself and the turing machine from (1) and creates a data file formatted to function as the start state input for the physics simulator from (2) containing all of the required information about the status of the turing machine and the scanning device.
4. You create a program which starts and stops programs as described in the later steps.
5. The program from step (4) loads the program from (2) onto the turing machine from (1) and then tells the device from (3) to produce a start state file containing the device from (3), the turing machine from (1) and the programs from (2) and (4).
6. The program from step (4) inputs the start state file from step (5) into the program from step (2), causing the program from (2) to simulate every step in this process beyond step 5, including this one.

At this point, the Turing machine from (1) gets stuck in an infinite loop of step 6, consuming all processing power available to it and never proceeding beyond the first stage of step 6.

counting

Re: A Good Old Fashioned Existential Thought Experiment
« Reply #39 on: June 02, 2011, 06:57:31 pm »

Quote from: Grek
Let me try to break it down as a step by step process:

1. In the physical universe, you have a turing machine which allows you to run arbitrary programs.
2. You have a physics simulation program which accepts a file containing a start state as input and produces a later state as output. This physics simulator is 100% accurate with regards to the physics which govern the turing machine from (1).
3. You have a device that scans both itself and the turing machine from (1) and creates a data file formatted to function as the start state input for the physics simulator from (2) containing all of the required information about the status of the turing machine and the scanning device.
4. You create a program which starts and stops programs as described in the later steps.
5. The program from step (4) loads the program from (2) onto the turing machine from (1) and then tells the device from (3) to produce a start state file containing the device from (3), the turing machine from (1) and the programs from (2) and (4).
6. The program from step (4) inputs the start state file from step (5) into the program from step (2), causing the program from (2) to simulate every step in this process beyond step 5, including this one.

At this point, the Turing machine from (1) gets stuck in an infinite loop of step 6, consuming all processing power available to it and never proceeding beyond the first stage of step 6.

Please read something about algorithms, Turing machines, and what it means to "process" a procedure before you mash them up together. Your steps 1~3, about scanning "everything and the UTM", already imply infinity on their own. It's not an infinite system at all: you don't need everything in the universe to simulate a Turing machine, just finite resources, like your own computer. The problem lies in the infinite memory (working space).

Graphical representation: [image: a blue and a red memory bar for each level of simulation]

A UTM simulates from blue to red, and that's it, repeat. The rest of the memory is unused (infinite or not) until the next level of simulated UTM is needed; otherwise it's just doing the blue stuff. Every further level of a complete simulation adds additional running space and time, but the clock rate doesn't change. A problem only occurs when this is layered an infinite number of times. Otherwise it can keep running, just with more processes, and slower to show the next level.

Imagine you are on a planet (blue + red) with a super powerful computer (red) which can process any amount of data at super fast speed. One day you want to simulate a world, and you want it in as much detail as possible, so you buy a planet-sized memory in which each atom stores as much information as an atom contains. You run this simulation, and the computer is so powerful that it can process every bit of the information stored in that planet-sized memory in no time. Then, on that simulated planet, a simulated you tries the same thing and buys a planet-sized memory; in our universe you notice what he is doing, and to prevent overflow you buy a new planet-sized memory just to hold his planet-sized memory in his simulated world. So the next level of simulation starts, but your processor has to simulate other things in his world besides his computer (this is where it gets slower), and his computer starts to run simulations at a much slower rate by our standards. In his world he doesn't notice, since from his point of view all the processes in his world happen simultaneously. (Say 1/100 of the CPU time is used to simulate his computer; then his computer runs 100 times slower by our standard, but not by his.) These layers of simulation keep going, you keep buying planet-sized memories, and the mini-yous in their worlds do the same, except for the last-level guy, who hasn't done it yet.
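The compounding slowdown is easy to tabulate (toy figures, in Python):

# If each level spends 1/100 of its CPU time simulating its child's
# computer, one subjective second at level k costs 100**k seconds at
# level 0 -- but no level can notice its own slowdown from inside.
RATIO = 100   # parent cycles per child cycle (made-up figure)
for level in range(5):
    print(f"1 second at level {level} = {RATIO**level:,} seconds at level 0")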
« Last Edit: June 02, 2011, 07:59:53 pm by counting »

HollowClown

Re: A Good Old Fashioned Existential Thought Experiment
« Reply #40 on: June 08, 2011, 10:26:56 am »

Quote from: counting
A UTM simulates from blue to red, and that's it, repeat. The rest of the memory is unused (infinite or not) until the next level of simulated UTM is needed; otherwise it's just doing the blue stuff. Every further level of a complete simulation adds additional running space and time, but the clock rate doesn't change. A problem only occurs when this is layered an infinite number of times. Otherwise it can keep running, just with more processes, and slower to show the next level.

While this would work with infinite memory (an ideal Turing machine), it simply wouldn't work with finite memory.  The issue has to do with something called the pigeonhole principle.

The basic concept here is that, if you want to use/store/manipulate 2n bits of data, you need at least 2n bits of storage to put it in.  Otherwise, you'll inevitably start losing data fidelity.  But because Turing machines also need to keep track of their own internal state, you'll need some additional storage for that state.  So Turing machine A needs enough storage to hold both whatever it's simulating and its internal state; if Turing machine B is simulating Turing machine A, B needs enough storage to record A's data, A's state, and its own state.

This means that in your graphic, the length of the blue bar in each child simulation should be equal to the sum of the length of the red and blue bars in the parent simulation.  If the blue bars don't grow, you'll be stuck simulating either a smaller universe or a lower-fidelity universe -- in other words, you'd have to simulate a universe that contained less information than its parent.
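In toy numbers (hypothetical figures, Python):

# Pigeonhole bookkeeping: simulating a machine that holds D bits of
# data and S bits of state takes D + S bits of data in the machine
# one level up -- so the blue bars must grow at every enclosing level.
D, S = 1_000, 64    # innermost machine's data and state bits (made up)
for level in range(4):
    print(f"machine {level}: stores {D:,} data bits + {S} state bits")
    D += S          # the next machine up must hold all of this one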

counting

Re: A Good Old Fashioned Existential Thought Experiment
« Reply #41 on: June 08, 2011, 11:14:08 am »

Quote from: counting
A UTM simulates from blue to red, and that's it, repeat. The rest of the memory is unused (infinite or not) until the next level of simulated UTM is needed; otherwise it's just doing the blue stuff. Every further level of a complete simulation adds additional running space and time, but the clock rate doesn't change. A problem only occurs when this is layered an infinite number of times. Otherwise it can keep running, just with more processes, and slower to show the next level.
Quote from: HollowClown
While this would work with infinite memory (an ideal Turing machine), it simply wouldn't work with finite memory.  The issue has to do with something called the pigeonhole principle.

The basic concept here is that, if you want to use/store/manipulate 2n bits of data, you need at least 2n bits of storage to put it in.  Otherwise, you'll inevitably start losing data fidelity.  But because Turing machines also need to keep track of their own internal state, you'll need some additional storage for that state.  So Turing machine A needs enough storage to hold both whatever it's simulating and its internal state; if Turing machine B is simulating Turing machine A, B needs enough storage to record A's data, A's state, and its own state.

This means that in your graphic, the length of the blue bar in each child simulation should be equal to the sum of the length of the red and blue bars in the parent simulation.  If the blue bars don't grow, you'll be stuck simulating either a smaller universe or a lower-fidelity universe -- in other words, you'd have to simulate a universe that contained less information than its parent.

You can view it that way, or you can view it as B storing its own state within its own level: the red bar contains not only its own data but its own state. Or you can put it on the level above, as you suggested. Either way, as long as the UTM and its state are finite, there is no problem simulating a finite number of levels of them. The problem is always, as I said and you said, the infinite memory, since we already know memory is finite in our world. (There is only so much planet-sized memory you can buy.)