Are the laws themselves probabilistic, or do they act upon probabilistic inputs?
That's an interesting question. If we assume that the Universe's initial conditions are fixed and non-random, then it's the laws that must be probabilistic, in order for probability to appear in the Universe at all. If we don't assume that, it's unclear.
You are now begging the question. If QM were deterministic, like I think it is, then the whole world would be deterministic by your definition. You can only support your argument this way by assuming the very conclusion it is trying to establish. That is circular.
I am saying that the fundamental laws in all of physics are deterministic, except for QM, which I am not making a statement about within this sentence.
QM is effectively probabilistic relative to our observable reality, MWI or not. Therefore by my argument, the whole observable reality is probabilistic. I don't see what's circular about that argument.
And what would these "fundamental laws" be? I guess there are the conservation laws... but those are still probabilistic: as one interesting consequence of Heisenberg's uncertainty principle, the total energy of a system can fluctuate for a short period of time. Which means these "fundamental" laws aren't actually fundamental, but are more akin to Newtonian physics relative to Einstein's - a very good approximation that stops working under certain hard-to-reach conditions. Speaking of Einstein, the theory of relativity isn't quite fundamental either - it and QM are mutually inconsistent. While that makes QM itself not fundamental, it still makes one thing clear: whatever the truly fundamental theory turns out to be, in order to effectively predict our observable world, it must be probabilistic.
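(For reference - the relation being appealed to here is presumably the energy-time uncertainty relation, ΔE · Δt ≳ ħ/2: loosely speaking, a system's energy is only pinned down to within ~ħ/Δt when you look at it over a time window Δt.)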
You are confusing laws and objects. If a single probabilistic law is applied a trillion times, that is still a single probabilistic component of QM.
Not all laws are created equal. Laws that are applied more often are, obviously, more important.
f(x)=x+1
x=rand(0,1)
Is f deterministic?
Yes.
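To spell that toy example out as runnable code (a minimal Python sketch, my wording rather than anything from the exchange): the law f is a fixed, deterministic rule, and all the randomness lives in its input.

    import random

    def f(x):
        # The "law": a fixed, deterministic rule - same input, same output, every time.
        return x + 1

    x = random.uniform(0, 1)  # The "input"/object: this is the only place randomness enters.
    y = f(x)                  # The output varies from run to run, but only because x does.

So f stays deterministic even though its outputs look random; the randomness was fed in, not generated by the law.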
Yes, it does. Copenhagen claims that the unobservable parts of the wave function disappear. MWI claims that they keep evolving exactly as they always have - according to the Schrödinger equation. I'm pretty sure that's the right one.
Bullshit. Copenhagen doesn't claim that.
What mechanism would make it so that parts of the wave function go away, disobeying the Schrödinger equation, when they are observed? Why would you suppose that this is the case?
Also, Occam's razor works on laws, not objects.
Do I look like a God to you? I don't know why reality works the way it does. I don't know why, for example, the proton is ~1800 times more massive than the electron, and not some other factor. I don't know why like electric charges repel and opposite charges attract. I don't know why
the internal symmetries of the group SU(3) × SU(2) × U(1) are capable of accurately predicting nearly all the fundamental particles we know of, and their properties.
At some point, you have to just accept that you can't reduce the fundamental set of things any further without going into baseless speculation.
Also, Occam's razor is merely a guideline. The same goes for the "minimum description length" principle. The reason is that both depend on the language, the basic vocabulary, you describe things in - they are subjective measures. Unlike physical objects themselves, laws are merely descriptions written in one of the many mathematical languages developed over the course of history, which means that our judgment of "what's simpler" depends on the language we've formulated it in, and is thus subjective.
It usually is quite a good guideline, as long as you stay within certain bounds, but beyond that it becomes iffy. "Inshallah", or "God willed it", is a very simple law, after all, and it's capable of explaining anything - post factum, obviously, but since lack of predictive power doesn't count against a theory under Occam's razor, it's still super-simple. Yet we don't use it, because it has zero predictive power - which is a completely different measure from the mere "simplicity" of a law. You could incorporate the "predictive power" requirement into the "simplicity" measure to obtain a measure of the "real simplicity" of a law, but then you would have to count both the laws
and the objects involved, since the objects are the input parameters the law needs before it can start predicting observable reality.
Basically, this measure of "real simplicity" is what makes me feel that Copenhagen is simpler than MWI. MWI wins slightly by having a bit fewer laws, but it loses horribly on the object count.
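If you want that hand-wavy measure made slightly more concrete, here is a toy sketch in the spirit of two-part minimum description length (this framing and all the numbers are mine, purely for illustration):

    def mdl_cost(hypothesis_bits, residual_data_bits):
        # Two-part MDL: bits to state the law, plus bits still needed to reconstruct
        # the observations once the law is given. Zero predictive power means the law
        # compresses nothing, so the second term stays as large as the raw data.
        return hypothesis_bits + residual_data_bits

    inshallah = mdl_cost(hypothesis_bits=20, residual_data_bits=10**9)     # tiny law, compresses nothing
    physics = mdl_cost(hypothesis_bits=10**5, residual_data_bits=10**6)    # bigger law, huge compression

Under a score like this, "God willed it" loses badly despite being the shortest law on paper, which is roughly the point about "real simplicity" above.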
It is a more elegant interpretation, and does not claim that its laws are periodically disobeyed whenever an "interaction" happens. Thus it is better. For now, they give the same experimental predictions, but that might not always be the case. And if they are ever distinguishable by experiment, I'd bet that there's at least an 80% chance that MWI is correct.
Barring the first part, where did you get your "80% chance" figure from?
For instance: we know that gases act a certain way. But why do they act that way? Why does compressing a gas - increasing the pressure - raise its temperature?
Because heat is related to motion, and because gases are made of molecules. Now we understand the more fundamental entities and laws behind the previously-discovered statistical laws.
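For concreteness (this is just the standard kinetic-theory statement, not something from the original exchange): for an ideal gas, P·V = N·k_B·T, and temperature is nothing but average molecular kinetic energy, (1/2)·m·⟨v²⟩ = (3/2)·k_B·T - pressure is molecules banging on the walls, and heat is their motion.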
OK. Now a question - do you believe that this process can be continued forever? Or is there some limit, some fundamental combination of entities, that cannot be reduced any further?
Ah, I think I see. Other things in physics also behave as continuous or discrete under different circumstances. I admit that's a good counterargument, but where's the explanation for why the wave function would suddenly collapse when observed? If collapse were really part of QM, wouldn't it arise from the equations themselves, just as the continuous/discrete possibilities arose from the solution to the Schrödinger-like equation in the link you gave? I didn't quite follow the entire article, but it looked like the possibilities weren't added in post hoc - they were the result of known physical constraints.
To be blunt, I think that with things like that, there's no more fundamental explanation. It's just the way our reality is. Why do wave functions obey the Schrödinger equation? Why not some other equation? The Schrödinger equation is based on certain principles of quantum mechanics (such as linearity of the wave function and conservation of its norm) that appear reasonable, but these principles themselves aren't derived - they're just set down axiomatically.
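(For reference, the equation in question, in its time-dependent form, is iħ ∂ψ/∂t = Ĥψ. It is linear in ψ, and because the Hamiltonian Ĥ is Hermitian, the time evolution is unitary and preserves the norm of ψ - exactly the two principles just mentioned.)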
For a famous example of something that looks like it could be derived, but is in reality an axiom - and a much more complex one than the others in the set of axioms defining a field of mathematics - take
Euclid's parallel postulate, for geometry on a flat surface. You can actually replace it, and doing so results in different geometries, such as Riemannian or Lobachevskian geometry.
But the important fact is, it looks
much, much more complex than the others:
"Let the following be postulated":
1. "To draw a straight line from any point to any point."
2. "To produce [extend] a finite straight line continuously in a straight line."
3. "To describe a circle with any centre and distance [radius]."
4. "That all right angles are equal to one another."
5. The parallel postulate: "That, if a straight line falling on two straight lines make the interior angles on the same side less than two right angles, the two straight lines, if produced indefinitely, meet on that side on which are the angles less than the two right angles."
At first glance, it pretty much looks like an error. And indeed, many people have tried to write an
explanation - a derivation of that fifth axiom from the first four. But they've all failed. Despite looking like the odd one out, it's fundamental, and cannot be reduced any further if you want to describe geometry on a flat plane.
The same, I believe, applies to wave-function collapse. Sure, you can define alternative sets of axioms that don't invoke it directly, like MWI, but among all the formulations I've seen, the one with collapse is still the "real simplest", by the measure of "real simplicity" I defined earlier.
Not quite infinitely large, but certainly large. (State space is the same as configuration space, right?)
It's closely related to phase space (which also tracks momenta), and while it isn't technically infinite, you have to specify a few parameters for every particle in the Universe. Last time I checked, the number of particles in the observable Universe is about 10^80. Kinda hard to work with a function that takes that many inputs.
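A rough count, just to make the point vivid (my arithmetic, not from the thread): ~10^80 particles × 3 position coordinates each gives a configuration space of ~3 × 10^80 dimensions; include the momenta for phase space and that roughly doubles. A wave function over a space like that is not something anyone will ever evaluate point by point.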
It might be more functional to do so. Perhaps your way works best in practice; I'm fine with that. But I think that reality itself tends to work elegantly, regardless of our inelegant and quick hacks.
Eh. Minimum effort principle.
I will grant that we are probably using different criteria to determine which theory is better.
Definitely.
Sergarr, apparently, does not have a very high opinion of LW.
To clarify why: for many, many years, LW's idée fixe about AI (that being Bayesian superintelligence) was theoretically almost perfect, but also absolutely unimplementable in practice, for the simple reason that the computation required by Bayes' theorem doesn't scale well to a large hypothesis space, and it also suffers from severe numerical instability, because you have to multiply and divide by very small numbers in almost all situations.
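As a toy illustration of that last point (a minimal Python sketch with made-up numbers): multiplying many small likelihoods together underflows to zero in floating point, which is exactly why practical implementations are forced to work in log space.

    import math

    likelihoods = [1e-6] * 200   # 200 observations, each assigned a tiny likelihood

    naive = 1.0
    for p in likelihoods:
        naive *= p               # keeps shrinking; double precision bottoms out near 1e-308
    print(naive)                 # -> 0.0, the unnormalized posterior has underflowed to nothing

    log_post = sum(math.log(p) for p in likelihoods)  # the same product, done in log space
    print(log_post)              # -> about -2763.1, perfectly finite and usable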
Those minor issues didn't prevent them from also claiming that we're about to invent a self-improving AI (operating on Bayesian principles, because of course it would) that will rewrite reality like a God via nanomagic, and that
such a Bayesian superintelligence would come up with General Relativity as a "hypothesis under consideration" based on only three webcam images of a falling apple - a statement that still makes me go "WTF" every time I see it.