“Doesn’t physics say this universe is going to run out of negentropy before you can do an infinite amount of computation?” Actually, there is a proposal that could create a computer that runs forever.
I see. Does the method of normalization you gave work even when there is an infinite number of hypotheses?
Decreasing existential risk isn’t incredibly important to you? Could you explain why?
Right; I forgot that it used a prefix-free encoding. Apologies if the answer to this is painfully obvious, but does having a prefix-free encoding entail that there is a finite number of possible hypotheses?
I still don’t see how that would make all the hypotheses sum to 1. Wouldn’t that only make the probabilities of all the hypotheses of length n sum to 1, and thus make the sum over all hypotheses greater than 1? For example, consider all the hypotheses of length 1. Assuming Omega = 1 for simplicity, there are 2 hypotheses, each with a probability of 2^-1/1 = 0.5, making them sum to 1. There are 4 hypotheses of length 2, each with a probability of 2^-2/1 = 0.25, making them sum to 1. Thus, the sum of the probabilities of all hypotheses of length ≤ 2 is 2 > 1.
Is Omega doing something I don’t understand? Would all hypotheses be required to be some set length?
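To make my arithmetic concrete, here is a minimal sketch (assuming, as in my example, that every binary string counts as a hypothesis weighted 2^-length; the prefix-free set at the end is just a toy, not the actual encoding under discussion):

```python
# If *every* binary string were a hypothesis weighted 2^-length, each length
# class would contribute exactly 1, so the running total exceeds 1 immediately.
def total_weight_all_strings(max_len):
    return sum((2 ** n) * (2 ** -n) for n in range(1, max_len + 1))

print(total_weight_all_strings(2))  # 2.0, matching the "2 > 1" above

# With a prefix-free code, no string is a prefix of another, and Kraft's
# inequality bounds the same kind of sum by 1.
toy_prefix_free = ["0", "10", "110", "111"]  # hypothetical toy code
print(sum(2 ** -len(p) for p in toy_prefix_free))  # 1.0
```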
That would assign a zero probability to hypotheses that take more than n bits to specify, would it not? That sounds far from optimal.
Did you not previously state that one should learn as much as one can about the problem before coming to a conclusion, lest one fall prey to confirmation bias? Should one learn about the problem fully before making a decision only when one doesn’t suspect oneself of being biased?
“Of course” implies that the answer is obvious. Why is it obvious?
Unfortunately, Chaitin’s Omega is incomputable, but even if it weren’t, I don’t see how it would work as a normalizing constant. Chaitin’s Omega is a real number, there is an infinite number of hypotheses, and (IIRC) there is no real number r such that r multiplied by infinity equals one, so I don’t see how Chaitin’s Omega could possibly work as a normalizing constant.
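To illustrate what I mean: if every hypothesis received one and the same weight r (an assumption on my part; the particular value of r is arbitrary), the partial sums just keep growing, so no single real constant could make them total 1:

```python
# Infinitely many hypotheses all carrying the same weight r > 0: the partial
# sums grow without bound, so no single real r can normalize them to 1.
r = 1e-9  # arbitrary small weight, purely illustrative
partial_sum = sum(r for _ in range(10 ** 7))
print(partial_sum)  # ~0.01 after ten million hypotheses, and still growing with more
```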
If I understand correctly, Yudkowsky finds philosophical zombies implausible because they would require consciousness to have no causal influence on reality. That, he seems to believe, would entail that if there were philosophical zombies, it would be purely coincidental that accurate discussions of consciousness are carried out by those who are conscious, which is very improbable; thus philosophical zombies are very implausible. This reasoning seems flawed, as discussing and thinking about consciousness could cause consciousness to exist, while that consciousness has no effect on anything else. For philosophical zombies to exist, thinking about consciousness would have to bring about consciousness only in certain substrates.
Is there any justification that Solomonoff Induction is accurate, other than intuition?
If I understand Solomonoff Induction correctly, for all n and p the sum of the probabilities of all the hypotheses of length n equals the sum of the probabilities of hypotheses of length p. If this is the case, what normalization constant could you possibly use to make all the probabilities sum to one? It seems there are none.
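As a sanity check, here is a toy calculation (assuming, as one possible reading, that the hypotheses form a prefix-free set of binary programs, each weighted 2^-length; the four-string code below is made up):

```python
from collections import defaultdict

toy_prefix_free = ["0", "10", "110", "111"]  # hypothetical prefix-free code

# Sum the 2^-length weights separately for each program length.
per_length = defaultdict(float)
for program in toy_prefix_free:
    per_length[len(program)] += 2 ** -len(program)

print(dict(per_length))          # {1: 0.5, 2: 0.25, 3: 0.25}: the per-length sums differ
print(sum(per_length.values()))  # 1.0: the overall total stays bounded
```

Under that prefix-free reading the per-length sums aren’t all equal, so maybe the premise depends on which strings actually count as hypotheses.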
“Mystery, and the joy of finding out, is either a personal thing, or it doesn’t exist at all—and I prefer to say it’s personal.” I don’t see why this is the case. Can’t one take joy only in finding out what no one in the Solar System knows? That way, one can still have the joy, but it still isn’t personal.
Should one really be so certain about there being no higher-level entities? You said that simulating higher-level entities takes fewer computational resources, so perhaps our universe is a simulation and the creators, in an effort to save computational resources, made the universe do computations on higher-level entities when no one was looking at the “base” entities. Far-fetched, maybe, but not completely implausible.
Perhaps if we start observing too many lower-level entities, the world will run out of memory. What would that look like?
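As a toy sketch of what I’m imagining (assuming the simulators use something like lazy evaluation; all the names below are made up for illustration):

```python
class Region:
    """Keeps a cheap high-level state; only builds low-level detail when observed."""

    def __init__(self, high_level_state):
        self.high_level_state = high_level_state  # always stored, cheap
        self._low_level_state = None              # not computed until someone looks

    def observe_low_level(self):
        if self._low_level_state is None:
            # Hypothetical expensive step: memory use grows with each observed region.
            self._low_level_state = [self.high_level_state] * 1_000_000
        return self._low_level_state


regions = [Region(i) for i in range(100)]
regions[0].observe_low_level()  # only the observed region pays the low-level cost
```

Observing more and more regions would keep allocating low-level detail, which is one way “running out of memory” could look.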
What makes you think that the argument you just gave was generated by you for a reason, instead of for no reason at all?
What evidence is there for mice being unable to think about thinking? Due to communication issues, mice can’t tell us whether or not they can think about thinking.
What evidence is there for floating beliefs being uniquely human? As far as I know, neuroscience hasn’t advanced far enough to be able to tell if other species have floating beliefs or not.
Edit: Then again, the question of whether floating beliefs are uniquely human is practically a floating belief itself.
Then this solution just assumes the probability of there being infinitely many people is 0. If this solution is based on premises that are probably false, then how is it a solution at all? I understand that infinity creates even bigger problems, so we should instead just call your solution a pseudo-solution-that’s-probably-false-but-is-still-the-best-one-we-have, and dedicate more effort to finding a real solution.
I really appreciate having the examples in parentheses and italicised. It lets me easily skip them when I know what you mean. I wish others would do this.