Not being careful in making descriptive statements:
My brain has preferences between probability distributions built into it.
As humans using Solomonoff induction, we go on to argue that
Fundamental mental entities:
Rather than supposing that the probability of a certain universe depends on the complexity of that universe, it takes as a primitive object a probability distribution over possible experiences.
Unsubstantiated claims:
The shortest description of me is a pair (U, x), where U is a description of my universe and x is a description of where to find me in that universe.
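For concreteness, the two quoted claims can be read as one formal picture. The following is only a sketch of that reading, assuming a Solomonoff-style 2^{-length} weighting over descriptions; the symbols U, x, e, and \ell are mine, not notation from the post:

\[
  P(e) \;\propto\; \sum_{(U,\,x)\,:\,e(U,x)=e} 2^{-\left(\ell(U)+\ell(x)\right)}
\]

where U is a program describing a universe, x says where to find an observer within that universe, e(U, x) is the experience that observer has, and \ell(\cdot) is description length. On this reading, "the shortest description of me" is just the dominant term of the sum: the pair (U, x) minimizing \ell(U) + \ell(x).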
Not being careful in making descriptive statements:
I don’t understand how these descriptive statements could be made more careful. In the first statement, I go on to explain exactly what I mean as well as I can. Do you not think my description refers to a function your brain performs? In the second statement, are you objecting to my use of “we” rather than an explicit list of people (e.g., me, Yudkowsky, Solomonoff...)?
Fundamental mental entities:
As long as I don’t understand what consciousness is, it seems this problem is unavoidable. Should we not talk about anthropics until we solve the problem of consciousness? That seems like a bad option, since we may well have to make choices about simulations long before then.
Unsubstantiated claims:
My claim is better substantiated than the claim that Solomonoff induction is a reasonable thing to do for a human scientist. Admittedly that may not be the case, but it’s pretty well accepted here and has been argued at great length by many other thinkers (e.g., Solomonoff).