If there’s a pool of unknown things that are infinitely important, and which things they are correlates positively with what would be important otherwise, then that gives you a lower bound on the probability of the scenarios you should take seriously no matter how high their utility. I’m not sure it’s a very high lower bound, though.
It sounds like there may be a great point in here. I can’t quite see what it is or whether it works, though. Could you maybe spell it out with some variables or math?
There’s also a class of things that we can’t really decide rationally because they’re far more improbable than our understanding of decision theory being completely wrong.
If we use “decide rationally” to mean “decide in the way that makes most sense, given our limited knowledge and understanding” rather than “follow a particular procedure with a certain sort of justification”, I don’t think this is true. We should just be able to stick a probability on our understanding of decision theory being right, estimate conditional probabilities for outcomes and preferences given that our understanding is or isn’t right, etc. There wouldn’t be a definite known framework in which this was rigorous, but it should yield good best-guess probabilities for decision-making, the way taking account of any other structural uncertainty does.
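To make that concrete, here’s a minimal sketch of the procedure I have in mind, with entirely made-up numbers (the credence, the two actions, and the conditional utilities are all hypothetical):

```python
# Toy sketch, illustrative numbers only: fold uncertainty about whether our
# understanding of decision theory is right into an ordinary best-guess estimate.

p_theory_right = 0.9  # hypothetical credence that our decision theory is basically right

# Hypothetical expected utilities of two actions, estimated separately under
# "our understanding is right" and "it's wrong and something else matters".
eu_if_right = {"act_A": 5.0, "act_B": 3.0}
eu_if_wrong = {"act_A": 1.0, "act_B": 2.0}

def best_guess_eu(action):
    """Weight the conditional estimates by the credence in each hypothesis."""
    return (p_theory_right * eu_if_right[action]
            + (1 - p_theory_right) * eu_if_wrong[action])

for action in ("act_A", "act_B"):
    print(action, best_guess_eu(action))  # act_A: 4.6, act_B: 2.9
```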
It sounds like there may be a great point in here. I can’t quite see what it is or whether it works, though. Could you maybe spell it out with some variables or math?
Suppose you have a prima facie utility function U on ordinary outcomes; and suppose that you estimate that, due to unknown unknowns, the probability that your real utility function V is infinite for each ordinary outcome is 1/(10^100) * U(outcome). Then you should prefer eating a pie with U = 3 utils (versus, say, 1 util for not eating it) to a 1 in 10^200 chance of going to heaven and getting infinite utils (which I’m counting here as an extraordinary outcome that the relationship between U and V doesn’t apply to).
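Spelling out the comparison (a quick check using only the numbers above; the Fraction type is just to avoid floating-point underflow): eating the pie gives a 3/10^100 chance of infinite V via unknown unknowns, while the gamble branch gives 1/10^200 plus the 1/10^100 from the U = 1 outcome, so the pie wins even on infinite-utility grounds.

```python
from fractions import Fraction

# Sketch of the pie example: P(V is infinite | ordinary outcome) = U(outcome) / 10**100.
def p_infinite(u):
    return Fraction(u, 10**100)

# Option A: eat the pie (U = 3).
p_inf_pie = p_infinite(3)

# Option B: take the 1-in-10**200 shot at heaven; otherwise you don't eat the pie (U = 1).
p_inf_gamble = Fraction(1, 10**200) + p_infinite(1)

print(p_inf_pie > p_inf_gamble)  # True: the pie gives the larger chance of
                                 # infinite utility, and 2 more ordinary utils besides.
```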
If we use “decide rationally” to mean “decide in the way that makes most sense, given our limited knowledge and understanding” rather than “follow a particular procedure with a certain sort of justification”, I don’t think this is true.
I’m confused here, but I’m thinking of cases like: there’s a probability of 1 in 10^20 that God exists, but if so, then our best guess is also that 1=2. If God exists, then the utility of an otherwise identical outcome is (1/1)^1000 times what it would otherwise be; but since 1=2 on that hypothesis, it’s also (1/2)^1000 times what it would otherwise be, so can we ignore that case?
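Putting numbers on the two readings (a sketch using only the figures above; whether this move is legitimate is exactly what I’m unsure about):

```python
from fractions import Fraction

p_god = Fraction(1, 10**20)               # probability that God exists
mult_as_written = Fraction(1, 1)**1000    # (1/1)^1000 = 1: no change to the utility
mult_given_1_eq_2 = Fraction(1, 2)**1000  # same expression with 1 swapped for 2

print(float(mult_given_1_eq_2))           # ~9.3e-302
# On the second reading the case carries weight p_god * (1/2)^1000 ~ 10^-321,
# negligible next to any ordinary-stakes consideration.
print(p_god * mult_given_1_eq_2 < Fraction(1, 10**300))  # True
```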
I suspect reasoning like this would produce cutoffs far below Roko’s, though. (And the first argument above probably wouldn’t reproduce normal behavior.)