If I knew the answer, I would be smarter than Yudkowsky, who writes:
It doesn’t feel to me like 3^^^^3 lives are really at stake, even at very tiny probability. I’d sooner question my grasp of “rationality” than give five dollars to a Pascal’s Mugger because I thought it was “rational”.
Something seems to be fundamentally wrong with using Bayes’ Theorem, the expected utility formula, and Solomonoff induction to decide what to do in unbounded-utility scenarios. If you just admit that the framework is wrong but less wrong, then I think it is valid to scrutinize its upper and lower bounds. Yudkowsky clearly sets some upper bound, but what is it, and how does he determine it if not by ‘gut feeling’? And if it all comes down to ‘instinct’ about when to disregard an expected-utility calculation, then how can one still refer to those heuristics as ‘laws’?
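To make the problem concrete, here is a minimal sketch of the naive expected-utility comparison the mugger exploits. Since 3^^^^3 cannot be represented directly, a googol stands in for it, and the probability used is an arbitrary illustrative value, not anyone’s actual prior:

```python
# Minimal sketch of the naive expected-utility comparison behind Pascal's Mugging.
# 3^^^^3 cannot be represented, so a googol stands in for it; the probability
# below is an arbitrary illustrative value, not a claim about the correct prior.

mugger_payoff = 10 ** 100        # stand-in for 3^^^^3 lives saved (still vastly too small)
p_truthful = 1e-30               # arbitrarily small credence that the mugger delivers
cost_of_paying = 5               # the five dollars demanded

eu_pay = p_truthful * mugger_payoff - cost_of_paying
eu_refuse = 0.0

print(eu_pay)                    # ~1e70: the enormous payoff swamps the tiny probability
print(eu_pay > eu_refuse)        # True -- the bare formula says to pay the mugger
```

The point of the sketch is that no fixed cost and no merely ‘very small’ probability can outweigh a payoff that is allowed to grow without bound, so whatever discounting rescues the decision has to come from somewhere outside the bare formula.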