This is very well-written, exceptionally clear-headed, and, I’d suggest, Main-worthy. This kind of thinking does seem to be what several people have converged or are converging on, including, IIRC, Wei Dai, Eliezer, some SPARC attendees who were given anthropics problems to try, possibly Carl Shulman, and presumably many others (e.g. other advocates of UDT and its offspring). Anthropics may well become the best example of LW making rapid progress on a significant open problem in philosophy and reaching consensus before mainstream philosophy does.
It really does seem to me that the massive confusion around the Doomsday argument comes from people who are very smart and even good at reductionism (e.g. even Bostrom, though I’ve by no means read all or even most of his work) lapsing into thinking about anthropics in a way where they might as well be talking about souls.
That said, as best I can tell, Eliezer has remained mysteriously silent on Sleeping Beauty and Doomsday, which makes me hesitate slightly to declare them solved. (E.g. I’d expect him to have endorsed a solution by now if he agreed and did not feel confused.) More specifically, last I heard, Eliezer held probability theory to be above vulgar things like betting, or something along those lines, and the lack of an obvious way to reconcile that view of probability with the dissolution of Sleeping Beauty in this post and the one I linked gives me pause. (This could be a failure of reductive effort on my part, though.)
Agreed that Eliezer is thinking similar thoughts, or at least thoughts which seem to me similar to those in this post. See Building Phenomenological Bridges (an article by Robby based on Eliezer’s Facebook discussion).
That article discusses (among other things) how an AI should form hypotheses about the world it inhabits, given its sense perceptions. The idea “consider all and only those worlds which are consistent with an observer having such-and-such perceptions, and then choose among those based on other considerations” is, I think, common to both these posts.
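To make the shared idea concrete, here is a minimal Python sketch of that pattern. This is my own toy illustration, not code from either post; the `World`, `consistent_worlds`, and `renormalized_credences` names, and the use of a simple prior as the “other considerations,” are all assumptions made for the example.

```python
# Toy illustration (not from either post): enumerate candidate worlds, keep
# all and only those consistent with the observer's perceptions, then weigh
# the survivors by other considerations -- here, a hypothetical prior.

from dataclasses import dataclass


@dataclass
class World:
    name: str
    prior: float            # credence before considering perceptions (assumed)
    perceptions: set[str]    # perceptions an observer embedded in this world could have


def consistent_worlds(worlds: list[World], observed: set[str]) -> list[World]:
    """Keep all and only the worlds in which an observer could have `observed`."""
    return [w for w in worlds if observed <= w.perceptions]


def renormalized_credences(worlds: list[World], observed: set[str]) -> dict[str, float]:
    """Distribute credence over the consistent worlds in proportion to the prior."""
    survivors = consistent_worlds(worlds, observed)
    total = sum(w.prior for w in survivors)
    return {w.name: w.prior / total for w in survivors}


if __name__ == "__main__":
    worlds = [
        World("world-A", prior=0.5, perceptions={"awake", "sees-red"}),
        World("world-B", prior=0.3, perceptions={"awake", "sees-blue"}),
        World("world-C", prior=0.2, perceptions=set()),  # no observer with these perceptions
    ]
    # Only worlds A and B are consistent with observing "awake";
    # credence is then split among them in proportion to their priors.
    print(renormalized_credences(worlds, observed={"awake"}))
    # {'world-A': 0.625, 'world-B': 0.375}
```

The point of the sketch is just the two-step structure: first a hard consistency filter over worlds given the observer’s perceptions, then a separate weighting step; what goes into that second step (priors, decision-theoretic considerations, etc.) is exactly what the posts argue about.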