A person is a complicated machine; we can observe how this machine develops, or could develop, through processes we set up in the world or consider hypothetically. This is already quite clear, and notions like "first person perspective" or "I will observe" don't make it any clearer.

So I don't see a decision theory proclaiming "QI is false!" — it's simply not a consideration the theory needs to deal with at any point, even if there were some way to state more clearly what that consideration means. Just as a chip designer doesn't need to appreciate the taste of good cheese to make better AI accelerators.