It’s not a utility function over inputs; it’s a utility function over the accuracy of models.
Affecting the accuracy of a specified model—a term defined as “how well it predicts future inputs”—is a subset of optimizing future inputs.
If I were a shminux-style rationalist, I would not choose to go to the holodeck because that does not actually make my current preferred models of the world more accurate. It makes the situation worse, actually, because in the me-in-holodeck model, I get misled and can’t affect the stuff outside the holodeck.
You’re still thinking like a realist. A holodeck doesn’t prevent you from observing the real world—there is no “real world”. It prevents you from testing how well certain models predict experiences when you take the action “leave the holodeck”, unless of course you leave the holodeck. It’s an opportunity cost and nothing more, and a minor one at that, since information holds only instrumental value.
Just because someone frames things differently doesn’t mean they have to make the obvious mistakes and start killing babies.
Pardon?
For example, I could do what you just did to “maximize expected utility over possible worlds” by choosing to modify my brain to have erroneously high expected utility. It’s maximized now, right? See the problem with this argument?
Except that I (think that I) get my utility over the world, not over my experiences. Same reason I don’t win the lottery with quantum suicide.
It all adds up to normality
You know, not every belief adds up to normality—just the true ones. Imagine someone arguing you had misinterpreted happiness-maximization because “it all adds up to normality”.
You know, I’m actually not.