So, what are those possible worlds but models? And isn’t the “real world” just the most accurate model? Properly modeling your actions lets you affect how accurate the preferred “world” model turns out to be. The remaining issue is whether the definition of “good” or “preferred” depends on a realist vs. instrumentalist outlook, and I don’t see how it would. Maybe you can clarify.
Interesting. So we prefer that certain models be accurate, and, within our current bag of models, we take actions that we expect to make that happen.
Ok I think I get it. I was confused about what the referent of your preferences would be if you did not have your models referring to something. I see that you have made the accuracy of various models the referent of preferences. This seems reasonable enough.
I can see now that I’m confused about this stuff a bit more than I thought I was. Will have to think about it a bit more.
It works fine—as long as you only care about optimizing inputs, in which case I invite you to go play in the holodeck while the rest of us optimize the real world.
If you can’t find a holodeck, I sure hope you don’t accidentally sacrifice your life to save somebody or further some noble cause. After all, you won’t be there to experience the resulting inputs, so what’s the point?
You are arguing with a strawman. It’s not a utility function over inputs, it’s over the accuracy of models.
If I were a shminux-style rationalist, I would not choose to go to the holodeck because that does not actually make my current preferred models of the world more accurate. It makes the situation worse, actually, because in the me-in-holodeck model, I get misled and can’t affect the stuff outside the holodeck.
Just because someone frames things differently doesn’t mean they have to make the obvious mistakes and start killing babies.
For example, I could do what you just did to “maximize expected utility over possible worlds” by choosing to modify my brain to have erroneously high expected utility. It’s maximized now right? See the problem with this argument?
It all adds up to normality, which probably means we are confused and there is an even simpler underlying model of the situation.
> You are arguing with a strawman.
You know, I’m actually not.
> It’s not a utility function over inputs, it’s over the accuracy of models.
Affecting the accuracy of a specified model—a term defined as “how well it predicts future inputs”—is a subset of optimizing future inputs.
> If I were a shminux-style rationalist, I would not choose to go to the holodeck because that does not actually make my current preferred models of the world more accurate. It makes the situation worse, actually, because in the me-in-holodeck model, I get misled and can’t affect the stuff outside the holodeck.
You’re still thinking like a realist. A holodeck doesn’t prevent you from observing the real world—there is no “real world”. It prevents you from testing how well certain models predict experiences when you take the action “leave the holodeck”, unless of course you leave the holodeck. It’s an opportunity cost and nothing more, and a minor one at that, since information holds only instrumental value.
> Just because someone frames things differently doesn’t mean they have to make the obvious mistakes and start killing babies.
Pardon?
> For example, I could do what you just did to “maximize expected utility over possible worlds” by choosing to modify my brain to have erroneously high expected utility. It’s maximized now right? See the problem with this argument?
Except that I (think that I) get my utility over the world, not over my experiences. Same reason I don’t win the lottery with quantum suicide.
> It all adds up to normality
You know, not every belief adds up to normality—just the true ones. Imagine someone arguing you had misinterpreted happiness-maximization because “it all adds up to normality”.
> Interesting. So we prefer that certain models be accurate, and, within our current bag of models, we take actions that we expect to make that happen.
> Ok I think I get it. I was confused about what the referent of your preferences would be if you did not have your models referring to something. I see that you have made the accuracy of various models the referent of preferences. This seems reasonable enough.
> I can see now that I’m confused about this stuff a bit more than I thought I was. Will have to think about it a bit more.
I like how you put it into some fancy language, and now it sounds almost profound.
It is entirely possible that I’m talking out of my ass here, and you will find a killer argument against this approach.
Likewise the converse. I reckon both will get killed by a proper approach.