Well, I’ll give it another go, despite someone diligently downvoting all my related comments.
It’s not me, FWIW; I find the discussion interesting.
That said, I’m not sure what methodology you use to determine which actions to take, given your statement that “the ‘real world’ is just the most accurate model”. If all you cared about was the accuracy of your model, would it not be easier to avoid taking any physical actions, and simply change your model on the fly as it suits you? That way, you could always make your model fit what you observe. Yes, you’d be grossly overfitting the data, but is that even a problem?
I didn’t say it’s all I care about. Given a choice of several models and an ability to make one of them more accurate than the rest, I would likely exercise this choice, depending on my preferences, the effort required and the odds of success, just like your garden variety realist would. As Eliezer used to emphasize, “it all adds up to normality”.
I am guessing that you, TimS, and nyan_sandwich all think that my version of instrumentalism is incompatible with having preferences over possible worlds. I have trouble understanding where this twist is coming from.
It’s not that I think that your version of instrumentalism is incompatible with preferences, it’s more like I’m not sure I understand what the word “preferences” even means in your context. You say “possible worlds”, but, as far as I can tell, you mean something like, “possible models that predict future inputs”.
Firstly, I’m not even sure how you account for our actions affecting these inputs, especially given that you do not believe that various sets of inputs are connected to each other in any way; and without actions, preferences are not terribly relevant. Secondly, you said that a “preference” for you means something like “a desire to make one model more accurate than the rest”, but would it not be easier to simply instantiate a model that fits the inputs? Such a model would be 100% accurate, wouldn’t it?
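The “model that simply fits the inputs” objection can be made concrete with a toy sketch (all names and numbers here are hypothetical illustrations, not anyone’s actual position): a lookup table that memorizes every past observation is 100% accurate in hindsight, yet it makes no claim at all about the next input, which is what distinguishes it from a predictive model.

```python
# A "model" that merely memorizes past observations: perfectly
# accurate on everything already seen, useless for prediction.
past_inputs = [1, 2, 4, 8, 16]  # made-up observation sequence

memorizer = dict(enumerate(past_inputs))  # time step -> observed value

# 100% accuracy on the data it was fitted to...
assert all(memorizer[t] == x for t, x in enumerate(past_inputs))

# ...but it has no opinion at all about the next time step.
print(memorizer.get(len(past_inputs), "no prediction"))  # -> no prediction

# A predictive model, by contrast, commits to a claim before the data arrives.
def doubling_model(t):
    return 2 ** t

print(doubling_model(len(past_inputs)))  # -> 32, a testable prediction
```

On this sketch, the difference between “fitting the inputs” and “being accurate” is that only the second kind of model can be confirmed or refuted by inputs it has not yet seen.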
Your having a preference for worlds without, e.g., slavery can’t possibly translate into something like “I want to change the world external to me so that it no longer contains slaves”. I have trouble understanding what it would translate to. You could adopt models where things you don’t like don’t exist, but they wouldn’t be accurate.
Your having a preference for worlds without, e.g., slavery can’t possibly translate into something like “I want to change the world external to me so that it no longer contains slaves”.
No, but it translates to its equivalent:
I prefer models which describe a society without slavery to be accurate (i.e., confirmed in later testing).
I prefer models which describe a society without slavery to be accurate (i.e., confirmed in later testing).
So you’re saying you have a preference over the map, as opposed to the territory (your experiences, in this case).
That sounds subject to some standard pitfalls, offhand, where you try to fool yourself into choosing the “no-slaves” map instead of optimizing, well, reality (the slaves themselves): perhaps with an experience machine, through simple self-deception, or maybe some sort of exploit involving Occam’s Razor.
I agree that self-deception is a “real” possibility. Then again, it is also a possibility for a realist. Or a dualist. In fact, confusing map and territory is one of the most common pitfalls, as you well know. Would it be more likely for an instrumentalist to become instrumenta-lost? I don’t see why it would be. For example, from my point of view, you arbitrarily chose a comforting Christian map (is it an inverse of “some sort of exploit involving Occam’s Razor”?) instead of a cold, hard, uncaring one, even though you seem to prefer realism over instrumentalism.
Ah, no, sorry, I meant that those options would satisfy your stated preferences, not that they were pitfalls on the road to them. I’m suggesting that since you don’t want to fall into those pitfalls, those aren’t actually your preferences, whether because you’ve made a mistake or I have (please tell me if I have).
I propose a WW2 mechanical aiming computer as an example of a model. It is built from gears that can be easily and conveniently manufactured, and there is very little doubt that the universe does not use anything even remotely similar to produce the movement of a projectile through the air, even if we assume that such a question is meaningful.
A case can be made that physics is not that much different from a WW2 aiming computer (built out of the mathematics that is available and can be conveniently used). And with regard to MWI, a case can be made that it is similar to removing the only ratchet in the mechanical computer and proclaiming the rest of the gears to be reality, because somehow “from the inside” it would allegedly still feel the same, even though without this ratchet the mechanical computer no longer works for predicting anything.
Of course, it is not clear how close physics is to a mechanical aiming computer in terms of how closely its internals can correspond to the real world.
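The aiming-computer analogy can be made concrete with a toy sketch (assumptions: flat-earth gravity, no air resistance, and illustrative numbers of my own choosing): a few lines of Newtonian arithmetic predict where a projectile lands, while presumably bearing no resemblance to whatever the universe “actually does” to move the shell.

```python
import math

# Toy ballistic predictor in the spirit of a WW2 aiming computer:
# constant gravity, no drag, purely illustrative.
G = 9.81  # gravitational acceleration, m/s^2

def landing_distance(speed, angle_deg):
    """Horizontal range of a projectile launched from ground level."""
    angle = math.radians(angle_deg)
    return speed ** 2 * math.sin(2 * angle) / G

# The formula predicts observations well within its domain, yet the
# universe presumably does not evaluate sin() to fly shells around,
# any more than it spins the gears of a mechanical computer.
print(round(landing_distance(100.0, 45.0), 1))  # -> 1019.4
```

The point of the sketch is only that predictive adequacy and structural correspondence come apart: the gears (or the sine function) earn their keep by matching inputs, not by mirroring the territory’s internals.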
Given a choice of several models and an ability to make one of them more accurate than the rest, I would likely exercise this choice, depending on my preferences, the effort required and the odds of success.

Would you do so even if picking another model required less effort? I’m not sure how you could justify the extra work.
I prefer models which describe a society without slavery to be accurate (i.e., confirmed in later testing).

And how do you arrange that?