It’s not that I think that your version of instrumentalism is incompatible with preferences, it’s more like I’m not sure I understand what the word “preferences” even means in your context. You say “possible worlds”, but, as far as I can tell, you mean something like, “possible models that predict future inputs”.
Firstly, I’m not even sure how you account for our actions affecting these inputs, especially given that you do not believe that various sets of inputs are connected to each other in any way; and without actions, preferences are not terribly relevant. Secondly, you said that a “preference” for you means something like, “a desire to make one model more accurate than the rest”, but would it not be easier to simply instantiate a model that fits the inputs? Such a model would be 100% accurate, wouldn’t it?