I’m not really disagreeing. I’m just pointing out that, as you list progressively more speculative models, more and more loosely connected to experiment, the idea of some objective reality becomes progressively less useful, and questions like “but what if the Boltzmann Brains/mathematical universe/many worlds/super-mega crossover/post-utopian colonial alienation is real?” become progressively more nonsensical.
Yet people forget that and seriously discuss questions like that, effectively counting angels on the head of a pin. And, on the other hand, they get this mental block due to the idea of some static objective reality out there, limiting their model space.
These two fallacies are what started me on my way from realism to pragmatism/instrumentalism in the first place.
the idea of some objective reality becomes progressively less useful
Useful for what? Prediction? But realists aren’t using these models to answer the “what input should I expect” question; they are answering other questions, like “what is real” and “what should we value”.
And “nothing” is an answer to “what is real”. What does instrumentalism predict?
If it’s really better or more “true” on some level, I suppose you might predict a superintelligence would self-modify into an anti-realist? Seems unlikely from my realist perspective, at least, so I’d have to update in favour of something.
If it’s really better or more “true” on some level
But if that’s not a predictive level, then instrumentalism is inconsistent. It is saying that all other non-predictive theories should be rejected for being non-predictive, but that it is itself somehow an exception. This is, of course, parallel to the flaw in Logical Positivism.
If I had such a persuasive argument, naturally it would already have persuaded me, but my point is that it doesn’t need to persuade people who already agree with it—just the rest of us.
And once you’ve self-modified into an instrumentalist, I guess there are other arguments that will now persuade you—for example, that this hypothetical underlying layer of “reality” has no extra predictive power (at least, I think that’s what shiminux finds persuasive.)
Well, I suppose all it would need to persuade is people who don’t already believe it …
More seriously, you’ll have to ask shiminux, because I, as a realist, anticipate this test failing, so naturally I can’t explain why it would succeed.
Huh? I don’t see why the ability to convince people who don’t care about consistency is something that should sway me.