E.g. “maybe you’re in an asylum” assumes that it’s possible for an asylum to “exist” and for someone to be in it, both of which are meaningless under my worldview.
What do you mean by “reality”? You keep using words that are meaningless under my worldview without bothering to define them.
You’re adding a feature to your model that doesn’t change what it predicts but makes it less computationally efficient.
The fact you’re saying “both of which are meaningless under my worldview” is damning evidence that your model (or at least your current implementation of your model) sucks, because that message transmits useful information to someone using my model but apparently has no meaning in your model. Ipso facto, my model is better. There’s no coherent excuse for this.
This isn’t relevant to the truth of verificationism, though. My argument against realism is that it’s not even coherent. If it makes your model prettier, go ahead and use it.
What does it mean for your model to be “true”? There are infinitely many unique models which will predict all evidence you will ever receive; I established this earlier and you never responded.
It’s not about making my model “prettier”; my model is literally better at bringing about the outcomes I want. That is the correct dimension on which to evaluate a model.
You’ll just run into trouble if you try doing e.g. quantum physics and insist on realism—you’ll do things like assert there must be loopholes in Bell’s theorem, and search for them and never find them.
My preferred interpretation of quantum physics (many worlds) was formulated before Bell’s theorem, and it turns out that Bell’s theorem is actually strong evidence in favor of many worlds. Bell’s theorem does not “disprove realism”; it just rules out local hidden variable theories. My interpretation already predicted that.
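For concreteness, here’s the standard CHSH statement of what Bell’s theorem actually constrains (textbook form; nothing here is specific to this exchange):

```latex
% CHSH form of Bell's theorem: any local hidden-variable theory obeys
\[
  |S| \;=\; \bigl|\, E(a,b) - E(a,b') + E(a',b) + E(a',b') \,\bigr| \;\le\; 2,
\]
% while quantum mechanics allows correlations up to the Tsirelson bound
\[
  |S| \;\le\; 2\sqrt{2},
\]
% and experiments observe violations of the classical bound of 2. What gets
% excluded is the conjunction of locality and hidden variables, not "realism"
% as such.
```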
I suspect this isn’t going anywhere, so I’m bowing out.
>The fact you’re saying “both of which are meaningless under my worldview” is damning evidence that your model (or at least your current implementation of your model) sucks, because that message transmits useful information to someone using my model but apparently has no meaning in your model.
I don’t think that message conveys useful information in the context of this argument, to anyone. I can model regular delusions just fine—what I can’t model is a delusion that gives one an appearance of having experiences while no experiences were in fact had. Saying “delusion” doesn’t clear up what you mean.
Saying “(True ^ True) = False” also doesn’t convey information. I don’t know what is meant by a world in which that holds, and I don’t think you know either. Being able to say the words doesn’t make it coherent.
You went to some severe edge cases here—not just simulation, but simulation that also somehow affects logical truths or creates a false appearance of experience. Those don’t seem like powers even an omnipotent being would possess, so I’m skeptical that those are meaningful, even if I was wrong about verificationism in general.
For more ordinary delusions or simulations, I can interpret that language in terms of expected experiences.
>What does it mean for your model to be “true”?
Nothing, and this is precisely my point. Verificationism is a criterion of meaning, not part of my model. The meaning of “verificationism is true” is just that all statements that verificationism says are incoherent are in fact incoherent.
>There are infinitely many unique models which will predict all evidence you will ever receive; I established this earlier and you never responded.
I didn’t respond because I agree. All models are wrong; some models are useful. Use Solomonoff induction to weight various models to predict the future, without asserting that any of those models are “reality”. Solomonoff induction doesn’t even have a way to mark a model as “real”; that’s just completely out of scope.
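If a concrete sketch helps, here’s a toy, finite version of the weighting I mean. Real Solomonoff induction is uncomputable, and every model, description length, and bit of history below is invented purely for illustration.

```python
# Toy, finite sketch of Solomonoff-style model weighting (NOT the real,
# uncomputable thing). Each candidate model gets a simplicity prior of
# 2^(-description_length), is reweighted by how well it predicted the
# observations so far, and prediction is done by the weighted mixture.
# No model in the mixture is ever marked as "reality".

from dataclasses import dataclass
from typing import Callable, List


@dataclass
class Model:
    name: str
    description_length: int                 # stand-in for program length, in bits
    predict: Callable[[List[int]], float]   # P(next bit = 1 | history)


def posterior_weights(models: List[Model], history: List[int]) -> List[float]:
    """Simplicity prior times the likelihood each model assigned to the history."""
    weights = []
    for m in models:
        w = 2.0 ** -m.description_length
        for i, bit in enumerate(history):
            p_one = m.predict(history[:i])
            w *= p_one if bit == 1 else (1.0 - p_one)
        weights.append(w)
    total = sum(weights)
    return [w / total for w in weights]


def mixture_prediction(models: List[Model], history: List[int]) -> float:
    """P(next bit = 1) under the weighted mixture of all candidate models."""
    weights = posterior_weights(models, history)
    return sum(w * m.predict(history) for w, m in zip(weights, models))


if __name__ == "__main__":
    models = [
        Model("always-one", 5, lambda h: 0.99),
        Model("fair-coin", 3, lambda h: 0.5),
        Model("alternating", 8, lambda h: 0.01 if (h and h[-1] == 1) else 0.99),
    ]
    history = [1, 0, 1, 0, 1, 0]
    print(posterior_weights(models, history))   # "alternating" ends up dominant
    print(mixture_prediction(models, history))  # mixture's bet on the next bit
```

The point of the sketch is just that everything runs on the weighted mixture; nothing in it ever needs to designate one model as “real”.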
I’m not particularly convinced by your claim that one should believe in untrue or incoherent things if it helps them be more productive, and I’m not interested in debating that. If you have a counter-argument to anything I’ve said, or a reason to think ontological statements are coherent, I’m interested in that. But a mere assertion that talking about these incoherent things boosts productivity isn’t interesting to me now.