>The fact you’re saying “both of which are meaningless under my worldview” is damning evidence that your model (or at least your current implementation of your model) sucks, because that message transmits useful information to someone using my model but apparently has no meaning in your model.
I don’t think that message conveys useful information in the context of this argument, to anyone. I can model regular delusions just fine—what I can’t model is a delusion that gives one an appearance of having experiences while no experiences were in fact had. Saying “delusion” doesn’t clear up what you mean.
Saying “(True ^ True) = False” also doesn’t convey information. I don’t know what is meant by a world in which that holds, and I don’t think you know either. Being able to say the words doesn’t make it coherent.
You went to some severe edge cases here—not just simulation, but simulation that also somehow affects logical truths or creates a false appearance of experience. Those don’t seem like powers even an omnipotent being would possess, so I’m skeptical that those are meaningful, even if I were wrong about verificationism in general.
For more ordinary delusions or simulations, I can interpret that language in terms of expected experiences.
>What does it mean for your model to be “true”?
Nothing, and this is precisely my point. Verificationism is a criterion of meaning, not part of my model. The meaning of “verificationism is true” is just that all statements that verificationism says are incoherent are in fact incoherent.
>There are infinitely many unique models which will predict all evidence you will ever receive- I established this earlier and you never responded.
I didn’t respond because I agree. All models are wrong; some models are useful. Use Solomonoff induction to weight the various models for predicting the future, without asserting that any of those models is “reality”. Solomonoff induction doesn’t even have a way to mark a model as “real”—that’s just completely out of scope.
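To make the point concrete, here is a toy sketch of the kind of weighting I mean. Real Solomonoff induction is uncomputable; this illustration uses a small hand-picked hypothesis set with made-up complexities (all names and numbers here are my own hypothetical choices, not anything canonical). Note that the mixture only ever produces predictions—nothing in it marks any hypothesis as “real”.

```python
# Toy Solomonoff-style mixture over a finite hypothesis set.
# Each hypothesis is (complexity_in_bits, predictor), where predictor(history)
# returns a probability distribution {0: p0, 1: p1} over the next bit.

def predict_next(history, hypotheses):
    """Posterior-weighted probability that the next bit is 1.

    Each hypothesis starts with a simplicity prior of 2^-complexity and is
    reweighted by how well it predicted every bit of the history so far.
    """
    weights = {}
    for name, (complexity, predict) in hypotheses.items():
        w = 2.0 ** -complexity                 # simplicity prior
        for i, bit in enumerate(history):
            w *= predict(history[:i])[bit]     # likelihood of each observed bit
        weights[name] = w
    total = sum(weights.values())
    return sum(w * hypotheses[name][1](history)[1]
               for name, w in weights.items()) / total

# Two toy models of a bit stream (both assigned complexity 1 for illustration):
hypotheses = {
    "mostly-ones": (1, lambda h: {0: 0.1, 1: 0.9}),
    "fair-coin":   (1, lambda h: {0: 0.5, 1: 0.5}),
}

# After observing four 1s, the mixture leans toward "mostly-ones"—but it is
# only reweighting predictors, never declaring one of them to be reality.
p = predict_next([1, 1, 1, 1], hypotheses)
```

With an empty history the mixture just averages the two priors; as evidence accumulates, the better predictor dominates the forecast without ever being labeled “true”.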
I’m not particularly convinced by your claim that one should believe untrue or incoherent things if doing so makes one more productive, and I’m not interested in debating that. If you have a counter-argument to anything I’ve said, or a reason to think ontological statements are coherent, I’m interested in that. But a mere assertion that talking about these incoherent things boosts productivity isn’t interesting to me right now.