>Everything you’re thinking is compatible with a situation in which you’re actually in a simulation hosted in some entirely alien reality (2 + 2 = 3, experience is meaningless, causes follow after effects, (True ^ True) = False, etc.), which is being manipulated in extremely contrived ways which produce your exact current thought processes.
I disagree, and see no reason to agree. You have not fully specified this situation, and have offered no argument for why it is coherent. Given that it is obviously self-contradictory (at least the part about logic), why should I accept it?
>If you have an argument against this problem, I am especially interested in hearing it
The problem is that you’re assuming that verificationism is false in arguing against it, which is impermissible. E.g. “maybe you’re in an asylum” assumes that it’s possible for an asylum to “exist” and for someone to be in it, both of which are meaningless under my worldview.
Same for any other way to cash out “it’s all a delusion”—you need to stipulate unverifiable entities in order to even define delusion.
Now, this is distinct from the question of whether I should have 100% credence in claims such as 2+2=4 or “I am currently having an experience”. I can have uncertainty as to such claims without allowing for them to be meaningfully false. I’m not 100% certain that verificationism is valid.
>It seems like the fact you can’t tell between this situation and reality
What do you mean by “reality”? You keep using words that are meaningless under my worldview without bothering to define them.
>The real question of importance is, does operating on a framework which takes specific regular notice of the idea that naïve realism is technically a floating belief increase your productivity in the real world?
This isn’t relevant to the truth of verificationism, though. My argument against realism is that it’s not even coherent. If it makes your model prettier, go ahead and use it. You’ll just run into trouble if you try doing e.g. quantum physics and insist on realism—you’ll do things like assert there must be loopholes in Bell’s theorem, and search for them and never find them.
>E.g. “maybe you’re in an asylum” assumes that it’s possible for an asylum to “exist” and for someone to be in it, both of which are meaningless under my worldview.
>What do you mean by “reality”? You keep using words that are meaningless under my worldview without bothering to define them.
You’re adding a feature to your model that doesn’t change what it predicts but makes it less computationally efficient.
The fact that you’re saying “both of which are meaningless under my worldview” is damning evidence that your model (or at least your current implementation of it) sucks, because that message transmits useful information to someone using my model but apparently has no meaning in yours. Ipso facto, my model is better. There’s no coherent excuse for this.
>This isn’t relevant to the truth of verificationism, though. My argument against realism is that it’s not even coherent. If it makes your model prettier, go ahead and use it.
What does it mean for your model to be “true”? There are infinitely many unique models which will predict all evidence you will ever receive; I established this earlier and you never responded.
It’s not about making my model “prettier”; my model is simply better at producing the outcomes I want to produce. This is the correct dimension on which to evaluate your model.
>You’ll just run into trouble if you try doing e.g. quantum physics and insist on realism—you’ll do things like assert there must be loopholes in Bell’s theorem, and search for them and never find them.
My preferred interpretation of quantum physics (many worlds) was formulated before Bell’s theorem, and it turns out that Bell’s theorem is actually strong evidence in favor of many worlds. Bell’s theorem does not “disprove realism”; it only rules out local hidden-variable theories. My interpretation already predicted that.
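To make concrete what Bell’s theorem does and doesn’t rule out, here is a minimal sketch of the standard CHSH setup (the singlet correlation formula and the measurement angles are textbook values, not anything specific to either of our positions): any local hidden-variable theory must satisfy |S| ≤ 2, while quantum mechanics predicts 2√2.

```python
import numpy as np

# CHSH form of Bell's theorem. For a spin singlet measured along directions
# separated by angle theta, quantum mechanics predicts correlation E = -cos(theta).
# Any local hidden-variable theory obeys |S| <= 2 for
#   S = E(a, b) - E(a, b') + E(a', b) + E(a', b').

def E(x, y):
    """Singlet-state correlation for measurement angles x and y."""
    return -np.cos(x - y)

a, a_prime = 0.0, np.pi / 2            # Alice's two settings
b, b_prime = np.pi / 4, 3 * np.pi / 4  # Bob's two settings (chosen for maximal violation)

S = E(a, b) - E(a, b_prime) + E(a_prime, b) + E(a_prime, b_prime)
print(f"|S| = {abs(S):.3f}")  # ~2.828 = 2*sqrt(2) > 2, so local hidden variables are excluded
```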
I suspect this isn’t going anywhere, so I’m bowing out.
>The fact that you’re saying “both of which are meaningless under my worldview” is damning evidence that your model (or at least your current implementation of it) sucks, because that message transmits useful information to someone using my model but apparently has no meaning in yours.
I don’t think that message conveys useful information in the context of this argument, to anyone. I can model regular delusions just fine—what I can’t model is a delusion that gives one an appearance of having experiences while no experiences were in fact had. Saying “delusion” doesn’t clear up what you mean.
Saying “(True ^ True) = False” also doesn’t convey information. I don’t know what is meant by a world in which that holds, and I don’t think you know either. Being able to say the words doesn’t make it coherent.
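Purely to make that point concrete, here is a trivial check (the snippet adds nothing beyond the definitions of the symbols themselves): “(True ^ True) = False” and “2 + 2 = 3” don’t describe an exotic world, they just contradict what “^”, “+” and “=” mean.

```python
# The truth table for conjunction fixes (True and True) to True by definition,
# just as the definition of addition fixes 2 + 2 to 4. A "world" where these
# come out differently isn't an alternative fact; it's a redefinition of the symbols.
for p in (False, True):
    for q in (False, True):
        print(p, q, p and q)

assert (True and True) is True
assert 2 + 2 == 4
```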
You went to some extreme edge cases here—not just a simulation, but a simulation that also somehow affects logical truths or creates a false appearance of experience. Those don’t seem like powers even an omnipotent being would possess, so I’m skeptical that they are meaningful, even if I were wrong about verificationism in general.
For more ordinary delusions or simulations, I can interpret that language in terms of expected experiences.
>What does it mean for your model to be “true”?
Nothing, and this is precisely my point. Verificationism is a criterion of meaning, not part of my model. The meaning of “verificationism is true” is just that all statements that verificationism says are incoherent are in fact incoherent.
>There are infinitely many unique models which will predict all evidence you will ever receive; I established this earlier and you never responded.
I didn’t respond because I agree. All models are wrong; some models are useful. Use Solomonoff induction to weight the various models and predict the future, without asserting that any of those models are “reality”. Solomonoff induction doesn’t even have a way to mark a model as “real”; that’s just completely out of scope.
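For illustration, here is a toy sketch of that weighting scheme (not actual Solomonoff induction, which is uncomputable; the candidate models and their bit-costs below are invented for the example): weight each model by 2^(-complexity), drop models the data has falsified, and mix the survivors’ predictions.

```python
# Toy illustration of complexity-weighted prediction over a bit sequence.
# Each candidate model gets prior weight 2**(-complexity_in_bits); models
# contradicted by the history so far are dropped; predictions are mixed.

def predict_next(models, history):
    """models: list of (complexity_in_bits, predict_fn).
    predict_fn(history) returns the predicted next bit, or None if the
    model is inconsistent with the history."""
    weighted = []
    for bits, predict in models:
        guess = predict(history)
        if guess is not None:              # model still fits the data
            weighted.append((2.0 ** -bits, guess))
    total = sum(w for w, _ in weighted)
    # Probability that the next bit is 1 under the complexity-weighted mixture.
    return sum(w * g for w, g in weighted) / total

# Two made-up models: "always 1" (simple) and "alternate 0,1,0,1,..." (costlier).
always_one  = (3, lambda h: 1 if all(b == 1 for b in h) else None)
alternating = (5, lambda h: len(h) % 2 if h == [i % 2 for i in range(len(h))] else None)

print(predict_next([always_one, alternating], []))         # 0.8: the simpler model dominates a priori
print(predict_next([always_one, alternating], [1, 1, 1]))  # 1.0: the alternating model has been falsified
```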
I’m not particularly convinced by your claim that one should believe untrue or incoherent things if doing so makes one more productive, and I’m not interested in debating that. If you have a counter-argument to anything I’ve said, or a reason to think ontological statements are coherent, I’m interested in that. But a mere assertion that talking about these incoherent things boosts productivity isn’t interesting to me right now.