This is false. I actually have no idea what it would mean for an experience to be a delusion—I don’t think that’s even a meaningful statement.
I’m comfortable with the Cartesian argument that allows me to know that I am experiencing things.
Everything you’re thinking is compatible with a situation in which you’re actually in a simulation hosted in some entirely alien reality (2 + 2 = 3, experience is meaningless, causes come after their effects, (True ^ True) = False, etc.), which is being manipulated in extremely contrived ways that produce your exact current thought processes.
There are an exhausting number of different riffs on this idea- maybe you’re in an asylum, and all of your thinking, including “I actually have no idea what it would mean for an experience to be a delusion”, is due to some major mental disorder. Oh, how obvious- my idea of experience was a crazy delusion all along. I can’t believe I said that it was my daughter’s arm. “I think therefore I am”? Absurd!
If you have an argument against this problem, I am especially interested in hearing it- it seems like the fact that you can’t tell the difference between this situation and reality (and, as a result, you can’t know whether this situation is impossible, etc.) is part of the construction of the scenario. You’d need to show that the whole idea that “We can construct situations in which you’re having exactly the same thoughts as you are right now, but with some arbitrary change (which you don’t even need to believe is theoretically possible or coherent) in the background” is invalid.
Do I think this is a practical concern? Of course not. The Cartesian argument isn’t sufficient to convince me, though- I’m just assuming that I really exist and things are broadly as they seem. I don’t think it’s that plausible to expect that I would be able to derive these assumptions without using them- there is no epistemological rock bottom.
On the contrary, it’s the naive realist model that doesn’t pay rent by not making any predictions at all different from my simpler model.
Your model is (I allege) not actually simpler. It just seems simpler because you “removed something” from it. A mind could be much “simpler” than ours but also less useful- and usefulness is the actual point of having a simpler model. The “simplest” model which accurately predicts everything we see is going to be a fundamental physical theory, but making accurate predictions about complicated macroscopic behavior entirely from first principles is not tractable with eight billion human brains’ worth of hardware.
The real question of importance is, does operating on a framework which takes specific regular notice of the idea that naïve realism is technically a floating belief increase your productivity in the real world? I can’t see why that would be the case- it requires occasionally spending my scarce brainpower on reformatting my basic experience of the world in more complicated terms, I have to think about whether or not I should argue with someone whenever they bring up the idea of naïve realism, etc. You claim adopting the “simpler” model doesn’t change your predictions, so I don’t see what justifies these costs. Are there some major hidden costs of naïve realism that I’m not aware of? Am I actually wasting more unconscious brainpower working with the idea of “reality” and things “really existing”?
If I have to choose between two models which make the exact same predictions (i.e. my current model and your model), I’m going to choose the model which is better at achieving my goals. In practice, this is the more computationally efficient model, which (I allege) is my current model.
>Everything you’re thinking is compatible with a situation in which you’re actually in a simulation hosted in some entirely alien reality (2 + 2 = 3, experience is meaningless, causes come after their effects, (True ^ True) = False, etc.), which is being manipulated in extremely contrived ways that produce your exact current thought processes.
I disagree, and see no reason to agree. You have not fully specified this situation, and have offered no argument for why it is coherent. Given that it is obviously self-contradictory (at least the part about logic), why should I accept it?
>If you have an argument against this problem, I am especially interested in hearing it
The problem is that you’re assuming that verificationism is false in arguing against it, which is impermissible. E.g. “maybe you’re in an asylum” assumes that it’s possible for an asylum to “exist” and for someone to be in it, both of which are meaningless under my worldview.
Same for any other way to cash out “it’s all a delusion”—you need to stipulate unverifiable entities in order to even define delusion.
Now, this is distinct from the question of whether I should have 100% credence in claims such as 2+2=4 or “I am currently having an experience”. I can have uncertainty as to such claims without allowing for them to be meaningfully false. I’m not 100% certain that verificationism is valid.
>It seems like the fact that you can’t tell the difference between this situation and reality
What do you mean by “reality”? You keep using words that are meaningless under my worldview without bothering to define them.
>The real question of importance is, does operating on a framework which takes specific regular notice of the idea that naïve realism is technically a floating belief increase your productivity in the real world?
This isn’t relevant to the truth of verificationism, though. My argument against realism is that it’s not even coherent. If it makes your model prettier, go ahead and use it. You’ll just run into trouble if you try doing e.g. quantum physics and insist on realism—you’ll do things like assert there must be loopholes in Bell’s theorem, and search for them and never find them.
>E.g. “maybe you’re in an asylum” assumes that it’s possible for an asylum to “exist” and for someone to be in it, both of which are meaningless under my worldview.
>What do you mean by “reality”? You keep using words that are meaningless under my worldview without bothering to define them.
You’re adding a feature to your model which doesn’t change what it predicts but makes it less computationally efficient.
The fact you’re saying “both of which are meaningless under my worldview” is damning evidence that your model (or at least your current implementation of your model) sucks, because that message transmits useful information to someone using my model but apparently has no meaning in your model. Ipso facto, my model is better. There’s no coherent excuse for this.
>This isn’t relevant to the truth of verificationism, though. My argument against realism is that it’s not even coherent. If it makes your model prettier, go ahead and use it.
What does it mean for your model to be “true”? There are infinitely many unique models which will predict all evidence you will ever receive- I established this earlier and you never responded.
It’s not about making my model “prettier”- my model is literally better at producing the outcomes that I want. That is the correct dimension on which to evaluate a model.
>You’ll just run into trouble if you try doing e.g. quantum physics and insist on realism—you’ll do things like assert there must be loopholes in Bell’s theorem, and search for them and never find them.
My preferred interpretation of quantum physics (many worlds) was formulated before Bell’s theorem, and it turns out that Bell’s theorem is actually strong evidence in favor of many worlds. Bell’s theorem does not “disprove realism”; it just rules out local hidden variable theories. My interpretation already predicted that.
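For concreteness, here is the textbook CHSH form of Bell’s theorem (just a sketch of the standard statement, nothing specific to either of our positions), where E(a, b) denotes the correlation of the ±1 outcomes at detector settings a and b:

```latex
% CHSH form of Bell's inequality (standard textbook statement).
% E(a,b): correlation of the +/-1 outcomes at detector settings a and b.
\begin{align}
  S   &= E(a,b) - E(a,b') + E(a',b) + E(a',b') \\
  |S| &\le 2          && \text{any local hidden variable theory} \\
  |S| &\le 2\sqrt{2}  && \text{quantum mechanics (Tsirelson's bound)}
\end{align}
```

The experimentally observed violations of the first bound are what rule out local hidden variables; nonlocal hidden variable theories, and interpretations with no hidden variables at all (such as many worlds), are untouched by the theorem.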
I suspect this isn’t going anywhere, so I’m bowing out.
>The fact you’re saying “both of which are meaningless under my worldview” is damning evidence that your model (or at least your current implementation of your model) sucks, because that message transmits useful information to someone using my model but apparently has no meaning in your model.
I don’t think that message conveys useful information in the context of this argument, to anyone. I can model regular delusions just fine—what I can’t model is a delusion that gives one an appearance of having experiences while no experiences were in fact had. Saying “delusion” doesn’t clear up what you mean.
Saying “(True ^ True) = False” also doesn’t convey information. I don’t know what is meant by a world in which that holds, and I don’t think you know either. Being able to say the words doesn’t make it coherent.
You went to some severe edge cases here—not just simulation, but simulation that also somehow affects logical truths or creates a false appearance of experience. Those don’t seem like powers even an omnipotent being would possess, so I’m skeptical that those are meaningful, even if I were wrong about verificationism in general.
For more ordinary delusions or simulations, I can interpret that language in terms of expected experiences.
>What does it mean for your model to be “true”?
Nothing, and this is precisely my point. Verificationism is a criterion of meaning, not part of my model. The meaning of “verificationism is true” is just that all statements that verificationism says are incoherent are in fact incoherent.
>There are infinitely many unique models which will predict all evidence you will ever receive- I established this earlier and you never responded.
I didn’t respond because I agree. All models are wrong, some models are useful. Use Solomonoff induction to weight the various models when predicting the future, without asserting that any of them is “reality”. Solomonoff induction doesn’t even have a way to mark a model as “real”; that’s just completely out of scope.
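Since “use Solomonoff induction to weight the various models” is doing a lot of work in that sentence, here is a toy sketch of the shape of that procedure. To be clear about what’s invented: the real Solomonoff mixture runs over all programs and is uncomputable; the hypotheses and “description lengths” below are hand-picked stand-ins, there only to show the pattern of weighting by simplicity, dropping what the data falsifies, and predicting with whatever mixture survives, without ever tagging a survivor as “reality”.

```python
# Toy sketch of Solomonoff-style prediction over a tiny, hand-picked hypothesis
# set (the real mixture runs over all programs and is uncomputable).
# Each hypothesis gets prior weight 2^-length; deterministic hypotheses that
# contradict the data are dropped; prediction uses the surviving mixture.
# No hypothesis is ever marked as "real".

from typing import Callable, Sequence

# A hypothesis maps a history of bits to its predicted next bit.
Hypothesis = Callable[[Sequence[int]], int]

HYPOTHESES: list[tuple[str, int, Hypothesis]] = [
    # (name, "description length" in bits -- invented for illustration, predictor)
    ("all zeros",      2, lambda h: 0),
    ("all ones",       2, lambda h: 1),
    ("alternating 01", 4, lambda h: len(h) % 2),
    ("repeat 110",     6, lambda h: [1, 1, 0][len(h) % 3]),
]


def posterior_weights(data: Sequence[int]) -> dict[str, float]:
    """Prior 2^-length per hypothesis, zeroed out if the data falsifies it."""
    weights: dict[str, float] = {}
    for name, length, predict in HYPOTHESES:
        weight = 2.0 ** -length
        for i, bit in enumerate(data):
            if predict(data[:i]) != bit:  # deterministic hypothesis contradicted
                weight = 0.0
                break
        weights[name] = weight
    total = sum(weights.values())
    return {name: w / total for name, w in weights.items()} if total else weights


def predict_next(data: Sequence[int]) -> float:
    """Probability that the next bit is 1 under the surviving weighted mixture."""
    weights = posterior_weights(data)
    return sum(
        w
        for (_name, _length, predict), w in zip(HYPOTHESES, weights.values())
        if predict(data) == 1
    )


if __name__ == "__main__":
    observed = [1, 1, 0, 1, 1, 0]
    print(posterior_weights(observed))  # only "repeat 110" survives this data
    print(predict_next(observed))       # 1.0: the mixture predicts the next bit is 1
```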
I’m not particularly convinced by your claim that one should believe in untrue or incoherent things if it helps them be more productive, and I’m not interested in debating that. If you have a counter-argument to anything I’ve said, or a reason to think ontological statements are coherent, I’m interested in that. But a mere assertion that talking about these incoherent things boosts productivity isn’t interesting to me now.