I’m beginning to warm to the idea that the reason we have evolved to think in terms of counterfactuals and probabilities is that these are fundamental at the quantum level. Normally I’m suspicious of rooting macro-level claims in quantum-level effects, because at such a high level of abstraction those effects could easily wash out, but the many-worlds hypothesis is something that wouldn’t wash out. Otherwise it would all seem a bit too much of a coincidence.
(“Oh, so you believe that counterfactuals and probability are at least partly a human construct, but they just so happen to correspond with what seems to us to be the fundamental level of physics, not because there is a relation there, but because of pure happenstance. Seems a bit of a stretch.”)
I expect that agents evolved in a purely deterministic but similarly complex world would be no less likely to (eventually) construct counterfactuals and probabilities than those in a quantum sort of universe. Far more likely to develop counterfactuals first, since it seems that agents on the level of dogs can imagine counterfactuals at least in the weak sense of “an expected event that didn’t actually happen”. Human-level counterfactual models are certainly more complex than that, but I don’t think they’re qualitatively different.
I think if there’s any evolutionary pressure toward the ability to predict the environment, and the environment has a range of salient features that vary in complexity, there will be some agents that can model and predict the environment better than others, regardless of whether that environment is fundamentally deterministic or not. In cases where evolution leads to sufficiently complex prediction, I think it will inevitably lead to some sort of counterfactuals.
The simplest predictive model can only be applied to sensory data directly. The agent gains a sense of what to expect next, and how much that differed from what actually happened. This can be used to update the model. This isn’t technically a counterfactual, but only through a quirk of language. In everything but name “what to expect next” is at least some weak form of counterfactual. It’s a model of an event that hasn’t happened and might not happen. But still, let’s just rule it out arbitrarily and continue on.
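To make that first layer concrete, here is a minimal, purely illustrative sketch in Python (the class name, learning rate, and toy environment are all invented for the example): a predictor that holds an expectation about the next sensory value, compares it with what actually arrives, and uses the error to nudge the expectation.

```python
import random

class SimplePredictor:
    """Minimal online predictor applied directly to sensory data:
    keeps an expectation of the next value and nudges it by a
    fraction of each prediction error."""

    def __init__(self, learning_rate=0.1):
        self.learning_rate = learning_rate
        self.expectation = 0.0                  # "what to expect next"

    def observe(self, actual):
        error = actual - self.expectation       # how far off the expectation was
        self.expectation += self.learning_rate * error
        return error

# A noisy but regular "environment": readings scattered around 5.0.
agent = SimplePredictor()
for _ in range(200):
    agent.observe(5.0 + random.gauss(0, 1))

print(round(agent.expectation, 2))              # should settle near 5.0
```

This is just an exponentially weighted running average, which is about the simplest possible “expectation vs. outcome” learner.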
The next step is probably the ability to apply the same predictive model to memory as well. For a model that changes over time, this means an agent can remember what they experienced and what they expected, and compare both with what they would now expect to have happened in those circumstances. This is definitely a counterfactual. It might not be conscious, but it is a model of something in the past that never happened. It opens up the ability to update the model from a large store of highly salient remembered data, instead of only the comparative trickle of new salient data that comes in over time.
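Continuing the illustrative sketch, with the same caveats (the one-parameter linear model and the replay schedule are assumptions made for the example, not a claim about how real brains do it), the memory step might look like this: stored experiences are replayed against the current model, so the agent compares what actually happened with what it would now expect to have happened in those circumstances.

```python
import random

class ReplayPredictor:
    """Second layer of the sketch: the same predictive machinery, but
    also applied to remembered (context, outcome) pairs, so stored
    experience can keep updating the current model."""

    def __init__(self, learning_rate=0.1):
        self.learning_rate = learning_rate
        self.weight = 0.0    # one-parameter model: expected outcome = weight * context
        self.memory = []     # stored past experiences

    def predict(self, context):
        return self.weight * context

    def _update(self, context, outcome):
        error = outcome - self.predict(context)
        self.weight += self.learning_rate * error * context

    def observe(self, context, outcome):
        self.memory.append((context, outcome))  # remember what happened
        self._update(context, outcome)          # and learn from it right away

    def replay(self, n=50):
        """Counterfactual pass over memory: compare what happened with what
        the *current* model would expect to have happened back then."""
        for context, outcome in random.sample(self.memory, min(n, len(self.memory))):
            self._update(context, outcome)

agent = ReplayPredictor()
for _ in range(100):
    c = random.uniform(0.5, 1.5)
    agent.observe(c, 2.0 * c + random.gauss(0, 0.1))  # underlying relation: outcome ≈ 2 × context
agent.replay()                  # extra learning from stored data, no new input needed
print(round(agent.weight, 2))   # should end up close to 2.0
```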
There are still higher strengths and complexities of counterfactuals of course, but it seems to me that these are all based on the basic mechanism of a predictive model applied to different types of data.
None of this needs any reference to quantum mechanics, and nor does probability. All it needs is a universe too complex to be comprehended in its entirety, and agents that are capable of learning to imperfectly model parts of it that are relevant to themselves.
“I expect that agents evolved in a purely deterministic but similarly complex world would be no less likely to (eventually) construct counterfactuals and probabilities than those in a quantum sort of universe”
I’m actually trying to make a slightly unusual argument. My argument isn’t that we wouldn’t construct counterfactuals in a purely deterministic world operating similarly to ours. My argument involves:
a) Claiming that counterfactuals are at least partly constructed by humans (if you don’t understand why this might be reasonable, then it’ll be more of a challenge to understand the overall argument)
b) Claiming that it would be a massive coincidence if something partly constructed by humans happened to correspond with fundamental physical structures in a way that had nothing to do with those structures
c) Concluding that it’s likely that there is some as yet unspecified relation
Does this make sense?
To me the correspondence seems smaller, and therefore the coincidence less unlikely.
The many-worlds hypothesis assumes parallel worlds that obey exactly the same laws of physics. Anything can happen with astronomically tiny probability, but the vast majority of parallel worlds are just as boring as ours. The counterfactuals we imagine are not limited by the laws of physics.
Construction of counterfactuals is useful for reasoning under uncertainty. Quantum physics is a source of uncertainty, but there are also plenty of macroscopic sources of uncertainty (limited brain size, the second law of thermodynamics). If intelligent life evolved in a deterministic universe, I imagine it would also find counterfactual reasoning useful.
Yeah, that’s a reasonable position to take.
Not hugely. Quantum mechanics doesn’t have any counterfactuals in some interpretations. It has deterministic evolution of state (including entanglement), and then we interpret incomplete information about it as being probabilistic in nature. Just as we interpret incomplete information about everything else.
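For what it’s worth, the bare formalism can be split along exactly that line (this is just the textbook Schrödinger equation and Born rule for a time-independent Hamiltonian, stated here for illustration): the state itself evolves deterministically,

$$ i\hbar \frac{d}{dt}\lvert\psi(t)\rangle = \hat{H}\lvert\psi(t)\rangle \;\;\Rightarrow\;\; \lvert\psi(t)\rangle = e^{-i\hat{H}t/\hbar}\lvert\psi(0)\rangle, $$

and probability only appears when we relate that state to the outcomes we actually observe,

$$ p(a) = \bigl\lvert \langle a \mid \psi(t)\rangle \bigr\rvert^{2}. $$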
Hopefully one day I’ll get a chance to look further into quantum mechanics.
What correspondence? Counterfactuals-as-worlds have all laws of physics broken in them, including quantum mechanics.
I’m not claiming that there’s a perfect correspondence between counterfactuals as different worlds in a multiverse vs. decision counterfactuals. Although maybe that’s enough to undermine any coincidence right there?
I don’t see how there is anything here other than equivocation of different meanings of “world”. Counterfactuals-as-worlds is not even a particularly convincing way of making sense of what counterfactuals are.
If you’re interpreting me as defending something along the lines of David Lewis, then that’s actually not what I’m doing.
Says who?