Speculation from The Nature of Counterfactuals
I decided to split out some content from the end of my post The Nature of Counterfactuals because upon reflection I don’t feel it is as high quality as the core of the post.
I finished The Nature of Counterfactuals by noting that I was incredibly unsure of how we should handle circular epistemology. That said, there are a few ideas I want to offer up on how to approach this. The big challenge with counterfactuals is not imagining other states the universe could be in, nor applying our “laws” of physics to discover the state of the universe at other points in time. Instead, the challenge comes when we want to construct a counterfactual representing someone making a different decision. After all, in a deterministic universe, someone could only have made a different choice if the universe had been different, but then it’s not clear why we should care that someone in a different universe would have achieved a particular score, given that we only care about this universe.
I believe the answer to this question will be roughly that in certain circumstances we only care about particular things. For example, let’s suppose Omega is programmed in such a way that it would be impossible for Amy to choose box A without gaining 5 utility or choose box B without gaining 10 utility. Assume that in the actual universe Amy chooses box A and gains 5 utility. We’re tempted to say “If she had chosen box B she would have gained 10 utility”, even though she would have had to occupy a different mental state at the time of the decision and the past would have been different, because the model has been set up so that those factors are unimportant. Since those factors are the only difference between the state where she chooses A and the state where she chooses B, we’re tempted to treat these possibilities as the same situation.
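The Omega/Amy setup can be made concrete with a small sketch. Everything here is a hypothetical illustration of the idea, not anything from the post itself: the world-states, the set of “relevant” factors, and the payoff function are all stand-ins. The point is that once a model declares certain factors unimportant, two worlds that differ only in those factors project onto the same situation, and the counterfactual payoff can be read off that projection.

```python
def payoff(choice):
    # Omega's setup: the payoff depends only on which box is chosen,
    # never on the mental state or history that produced the choice.
    return {"A": 5, "B": 10}[choice]

# Two full world-states. They differ in mental state and past history,
# but agree on everything the model treats as important.
world_actual = {"choice": "A", "mental_state": "m1", "history": "h1"}
world_cf     = {"choice": "B", "mental_state": "m2", "history": "h2"}

# The factors the model treats as important (an assumption of this sketch).
RELEVANT = {"choice"}

def project(world):
    # Collapse a full world-state down to its relevant factors only.
    return {k: v for k, v in world.items() if k in RELEVANT}

# The worlds are genuinely different universes...
assert world_actual != world_cf
# ...but under the projection, each is characterised entirely by its choice,
# so we evaluate the counterfactual "she chose B" by its projection alone.
assert payoff(world_cf["choice"]) == 10
```

The design choice doing the work is `RELEVANT`: the model, not the universe, decides which differences between worlds count, which is exactly the move the paragraph above describes.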
So naturally, this leads to a question: why should we build a model where those particular factors are unimportant? Does this lead to pure subjectivity? Well, the answer seems to be that in practice such a heuristic tends to work well: agents that ignore such factors tend to perform nearly as well as agents that account for them, and often better once we include time pressure in our model.
This is the point where the nature of counterfactuals becomes important—whether they are ontologically real or merely a way in which we structure our understanding of the universe. If we’re looking for something ontologically real, the fact that a heuristic is pragmatically useful provides quite limited information about what counterfactuals actually are.
On the other hand, if they’re a way of structuring our understanding, then we’re probably aiming to produce something consistent from our intuitions and our experience of the universe. And from this perspective, the mere fact that a heuristic is intuitively appealing counts as evidence for it.
I suspect that with a bit more work this kind of account could be enough to get a circular epistemology off the ground.