Some people have asked why the Bayesian Network approach suggested by Judea Pearl is insufficient (including in the comments below). This approach is firmly rooted in Causal Decision Theory (CDT). Most people on LW have rejected CDT because of its failure to handle Newcomb’s Problem.
I’ll make a counter-claim and say that most people on LW in fact have rejected the use of Newcomb’s Problem as a test that will say something useful about decision theories.
That being said, there is definitely a sub-community which believes deeply in the relevance of Newcomb’s Problem as a test. This sub-community has historically created, and is still creating, a lot of traffic on this forum. This is to be expected: the people who reject Newcomb’s Problem do not tend to post about it that much.
Personally, I reject Newcomb’s Problem as a test.
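For readers who want the mechanics behind the CDT/Newcomb disagreement mentioned above, here is a minimal sketch of the standard expected-value divergence. The 99% predictor accuracy and the use of simple expected value are illustrative assumptions, not anything from the thread:

```python
# Toy Newcomb's Problem: opaque box B contains $1,000,000 iff the predictor
# forecast one-boxing; transparent box A always contains $1,000.
ACCURACY = 0.99          # assumed P(prediction matches the actual choice)
SMALL, BIG = 1_000, 1_000_000

def edt_value(action):
    """Evidential expected value: treat the action as evidence about
    what the predictor foresaw."""
    p_big = ACCURACY if action == "one-box" else 1 - ACCURACY
    return BIG * p_big + (SMALL if action == "two-box" else 0)

def cdt_value(action, p_big_already):
    """Causal expected value: the box contents are fixed before the
    choice, so the action cannot influence p_big_already."""
    return BIG * p_big_already + (SMALL if action == "two-box" else 0)

# EDT prefers one-boxing; CDT prefers two-boxing for any fixed contents.
print(edt_value("one-box"), edt_value("two-box"))
for p in (0.0, 0.5, 1.0):
    assert cdt_value("two-box", p) > cdt_value("one-box", p)
```

The point of contention is whether this divergence tells us anything important about which decision theory to build agents around.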
I am also among the crowd who have posted explanations of Pearl Causality and Counterfactuals. My explanation here highlights the ‘using a different world model’ interpretation of Pearl’s counterfactual math, so it may in fact touch on your reframing:
Or reframing this, counterfactuals only make sense from a cognitive frame.
I guess I’d roughly describe [a cognitive frame] as something that forms models of the world.
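The ‘using a different world model’ reading of Pearl's counterfactual math can be sketched in a few lines: a counterfactual is answered by copying the structural model, surgically replacing one equation (the do-operator), and recomputing downstream variables. The rain/sprinkler variables below are toy assumptions for illustration only:

```python
# Evaluate a structural causal model: each equation computes one variable
# from previously computed ones (equations assumed topologically ordered).
def solve(equations, exogenous):
    values = dict(exogenous)
    for var, f in equations:
        values[var] = f(values)
    return values

# Factual model: sprinkler runs only when it is not raining; grass is wet
# if it rains or the sprinkler runs.
model = [
    ("sprinkler", lambda v: not v["rain"]),
    ("wet",       lambda v: v["rain"] or v["sprinkler"]),
]
factual = solve(model, {"rain": False})

# Counterfactual world: same exogenous 'rain', but we perform surgery on
# the model, forcing the sprinkler off. This is a *different* model being
# re-solved, not a conditioning step within the original one.
surgery = [("sprinkler", lambda v: False) if var == "sprinkler" else (var, f)
           for var, f in model]
counterfactual = solve(surgery, {"rain": False})

print(factual["wet"], counterfactual["wet"])  # prints: True False
```

On this reading, the counterfactual query only makes sense relative to an agent that holds such a model — which is one way of cashing out the ‘cognitive frame’ point above.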
Overall, reading the post and the comment section, I feel that, if I reject Newcomb’s Problem as a test, I can only ever write things that will not meet your prize criterion of usefully engaging with ‘circular dependency’.
I have a sense that with ‘circular dependency’ you are also pointing to a broader class of philosophical problems about ‘what it means for something to be true or correctly inferred’. If these were spelled out in detail, I believe I would end up rejecting the notion that we need to solve all these open problems definitively, and the notion that they represent gaps in an agent foundations framework that must still be filled if the framework is to support AGI safety/alignment.
Overall, reading the post and the comment section, I feel that, if I reject Newcomb’s Problem as a test, I can only ever write things that will not meet your prize criterion of usefully engaging with ‘circular dependency’.
Firstly, I don’t see why that would interfere with evaluating possible arguments for and against circular dependency. It’s possible for an article to be ‘here’s why these three reasons for thinking counterfactuals are circular are all false’ (not that an article would necessarily have to engage with three different arguments to win).
Secondly, I guess my issue with most of the attempts to say “use system X for counterfactuals” is that people seem to think that merely not mentioning counterfactuals means there isn’t a dependence on them. So there likely needs to be a part of such an article discussing why things that look counterfactual really aren’t.
I briefly skimmed your article, and I’m sure that if I read further I’d learn something interesting, but as it stands it wouldn’t be in scope.
It’s possible for an article to be ‘here’s why these three reasons for thinking counterfactuals are circular are all false’
OK, so if I understand you correctly, you posit that there is something called ‘circular epistemology’. You said in the earlier post you link to at the top:
You might think that the circularity is a problem, but circular epistemology turns out to be viable (see Eliezer’s Where Recursive Justification Hits Bottom). And while circular reasoning is less than ideal, if the comparative is eventually hitting a point where we can provide no justification at all, then circular justification might not seem so bad after all.
You further suspect that circular epistemology might have something useful to say about counterfactuals, in terms of offering a justification for them without ‘hitting a point where we can provide no justification at all’. And you have a bounty for people writing more about this.

Am I understanding you correctly?
Yeah, I believe epistemology to be inherently circular. I think it has some relation to counterfactuals being circular, but I don’t see it as quite the same, since counterfactuals seem a lot harder to avoid using than most other concepts. The point of mentioning circular epistemology was to persuade people that my theory isn’t as absurd as it sounds at first.
Wait, I was under the impression from the quoted text that you make a distinction between ‘circular epistemology’ and ‘other types of epistemology that will hit a point where we can provide no justification at all’. i.e. these other types are not circular because they are ultimately defined as a set of axioms, rewriting rules, and observational protocols for which no further justification is being attempted.
So I think I am still struggling to see what flavour of philosophical thought you want people to engage with, when you mention ‘circular’.
Mind you, I see ‘hitting a point where we provide no justification at all’ as a positive thing in a mathematical system, a physical theory, or an entire epistemology, as long as these points are clearly identified.
Wait, I was under the impression from the quoted text that you make a distinction between ‘circular epistemology’ and ‘other types of epistemology that will hit a point where we can provide no justification at all’. i.e. these other types are not circular because they are ultimately defined as a set of axioms, rewriting rules, and observational protocols for which no further justification is being attempted.
If you’re referring to the Wittgensteinian quote, I was merely quoting him, not endorsing his views.
I’m not aware of which part would be a Wittgensteinian quote. It’s been a long time since I read Wittgenstein, and I read him in German. In any case, I remain confused about what you mean by ‘circular’.
Hmm… Oh, I think that was elsewhere on this thread. Probably not to you. Eliezer’s Where Recursive Justification Hits Bottom seems to embrace a circular epistemology despite its title.

He doesn’t show much sign of embracing the validity of all circular arguments, and neither do you.

I never said all circular arguments are valid.

That doesn’t help. If recursive justification is a particular kind of circular argument that is valid, while others are invalid, then something makes it valid. But what? EY doesn’t say. And how do we know that the additional factor isn’t doing all the work?

??? I don’t follow. You meant to write “use system X instead of using system Y, which calls itself a definition of counterfactuals”?

What I mean is that some people seem to think that if they can describe a system that explains counterfactuals without mentioning counterfactuals, then they’ve avoided a circular dependence. Of course, we can’t just take things at face value; we have to dig deeper than that.

OK, thanks for explaining. See my other recent reply for more thoughts about this.