At least in this presentation of Buck vs. Them, there’s a disagreement over something like “whether scope matters”
I agree this could be a disagreement, but how do selection effects matter for it?
This feels like it’s mostly not about bets on whether X happened or not, and mostly about counterfactuals / reference class tennis
Seems plausible, but again, why do selection effects matter for it?
----
I may have been a bit too concise when saying
the entire disagreement in the post is about the backward-looking sense
To expand on it: I expect that if we fix a particular model of the world (e.g. coordination of the type discussed here is hard, we have basically never succeeded at it, and the lack of accidents so far is just luck), Buck and I would agree much more on the forward-looking consequences of that model for AI alignment (perhaps I’d be at something like 30% x-risk, idk). The disagreement is about what model of the world we should have (or perhaps what distribution over models). For that, we look at what happened in the past (both in reality and counterfactually), which is “backward-looking”.
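To put the same point in loose Bayesian terms (just a sketch of my framing, not something Buck has signed off on, and the notation is mine: M ranges over candidate world-models, "past" is the track record we actually observed): conditional on a fixed M we would mostly agree on the forward-looking forecast, and the disagreement lives in the backward-looking posterior over models, which is where the observed past enters through the likelihood.

$$P(M \mid \text{past}) \propto P(\text{past} \mid M)\,P(M), \qquad P(\text{x-risk}) = \sum_{M} P(\text{x-risk} \mid M)\,P(M \mid \text{past}).$$

If selection effects matter at all here, I'd expect them to enter through $P(\text{past} \mid M)$, i.e. through how strongly the lack of accidents so far should update us toward the "coordination works" models versus the "we just got lucky" ones.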