That all sounds right and I’m not sure where you expected me to disagree.
> discussions about the inference necessarily have a backward-looking quality to them
This was exactly my point, I think? Since we’re looking backward when making inferences, and since we didn’t expect full extinction or even 90% extinction in the past, our inferences don’t need to take selection effects into account (or, more accurately, selection effects would have a relatively small effect on the final answer).
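To make “relatively small effect” concrete, here’s a minimal toy Bayes calculation (the models and numbers are made up purely for illustration, not taken from this discussion):

```python
# Toy Bayes calculation; the models and numbers are made up for illustration.
# Two models of how reckless humanity is, each with prior 0.5, differing in how
# likely an observer-destroying catastrophe would have been over the past period.
priors = {"reckless": 0.5, "careful": 0.5}
p_extinction = {"reckless": 0.10, "careful": 0.01}  # assumed past extinction risk under each model

# Naive update on the observation "no extinction happened":
naive = {m: priors[m] * (1 - p_extinction[m]) for m in priors}
total = sum(naive.values())
naive = {m: v / total for m, v in naive.items()}

# Anthropic correction: we could only ever have observed survival, so the
# observation carries no evidence and the posterior stays at the prior.
corrected = dict(priors)

print(naive)      # {'reckless': ~0.476, 'careful': ~0.524}
print(corrected)  # {'reckless': 0.5, 'careful': 0.5}
```

The two answers differ by only a couple of percentage points, and the gap is driven entirely by how much extinction probability the models assigned to the past; since neither model expected extinction to be likely, the selection-effect correction barely moves the posterior.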
When I read the ‘Buck’ points, most of them feel like they’re trying to be about ‘how humans are’, or the forward-likeliness. Like, this here:
> But I still feel that my overall worldview of “people will do wild and reckless things” loses fewer Bayes points than yours does.
Importantly, “wild and reckless” is describing the properties of the actions / underlying cognitive processes, not the outcomes. And later:
> Why would that be an update? We already know that state bioweapons programs have killed thousands of people with accidental releases, and there’s no particular reason that they couldn’t cause worse disasters, and that international regulation has failed to control that.
At least in this presentation of Buck vs. Them, there’s a disagreement over something like “whether scope matters”; Buck thinks no (‘what damage happens to a toddler depends on how dangerous their environment is, since the toddler doesn’t know what to avoid and so can’t be scope-sensitive’) and Them thinks yes (‘sure, humanity has screwed up lots of things that don’t matter, but that’s because effort is proportional to how much the thing matters, and so they’re rationally coping with lots of fires that would be expensive to put out.’).
This feels like it’s mostly not about bets on whether X happened or not, and mostly about counterfactuals / reference class tennis (“would people have taken climate change more seriously if it were a worse problem?” / “is climate change a thing that people are actually trying to coordinate on, or a distraction?”).
> At least in this presentation of Buck vs. Them, there’s a disagreement over something like “whether scope matters”
I agree this could be a disagreement, but how do selection effects matter for it?
> This feels like it’s mostly not about bets on whether X happened or not, and mostly about counterfactuals / reference class tennis
Seems plausible, but again why do selection effects matter for it?
----
I may have been a bit too concise when saying
> the entire disagreement in the post is about the backward-looking sense
To expand on it, I expect that if we fix a particular model of the world (e.g. coordination of the type discussed here is hard, we have basically never succeeded at it, the lack of accidents so far is just luck), Buck and I would agree much more on the forward-looking consequences of that model for AI alignment (perhaps I’d be at like 30% x-risk, idk). The disagreement is about what model of the world we should have (or perhaps what distribution over models). For that, we look at what happened in the past (both in reality and counterfactually), which is “backward-looking”.
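A toy sketch of that point (again, purely illustrative numbers): if the risk conditional on each world-model is agreed on, the bottom-line estimates differ only through the weights placed on the models, and those weights are exactly what the backward-looking evidence is supposed to settle.

```python
# Purely illustrative numbers: the risk conditional on each world-model is
# stipulated to be shared; the disagreement lives entirely in the model weights.
conditional_risk = {"coordination_is_hard": 0.30, "coordination_mostly_works": 0.05}

weights_a = {"coordination_is_hard": 0.7, "coordination_mostly_works": 0.3}  # hypothetical person A
weights_b = {"coordination_is_hard": 0.2, "coordination_mostly_works": 0.8}  # hypothetical person B

risk_a = sum(weights_a[m] * conditional_risk[m] for m in conditional_risk)  # 0.225
risk_b = sum(weights_b[m] * conditional_risk[m] for m in conditional_risk)  # 0.10
print(risk_a, risk_b)
```

Same conditionals, different bottom lines: settling which weights the past (real and counterfactual) supports is where the whole disagreement sits.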