But really my answer is “there are lots of ways you can get confidence in a thing that are not proofs”.
Totally agree; it’s an under-appreciated point!
Here’s my counter-argument: we have no idea what epistemological principles explain this empirical observation. Therefore, we don’t actually know that the confidence we achieve in these ways is justified, and we may simply be wrong to be confident in our ability to successfully board flights (etc.).
The epistemic/aleatory distinction is relevant here. Taking an expectation over both kinds of uncertainty, we can achieve a high level of subjective confidence in such things via such means. However, our epistemic state may be badly mistaken, in which case we could still be extremely likely, objectively speaking, to be wrong.
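To make the worry concrete, here’s a toy model (my notation, not anything established in this thread): let θ be the true, aleatory reliability of our informal methods in some domain, and let p(θ) be our epistemic credence over θ. Subjective confidence is then an expectation over both kinds of uncertainty:

```latex
% Toy model with assumed notation: theta is the unknown true reliability
% of informal reasoning; p(theta) is our epistemic distribution over it.
% Subjective confidence marginalizes epistemic uncertainty over theta:
\[
  P_{\mathrm{subj}}(\text{success}) \;=\; \int P(\text{success} \mid \theta)\, p(\theta)\, d\theta
\]
% The objective chance of success depends on the true value theta*:
\[
  P_{\mathrm{obj}}(\text{success}) \;=\; P(\text{success} \mid \theta^{*})
\]
```

If p(θ) concentrates on high-reliability values but the true θ* is low, P_subj can be near 1 while P_obj is near 0; that is the sense in which we can be subjectively confident yet objectively very likely to be wrong.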
This probably also explains a lot of the disagreement: different people have very different prior beliefs about how likely this kind of informal reasoning is to give us true beliefs about advanced AI systems.
I’m personally quite uncertain about that question at the moment. I tend to think we can get pretty far with this kind of informal reasoning in the “early days” of (proto-)AGI development, but that we become increasingly likely to fuck up as we start having to deal with vastly super-human intelligences. I’d also like to see more work in epistemology aimed at addressing this and other X-risk-relevant concerns, e.g.: what principles of “social epistemology” would allow the human community to effectively manage collective knowledge that is far beyond what any individual can grasp? I’d argue we’re in the process of failing catastrophically at that.