Is your uncertainty resolvable?
I was chatting with Andrew Critch about the idea of Reacts on LessWrong.
Specifically, the part where I thought there are particular epistemic states that don’t have words yet, but should. And that a function of LessWrong might be to make various possible epistemic states more salient as options. You might have reacts for “approve/disapprove” and “agree/disagree”… but you might also want reactions that let you quickly and effortlessly express “this isn’t exactly false or bad, but it’s subtly making this discussion worse.”
Fictionalized, Paraphrased Critch said “hmm, this reminds me of some particular epistemic states I recently noticed that don’t have names.”
“Go on”, said I.
“So, you know the feeling of being uncertain? And how it feels different to be 60% sure of something, vs 90%?”
“Sure.”
“Okay. So here are two other states you might be in:
75% sure that you’ll eventually be 99% sure,
80% sure that you’ll eventually be 90% sure.”
He let me process those numbers for a moment.
...
Then he continued: “Okay, now imagine you’re thinking about a particular AI system you’re designing, which might or might not be alignable.
“If you’re feeling 75% sure that you’ll eventually be 99% sure that the AI is safe, this means you think that eventually you’ll have a clear understanding of the AI, such that you feel confident turning it on without destroying humanity. Moreover, you expect to be able to convince other people that it’s safe to turn it on without destroying humanity.
“Whereas if you’re 80% sure that eventually you’ll be 90% sure that it’ll be safe, even in the future state where you’re better informed and more optimistic, you might still not actually be confident enough to turn it on. And even if for some reason you are, other people might disagree about whether you should turn it on.
“I’ve noticed people tracking how certain they are of something, without paying attention to whether their uncertainty is possible to resolve. And this has important ramifications for what kinds of plans they can make. Some plans require near-certainty, especially plans that require group coordination.”
“Makes sense”, said I. “Can I write this up as a blogpost?”
I’m not quite sure about the best name here, but this seems like a useful concept to have a handle for. Something like “unresolvable uncertainty”?
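
To make the two states concrete, here’s a minimal sketch (not from the conversation; the 0.50 credence in the “investigation doesn’t resolve things” branch and the 0.95 action threshold are assumptions made up purely for illustration). By the law of total expectation, “75% sure you’ll eventually be 99% sure” and “80% sure you’ll eventually be 90% sure” cash out to fairly similar current credences under these assumptions (roughly 0.75 × 0.99 + 0.25 × 0.50 ≈ 0.87 versus 0.80 × 0.90 + 0.20 × 0.50 ≈ 0.82), but only the first state has any chance of eventually clearing a bar that demands near-certainty.

```python
import random

def simulate_final_credence(p_resolve, resolved_credence, unresolved_credence, n=100_000):
    """Sample the credence you'd hold after investigating.

    p_resolve           -- probability the investigation resolves favorably
    resolved_credence   -- credence you'd hold in that case (e.g. 0.99)
    unresolved_credence -- ASSUMED credence in the other branch (illustrative 0.50)
    """
    return [
        resolved_credence if random.random() < p_resolve else unresolved_credence
        for _ in range(n)
    ]

ACTION_THRESHOLD = 0.95  # hypothetical bar for "confident enough to turn it on"

for label, p_resolve, high in [("75% -> eventually 99%", 0.75, 0.99),
                               ("80% -> eventually 90%", 0.80, 0.90)]:
    finals = simulate_final_credence(p_resolve, high, unresolved_credence=0.50)
    mean_credence = sum(finals) / len(finals)
    p_clears_bar = sum(f >= ACTION_THRESHOLD for f in finals) / len(finals)
    print(f"{label}: current credence ~{mean_credence:.2f}, "
          f"chance of ever clearing the {ACTION_THRESHOLD:.0%} bar: {p_clears_bar:.0%}")
```

The two states look similar if you only track your current credence; the difference only shows up when you ask whether any future version of you clears the threshold that a given plan actually requires.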