[Replying to this whole thread, not just your particular comment]
“Epistemic humility” over distributions of times is pretty weird to think about, and imo generally confusing or unhelpful. There’s an infinite amount of time, so there is no uniform measure. Nor, afaik, is there any convergent scale-free prior. You must use your knowledge of the world to get any distribution at all.
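To spell out why there is nothing to normalize, here is a minimal check (my notation, not from the thread: t > 0 for time, dt for the uniform measure, dt/t for the scale-free/log-uniform candidate):

```latex
% Neither candidate "ignorance" measure over t > 0 has finite total mass,
% so neither can be normalized into a probability distribution:
\int_0^\infty \mathrm{d}t = \infty,
\qquad
\int_0^\infty \frac{\mathrm{d}t}{t}
  = \lim_{\substack{\epsilon \to 0^+ \\ N \to \infty}} \log\frac{N}{\epsilon}
  = \infty.
```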
You can still claim that higher-entropy distributions are more “humble” w.r.t. some improper prior. Which raises the question: higher entropy w.r.t. what measure? Uniform? Log-uniform? There’s an infinite class of scale-free measures you can use here. The natural way to pick one is using knowledge about the world.
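As a rough sketch of how the choice of measure matters (writing m for the density of the base measure, m ≡ 1 for uniform and m(t) = 1/t for log-uniform; again my notation, not the thread’s):

```latex
% Entropy of a density p relative to a base measure with density m:
H_m(p)
  = -\int_0^\infty p(t)\,\log\frac{p(t)}{m(t)}\,\mathrm{d}t
  = H(p) + \mathbb{E}_p[\log m(t)].
% With m \equiv 1 this is ordinary differential entropy; with m(t) = 1/t
% it becomes H(p) - \mathbb{E}_p[\log t], so which distribution counts as
% "higher entropy" depends on which measure you picked.
```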
Even in this (possibly absurd) framework, it seems like “high-entropy” doesn’t deserve the word “humble”: having any reasonable distribution already means you’ve deviated by infinitely many bits from any scale-free prior, so am I significantly less humble for deviating by infinity+1 bits? It’s not like either of us actually started from an improper prior and collected infinite bits one by one, so that you could say “hey, where’d you get that extra bit from?”
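One way to make the “infinite bits” point concrete (my own illustration, assuming a fixed proper distribution p with finite entropy, logs in base 2 so the units are bits, and the improper uniform prior truncated to [0, N] before normalizing):

```latex
% Information needed to move from the normalized truncation Unif[0, N]
% to a fixed proper distribution p supported in [0, N]:
D\!\left(p \,\middle\|\, \mathrm{Unif}[0, N]\right)
  = \int_0^N p(t)\,\log_2\frac{p(t)}{1/N}\,\mathrm{d}t
  = \log_2 N - H(p)
  \longrightarrow \infty \quad \text{as } N \to \infty.
% So every proper distribution sits infinitely many bits away from the
% improper prior, and one extra bit on top of that is not a meaningful gap.
```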
You can salvage some kind of humility idea here by first establishing, with only the simplest object-level arguments, some finite prior, and then being suspicious of longer arguments that drag you far from that prior. Although this mostly looks like regular old object-level argument. The term “humility” often seems counterproductive unless people already understand which exact form is being invoked.
There’s a different kind of “humility”, which is deferring to other people’s opinions. This has the associated problem of picking whom to defer to. I’m often in favor, whereas Yudkowsky seems generally against, especially when he’s the person being asked to defer (see for example his takedown of “Humbali” here).
> I’m often in favor, whereas Yudkowsky seems generally against, especially when he’s the person being asked to defer (see for example his takedown of “Humbali” here).
This is well explained by the hypothesis that he is epistemically superior to all of us (or at least thinks he is).