Cool idea.
Any model we actually spend time talking about is going to be vastly above the base rate, though, because most human-considered models are nonsensical or very unlikely.
In hindsight I should’ve specified a time limit. Someone pointed out to me that if something taxonomically included in “human” continued living for a very long time, then that thing could “consider” an indefinite number of ideas. Maybe I should’ve said “that anyone considers up until the year 3k” or something.
I don’t think that solves the problem though. There are a lot of people, and many of them believe very unlikely models. Any model we (lesswrong-ish) people spend time discussing is going to be vastly more likely than a randomly selected human-thought-about model.
I realise this is getting close to reference class tennis, sorry.
I had little hope of solving much in this domain! But even a base rate that is way off is still useful to me for some discussions. What you're pointing to might offer a way to eliminate a lot of irrelevant n, or to shift probability away from them. So with respect to discussions within smart circles, maybe the base rate ends up being much higher than 1/5,000,000. Maybe it's more like 1/10,000, or even higher. I'm not a stickler; I'd take 1/1,000 if it lets certain individuals in these circles realize they have updated upward on a specific metaphysical idea far more strongly than they reasonably could, and that it's obvious overconfidence to have updated all the way to a 50% chance on one specific model that happens to be popular in smart circles at the time.
I think that’s how I’d use this as well.