I think you should write that post, because thoughtful, respected participants on LW use the anthropic principle incorrectly, IMHO. The gentleman who wrote the great-grandparent, for example, is respected enough to have been invited to attend SIAI’s workshop on decision theory earlier this year. And thoughtful, respected participant Cousin It probably misapplied the anthropic principle in the first paragraph of this comment. I say “probably” because the context has to do with “modal realism” and other woolly thinking that I cannot digest, but I have not been able to think of any context in which Cousin It’s “every passing day without incident should weaken your faith in the anthropic explanation” is a sound argument.
(Many less thoughtful or less respected participants here have misapplied or failed to take into account the anthropic principle, too.)
It has been a while since I skimmed “Anthropic Shadow”, but IIRC a key point or assumption in their formula was that the more recently a risk would have occurred, the less likely ‘we’ are to have observed it occurring, because a more recent catastrophe leaves less time for observers to recover from the existential risk or for fresh observers to evolve. This suggests a weak version: the longer we have existed, the fewer risks whose absence we need to explain by appealing to an observer-based principle.
(But thinking about it, maybe the right version is the exact opposite. It’s hard to think about this sort of thing.)
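To make that recency point concrete, here is a minimal toy simulation of it. This is my own illustrative construction with made-up parameters, not the paper’s actual formula: worlds evolve forward in time, a catastrophe wipes out observers, observers slowly re-evolve, and we then condition on observers being around “now”. Catastrophes in the recent past end up under-represented in the surviving observers’ historical record, while catastrophes in the distant past show up at roughly their true rate.

```python
import random

# Toy forward simulation of the "anthropic shadow" recency effect described
# above. My own illustrative construction with made-up parameters, NOT the
# formula from the "Anthropic Shadow" paper.
P_CAT = 0.02        # assumed per-year probability of a catastrophe
P_REEVOLVE = 0.01   # assumed per-year chance observers (re)appear after a wipeout
YEARS = 400
TRIALS = 20_000

hits_by_year = [0] * YEARS        # catastrophe counts among observer-containing worlds
worlds_with_observers = 0

for _ in range(TRIALS):
    observers = True
    history = []
    for _ in range(YEARS):
        hit = random.random() < P_CAT
        history.append(hit)
        if hit:
            observers = False     # a catastrophe wipes out observers
        elif not observers and random.random() < P_REEVOLVE:
            observers = True      # observers slowly re-evolve
    if not observers:
        continue                  # no one around "now" to look back at the record
    worlds_with_observers += 1
    for year, hit in enumerate(history):
        hits_by_year[year] += hit

def observed_rate(years):
    return sum(hits_by_year[y] for y in years) / (worlds_with_observers * len(years))

print("rate seen in the distant past (years 0-199):", round(observed_rate(range(200)), 4))
print("rate seen in the last 25 years:             ", round(observed_rate(range(YEARS - 25, YEARS)), 4))
print("true rate:                                  ", P_CAT)
```

In this toy version the shadow fades the further back you look, which is one way of reading the “weak version” above: a long-established observer population needs less and less observer-selection to explain the older parts of its record.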
I’ve read “Anthropic Shadow” a few times now. I don’t think I will write a post on it. It does a pretty good job of explaining itself, and I couldn’t think of any uses for it.
The Shadow only biases estimates when some narrow conditions are met:

1. your estimate has to be based strictly on your own past
2. the estimate has to concern a random event
3. the events have to be very destructive to observers like yourself
4. and they have to be rare to begin with
So it basically only applies to global existential risks, and there aren’t that many of them. Nor can we apply it to interesting examples like the Singularity, because that’s not a random event but dependent on our development.
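As a sanity check on those conditions, here is a hedged sketch, again my own toy model with made-up numbers rather than anything from the paper: an observer estimates the per-period frequency of an event from their own past record, but only observers whose lineage survived the whole record get to make an estimate. When the event is harmless, the survivors’ average estimate matches the true rate; when each occurrence is very likely to destroy observers like them, the survivors systematically underestimate it.

```python
import random

# Hedged sketch (my own toy model with made-up numbers, not the paper's):
# an observer estimates how often an event happens from their own past record,
# but only observers whose lineage survived the whole record get to estimate.
def surviving_observers_estimate(p_true, kill_prob, periods=300, trials=20_000):
    """Average per-period rate estimated by observers who survived to the present."""
    estimates = []
    for _ in range(trials):
        hits = 0
        alive = True
        for _ in range(periods):
            if random.random() < p_true:            # the event occurs...
                hits += 1
                if random.random() < kill_prob:     # ...and may destroy observers like us
                    alive = False
                    break
        if alive:                                   # only survivors look back and count
            estimates.append(hits / periods)
    return sum(estimates) / len(estimates) if estimates else float("nan")

# Harmless event: survivors' estimate matches the true rate (~0.01).
print(surviving_observers_estimate(p_true=0.01, kill_prob=0.0))
# Rare but very destructive event: survivors systematically underestimate it.
print(surviving_observers_estimate(p_true=0.01, kill_prob=0.9))
```

The kill_prob knob stands in for the “very destructive” condition: setting it to zero recovers an unbiased estimate, which is why the Shadow is irrelevant to ordinary, survivable events.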
I have been meaning to write a post summarizing “Anthropic Shadow”; would anyone besides you and me be interested in it?