Ah— you have written it up at great length, just not in Less Wrong posts.
I think you claim too strong a predictive power for the patterns you see, but that’s a discussion for a different thread. (One particular objection: the fact that evolution has gotten us here contains a fair bit of anthropic bias. We don’t know exactly how narrow the bottlenecks we’ve already survived have been.)
We don’t know exactly how narrow the bottlenecks we’ve already survived have been.
We can estimate this for a lot of the major bottlenecks. For example, we can look at how likely other intelligent species are to survive and in what contexts. We have a fair bit of data for that. We also now have detailed genetic data so we can look at historical genetic bottlenecks in the technical sense for humans and for other species.
http://en.wikipedia.org/wiki/Population_bottleneck#Humans
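To illustrate what “genetic bottlenecks in the technical sense” buys you: under the standard neutral model, expected nucleotide diversity is roughly 4·Ne·μ, so observed diversity lets you back out a long-term effective population size. A minimal sketch, with stock textbook figures plugged in purely for illustration (not numbers from anyone in this thread):

```python
# Rough effective-population-size estimate from genetic diversity,
# using the neutral-theory expectation pi ~= 4 * Ne * mu (diploid).
# The input numbers are illustrative textbook values, not claims from this thread.

PI = 0.001        # average human nucleotide diversity (~0.1% of sites differ)
MU = 1.25e-8      # assumed per-site, per-generation mutation rate

ne = PI / (4 * MU)
print(f"Implied long-term effective population size: ~{ne:,.0f}")
# ~20,000 -- far below the census size, which is the usual basis for inferring
# that the human lineage passed through comparatively narrow bottlenecks.
```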
One particular objection: the fact that evolution has gotten us here contains a fair bit of anthropic bias. We don’t know exactly how narrow the bottlenecks we’ve already survived have been.
Well, I don’t want to appear to endorse the thesis that you associated me with—but it appears that while we don’t know much about the past exactly, we do have some idea about past risks to our own existence. We can look at the distribution of smaller risks among our ancestors, and gather data from a range of other species. What Joshua Zelinsky said about genetic data is also a guide to recent bottleneck narrowness.
Occam’s razor also weighs against some anthropic scenarios that imply a high risk to our existence. The idea that we have luckily escaped 1000 asteroid strikes by chance has to compete with the explanation that these asteroids were never out there in the first place. The higher the supposed risk, the bigger the number of “lucky misses” that are needed—and the lower the chances are of that being the correct explanation.
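To put that in sketch form: the probability of surviving N genuine close calls falls off geometrically with N, so the “lucky misses” story loses to the “never out there” story by a factor that grows with every extra miss it needs. A toy Bayes-factor calculation, with the per-event survival odds and the 1000 events chosen purely for illustration:

```python
import math

# Toy Bayes-factor calculation for the "lucky misses" argument.
# All numbers (per-event survival odds, number of events) are illustrative
# assumptions, not estimates from anyone in this thread.

p_survive_high = 0.5    # survival chance per strike if the strikes were real
p_survive_low  = 0.999  # survival chance per period if the threat was never there
n_events = 1000         # the 1000 supposed close calls

# We only ever observe one datum: we are still here. Work in log space,
# since 0.5 ** 1000 is an inconveniently tiny number.
log_bf = n_events * (math.log(p_survive_low) - math.log(p_survive_high))
print(f"log Bayes factor favoring the low-risk explanation: {log_bf:,.0f} nats")
# ~692 nats. The factor grows geometrically with n_events, which is the sense
# in which more required "lucky misses" make the high-risk explanation less
# credible relative to "the asteroids were never out there in the first place".
```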
Not that the past is necessarily a good guide—but rather we can account for anthropic effects quite well.
(One particular objection: the fact that evolution has gotten us here contains a fair bit of anthropic bias. We don’t know exactly how narrow the bottlenecks we’ve already survived have been.)
User:timtyler himself has brought up the dinosaurs’ semi-extinction, for example, which was a local decrease in “moral progress” even if it might have been globally necessary or whatever.