Why are two of the popular subjects here (1) extending lifespan, including cryonics, and (2) increasingly powerful AIs leading to a singularity? Is there an argument that concern for these things is somehow derivable from a Bayesian approach, or is it more or less an accident that they are of interest to the people here?
The short answer is that the people who originally created this site (SIAI, FHI, Yudkowsky, etc.) were all working on these topics as their careers, and were using Bayesian rationality to do that work. So the initial community consisted, in large part, of people interested in both those topics and rationality. There is a bit more variation in the group now, but it's still generally true.