Like many people, I don’t think this idea will work. But I voted it up, because I vote on a comment’s expected value. On a topic that is critical to solve, and for which there are no good ideas, entertaining crazy ideas is worthwhile. So I’d rather hear one crazy idea that a good Yudkowskian would consider sacrilege than ten well-reasoned points that are already overrepresented on LessWrong. It’s analogous to the way that the optimal mutation rate is high when your current best solution is far from optimal, and the optimal selection strength (reproduction probability as a function of fitness) is low when your population is nearly homogeneous (as ideas about FAI on LessWrong are).
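To make the analogy concrete, here is a minimal toy sketch (not from the original comment; the function `evolve` and its parameters `mutation_rate` and `selection_strength` are illustrative names I'm assuming) of an evolutionary search where those two knobs control exploration versus exploitation:

```python
import math
import random

def evolve(fitness, population, generations=100,
           mutation_rate=0.5, selection_strength=1.0):
    """Evolve a population of real-valued 'ideas' toward higher fitness.

    High mutation_rate     -> explore far from current solutions
                              (useful when the best known solution is poor).
    Low selection_strength -> keep reproducing unusual individuals
                              (useful when the population is nearly homogeneous).
    """
    for _ in range(generations):
        # Selection: sample parents with probability proportional to
        # fitness raised to selection_strength (fitness assumed positive).
        weights = [fitness(x) ** selection_strength for x in population]
        parents = random.choices(population, weights=weights, k=len(population))

        # Mutation: perturb each offspring with probability mutation_rate.
        population = [
            x + random.gauss(0, 1) if random.random() < mutation_rate else x
            for x in parents
        ]
    return max(population, key=fitness)


# Example: a smooth, always-positive fitness landscape peaked at x = 3.
# With a poor starting population, a higher mutation_rate and a lower
# selection_strength tend to find the peak more reliably.
start = [random.uniform(-10, -5) for _ in range(50)]
best = evolve(lambda x: math.exp(-(x - 3) ** 2), start,
              mutation_rate=0.8, selection_strength=0.5)
print(best)
```

The point of the sketch is only the trade-off: when the current best is weak and the population is uniform, loosening selection and raising mutation buys more exploration, which is the comment's argument for upvoting crazy ideas.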