Hm, touché. Although … if “the community” were actually following a policy of strategically arguing for things based on importance-times-neglectedness, I would expect to see a lot more people working on eugenics, which looks really obviously potentially important to me, either on a Christiano-esque Outside View (smarter humans means relatively more human optimization power steering the future rather than unalignable machine-learning algorithms), or a hard-takeoff view (smarter humans sooner means more time to raise alignment-researcher tykebombs). Does that seem right or wrong to you? (Feel free to email or PM me.)
Importance * Neglectedness − Reputation hit might be more accurate.
I was thinking that reputation-hit contributes to neglectedness. Maybe what we really need is a way to reduce reputational “splash damage”, so that people with different levels of reputation risk-tolerance can work together or at least talk to each other (using, for example, a website).
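A toy sketch of the disagreement, with made-up numbers (nothing here is from the thread itself): the "more accurate" formula treats reputation cost as a separate penalty term, while the reply folds it into neglectedness, on the theory that reputationally risky causes become neglected for exactly that reason.

```python
# Two hypothetical ways to score a cause, contrasting the models above.
# All numbers are illustrative, not drawn from the discussion.

def score_subtractive(importance, neglectedness, reputation_hit):
    """Parent comment's model: reputation cost is a separate penalty."""
    return importance * neglectedness - reputation_hit

def score_folded(importance, neglectedness_given_reputation):
    """Reply's model: reputation-hit already inflates neglectedness,
    so it shouldn't be subtracted a second time."""
    return importance * neglectedness_given_reputation

# A reputationally risky cause under each model:
print(score_subtractive(importance=9, neglectedness=8, reputation_hit=60))  # 12
print(score_folded(importance=9, neglectedness_given_reputation=8))         # 72
```

The two models disagree sharply for exactly the cases under discussion: if reputation cost is double-counted (both deterring work, which raises neglectedness, and subtracted explicitly), risky-but-important causes look far less attractive than the folded model says they are.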