Well then, how do we efficiently find people who have a high expected benefit to the program of AI safety? In fact, how can expected benefit even be measured? It would be great to have criteria—and testing methods—for measuring how likely any given person is to advance the field from the perspective of the consensus of researchers, so that effort can be efficiently allocated. (Presumably, some notion of what those criteria are already exists and I simply don’t know it.)
Also, suppose it were possible to construct systems for “incidentally” inducing people to be more rational without them having to explicitly optimize themselves for it or even necessarily realize it was happening—just as a side effect of using the system. Would you consider it worthwhile for someone to invest time and effort in developing “rationality-catalyzing tools” that can be used by the general public?
An existing example, albeit a bulky one that is hard to create, is a prediction market: it provides incentives for learning to update one’s beliefs rationally. Perhaps a better example is an app whose name I forget, which purports to show a range of news reports about any given event with their suspected biases clearly marked, to encourage and enable people to compare multiple perspectives on reality. I am interested in finding ways to make more and better such tools, or ideally (the holy grail) a rationality-catalyzing social media platform to counter the epistemically corrosive ones currently ascendant. (That would, of course, be quite a moonshot, but I’ve been thinking about it for years, and have been intending for weeks to write a post summarizing my ideas so far.)
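To make that incentive concrete, here is a minimal sketch (in Python, with hypothetical function names) of a two-outcome market run under Hanson’s logarithmic market scoring rule, one common prediction-market mechanism. It illustrates the claim in the simplest setting I know of: a trader’s expected profit, evaluated under their own beliefs, is greatest when they trade until the market price matches their honest subjective probability, so miscalibrated estimates cost them money.

```python
import math

def lmsr_cost(quantities, b=100.0):
    """Cost function of the logarithmic market scoring rule (LMSR)."""
    return b * math.log(sum(math.exp(q / b) for q in quantities))

def lmsr_price(quantities, i, b=100.0):
    """Instantaneous price (implied probability) of outcome i."""
    total = sum(math.exp(q / b) for q in quantities)
    return math.exp(quantities[i] / b) / total

def shares_to_move_price(quantities, i, target_price, b=100.0):
    """Shares of outcome i to buy so its price reaches target_price
    (closed form for the two-outcome case)."""
    q_i, q_j = quantities[i], quantities[1 - i]
    # Solve exp((q_i + x)/b) / (exp((q_i + x)/b) + exp(q_j/b)) = target_price for x
    return b * math.log(target_price / (1 - target_price)) + q_j - q_i

def expected_profit(true_prob, target_price, quantities=(0.0, 0.0), b=100.0):
    """Expected profit, under the trader's own belief, of moving the
    market price of outcome 0 to target_price."""
    x = shares_to_move_price(list(quantities), 0, target_price, b)
    cost = lmsr_cost([quantities[0] + x, quantities[1]], b) - lmsr_cost(list(quantities), b)
    payout = x  # each share pays 1 unit if outcome 0 occurs
    return true_prob * payout - cost

# A trader who believes the probability is 0.7 does best, by their own
# lights, by trading until the market price equals that belief.
belief = 0.7
best = max((p / 100 for p in range(1, 100)),
           key=lambda p: expected_profit(belief, p))
print(f"profit-maximizing target price: {best:.2f}")  # ~0.70
```

Running this prints a profit-maximizing target price of about 0.70, matching the trader’s belief; that alignment of payoff with calibration is the sense in which such a market nudges its users toward rational updating.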
A high g factor seems to be the key thing we’re attempting to select for, along with other generally helpful traits like conscientiousness, altruism, personability, and relevant domain-specific skills.
Rationality-catalyzing tools could be very beneficial if successful. If your internal yum-meter points towards that being the most engaging thing for you, it seems like a reasonable path (and will be a good way to grow even if the moonshot part does not land).