1. Footsoldiers who chant the right tribal words tithe portions of their income to the cause, and money is the bottleneck for a lot of things.
2. They also vote for politicians who pledge to do something about the problem, politicians who can then be leaned on by the cause to keep their word, on pain of not being re-elected. Policy makers have to be involved in slowing the advancement of AI as well, after all.
3. Raising awareness of this among people who aren't already close to the movement increases the chance that a young person with great potential, still unsure about their future trajectory, will choose to go into AI safety rather than some other field, when they might otherwise never have heard of it.
4. We need to raise the sanity waterline of the world, and the more people are aware that we exist and that we rely on something called "rationality", the more people will make an attempt to actually learn about it, particularly if we actively provide ways for them to do that alongside the evangelization activities. (Pamphlets explaining the basics of Bayesian reasoning for people who hate math, for instance; that could be done visually easily enough. A worked example of the kind of content such a pamphlet might contain follows below.)
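As a concrete illustration of what such a pamphlet could present (the disease-testing numbers below are invented for this example, not drawn from any real study), here is Bayes' rule worked through in a minimal Python sketch:

```python
# Hypothetical numbers, for illustration only: a condition with 1%
# prevalence, and a test that detects 90% of true cases but also
# false-alarms on 9% of healthy people.
prevalence = 0.01           # P(condition)
true_positive_rate = 0.90   # P(positive test | condition)
false_positive_rate = 0.09  # P(positive test | no condition)

# Bayes' rule: P(condition | positive) =
#     P(positive | condition) * P(condition) / P(positive)
p_positive = (true_positive_rate * prevalence
              + false_positive_rate * (1 - prevalence))
posterior = true_positive_rate * prevalence / p_positive

print(f"P(condition | positive test) = {posterior:.1%}")  # about 9.2%
```

The punchline (a positive result still means only a roughly 9% chance of having the condition, because healthy people vastly outnumber sick ones) is exactly the kind of counterintuitive result that lends itself to a visual, natural-frequency presentation for math-averse readers.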
We are not bottlenecked by money, thanks to the several billionaires who've joined the effort. We are somewhat bottlenecked by our ability to distribute that money to the right places, which is not helped by footsoldiers who don't have good models (and is in fact harmed if they make it easier for vultures to get easy money and spam the ecosystem).
We don't have useful things for politicians to do currently, especially not ones which are remotely close to the Overton window. This might change, so in some futures it might be useful to have political support, but most of the things politicians could do now would be counterproductive, so getting uninformed support is not a clear win.
3 is a fair point, and a reason for scaling up targeted outreach towards high-potential people (but only weakly a reason for mass popularization).
Rationality is a harder sell than x-risk, and a less direct route to getting people who help. Raising the sanity waterline is good, but it is so far from where it would need to be to matter that... yeah, I don't hold much hope of civilization waking up to this, even if some subgroups do.
And, most importantly, an ecosystem full of footsoldiers imposes massive costs on our ability to make good collective decisions.
Well then, how do we efficiently find people who offer high expected benefit to the project of AI safety? In fact, how can expected benefit even be measured? It would be great to have criteria, and testing methods, for measuring how likely any given person is to advance the field (from the perspective of the consensus of researchers), so that effort can be efficiently allocated. (Presumably, some notion of what those criteria are already exists and I simply don't know it.)
Also, suppose it were possible to construct systems for “incidentally” inducing people to be more rational without them having to explicitly optimize themselves for it or even necessarily realize it was happening—just as a side effect of using the system. Would you consider it worthwhile for someone to invest time and effort in developing “rationality-catalyzing tools” that can be used by the general public?
An already existing example, albeit a bulky one that is hard to create, is a prediction market: it provides incentives for learning to update one's beliefs rationally (a sketch of the underlying scoring incentive follows below). Perhaps a better example is that app whose name I forget, which purports to show a range of news reports about any given event with their suspected biases visibly marked, to encourage and enable people to compare multiple perspectives on reality. I am interested in trying to find ways of making more and better such tools, or, ideally (the holy grail!), a rationality-catalyzing social media platform to counter the epistemically corrosive ones currently ascendant. (That would be, of course, quite a moonshot, but I've been thinking about it for years, and intending for weeks to try to write a post summarizing my ideas thus far.)
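To make the prediction-market point concrete: the incentive comes from proper scoring rules, under which a forecaster's expected score is maximized only by reporting their honest probability. A minimal sketch using the logarithmic score (one standard choice; nothing here is specific to any particular platform):

```python
import math

def log_score(reported_p: float, outcome: bool) -> float:
    """Logarithmic scoring rule: higher is better (scores are negative)."""
    return math.log(reported_p if outcome else 1.0 - reported_p)

def expected_score(reported_p: float, true_p: float) -> float:
    """Expected log score if the event truly occurs with probability true_p."""
    return (true_p * log_score(reported_p, True)
            + (1 - true_p) * log_score(reported_p, False))

# If your honest belief is 70%, reporting it beats exaggerating to 90%:
print(expected_score(0.7, true_p=0.7))  # approx -0.611
print(expected_score(0.9, true_p=0.7))  # approx -0.764
```

Because overstating or understating your belief strictly lowers your expected score, repeated exposure to such an incentive nudges users toward honest, calibrated beliefs, which is exactly the "rationality-catalyzing" side effect in question.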
A high g factor seems to be the key thing we're attempting to select for, along with other generally helpful traits like conscientiousness, altruism, personableness, and relevant domain-specific skills.
Rationality-catalyzing tools could be very beneficial if successful. If your internal yum-meter points towards that being the most engaging thing for you, it seems a reasonable path (and it will be a good way to grow even if the moonshot part does not land).