I personally think the probability of success would be maximized if we first contacted high-status members of the rationalist community, got them on board with this plan, and asked them to contact Scott Aaronson, as well as professors who would be willing to reach out to other professors.
The link to Scott Aaronson’s blog says he would provisionally be willing to take a year-long leave of absence from his job to work on alignment full-time for $500k. I believe EA has enough money to fund that if it were deemed worthwhile. I think the chance of success would be greatest if we contacted Eliezer and/or whoever is in charge of the relevant funds, asked them to make Scott a formal offer, and then sent Scott an email with the offer and an invitation to talk to somebody working on alignment (perhaps Paul Christiano, his former student) about what kinds of problems they think are worth working on.
I think even with the perfect email from most members of this community, the chances that e.g. Terry Tao reads it, takes it seriously, and decides to work on alignment are not very good, because the sender's credibility is hard to verify. Institutional affiliation at least partly remedies this, so I think it would be preferable for the email to come from another professor who tries to convince them directly.
I think cold-emailing Jacob Steinhardt, Robin Hanson, etc. and asking them to email other academics would have a better chance of succeeding, since they do participate on this forum. Even so, people tend to pay more attention to the views of those closer to them. My impression is that Eliezer and other high-ranking members of the rationalist community have closer connections to these alignment-interested professors (and know many more such professors), and could more successfully convince them to reach out to their colleagues about AI safety.
I don’t mean to suggest that these less-direct routes are necessarily better in every case. If, for instance, Eliezer is not willing to talk to Jacob about this, it might be better to contact Jacob than to do nothing; if you cannot reach Jacob by any method, it might be better to contact Tao directly than to do nothing. I only wish to say that it is probably worth attempting these more established channels before reaching out personally.
I also think many academics may be averse to contacting their colleagues about AI safety, since doing so could carry a risk to their academic reputation. So it is worth keeping in mind that the chance of success here may not be very high.
Finally, thank you again for the original post—I think it is important.