Hi, long-time lurker but first-time poster with a background in math here. I personally agree that it would be a good idea if we were to at least try to get some extremely talented mathematicians to think about alignment. Even if they decide not to, it might still be interesting to see what kinds of objections they have to working on it (e.g. is it because they think it’s futile and doomed to failure, because they think AGI is never going to happen, because they think alignment will not be an issue, because they feel they have nothing to contribute, or because it’s not technically interesting enough?).
However, I would also like to second TekhneMakre’s concerns about the format and content of the email. If you sample some comments on posts on Terry Tao’s blog, you will find a number of commenters who would probably best be described as cranks, indefatigably trying to convince Terry that their theories are worth paying attention to, that he is not spending his time wisely, etc. He (sensibly) ignores these comments, and has probably learned for the sake of sanity not to engage with anyone who seems to fit this bill. I am concerned that the email outlined in your post will set off the same response and thus be ignored. AI safety is still a rather fringe idea amongst academics, at least partly because it is speculative and lacks concreteness. It took me years as an academic-adjacent person to become even somewhat convinced that it could be a problem (I am still not totally convinced, but I am convinced it is at least worth looking into). I do not think an email appealing to emotion and anecdote is likely to convince someone from that background who is encountering this problem.
I have three alternative suggestions; I’m not sure how good they are, so take them each with a grain of salt:
Firstly, note that Scott Aaronson has said here https://scottaaronson.blog/?p=6288#comment-1928043 that he would provisionally be willing to think about alignment for a year. This would have several advantages: (1) he has already signaled interest, provisionally, so it would be easier to convince him that the problem might be worth working on; (2) he is already acquainted with many of the arguments for taking AGI seriously, so he could start working on the problem sooner; (3) he is well acquainted with the rationalist community and so would not be put off by rationalist norms or affiliated ideas such as EA (which I believe accounts for the skepticism of at least some academics); (4) Scott’s area of work is CS theory, which seems more relevant to alignment than Tao’s fields of interest.
Secondly, there are some academics who take AI safety arguments seriously. Jacob Steinhardt comes to mind, but I’m sure there are a decent number of others, especially given recent progress in AI. If these academics were to contact other top academics asking them to consider working on AI safety, the request would come across as much more credible. They would also know how to frame the problem in such a way as to pique the interest of top mathematicians and computer scientists.
Thirdly, note that there are many academics who are open to working on big policy problems that do not directly concern their primary research interests. Terry Tao, I believe, is one of them, as evidenced by https://newsroom.ucla.edu/dept/faculty/professor-terence-tao-named-to-president-bidens-presidents-council-of-advisors-on-science-and-technology . I’m not sure to what extent this is an easier problem or a desirable course of action, but if you could convince some people in politics that this problem is worth taking seriously, it is possible that the government might directly ask these scientists to think about it.
This last point is not a suggestion, but I would like to add one note. Eliezer claims that he was told that you cannot pay top mathematicians to work on problems. I believe this is not entirely true. There are many examples of very talented professors and PhD students leaving academia to work at hedge funds. One example is Abhinav Kumar, who a few years ago was a coauthor on the paper solving the long-open problem of optimal sphere packing in 24 dimensions. He left an associate professorship at MIT to work at Renaissance Technologies (a hedge fund). Not exactly in the same vein, but Huawei has recruited four Fields medalists to work with them (see https://www.ihes.fr/en/laurent-lafforgue-en/ for one example), although I’m not certain whether they are working on applied problems. I cannot say whether money was a motivating factor in any given case, but there are more examples like this, and I think it is fair to say that a substantial fraction of the people involved might well have been motivated at least partly by money.
Seems that I wasn’t the only person to notice Scott’s comment on his blog :) He’s just announced that he’ll be working on alignment at OpenAI for a year: https://scottaaronson.blog/?p=6484
Oh wow, I didn’t realise how recent the Huawei recruitment of Fields medalists was! This is from today. Maybe we need to convince Huawei to care about AGI alignment :)
Then do you think I should contact Jacob Steinhardt to ask him what I should write to interest Tao and avoid seeming like a crank?
There isn’t much I can do about SA other than telling him to work on the problem in his free time.
Unless something extraordinary happens, I’m definitely not contacting anyone in politics. Politicians being interested in AGI is a nightmarish scenario, and that news about Huawei doesn’t help my paranoia about the issue.
I personally think the probability of success would be maximized if we were to first contact high-status members of the rationalist community, get them on board with this plan, and ask them to contact Scott Aaronson, as well as professors who would be willing to reach out to other professors.
The link to Scott Aaronson’s blog says he would provisionally be willing to take a leave of absence from his job to work on alignment full-time for a year for $500k. I believe EA has enough funding to cover that if it were deemed worthwhile. I think the chance of success would be greatest if we contacted Eliezer and/or whoever is in charge of funding, asked them to make Scott a formal offer, and sent Scott an email with the offer and an invitation to talk to somebody working on alignment (maybe Paul Christiano, his former student) about what kinds of things are worth working on.
I think that even with the perfect email from most members of this community, the chances that e.g. Terry Tao reads it, takes it seriously, and works on alignment are not very good, due to the sender’s lack of easily verifiable credibility. Institutional affiliation at least partly remedies this, so I think it would be preferable if the email came from another professor who tried to convince them directly.
I think cold-emailing Jacob Steinhardt, Robin Hanson, etc. and asking them to email other academics would have a better chance of succeeding, given that they do participate on this forum. However, even here, I think people are inclined to pay more attention to the views of those closer to them. My impression is that Eliezer and other high-status members of the rationalist community have closer connections to these alignment-interested professors (and know many more such professors) and could more successfully convince them to reach out to their colleagues about AI safety.
I don’t mean to suggest that these less direct routes are always better. If, for instance, Eliezer is not willing to talk to Jacob about this, then it might be better to contact Jacob than to do nothing. If you are not able to reach Jacob by any method, it might be better to contact Tao directly than to do nothing. I guess I only wish to say that you might want to attempt these more established channels before reaching out personally.
I also think many academics may be averse to contacting their colleagues about AI safety as it may come with a risk to their academic reputation. So I think it is worth keeping in mind that the chance of succeeding at this may not be very high.
Finally, thank you again for the original post—I think it is important.