I wonder what it would take to bring Terence Tao on board…
At any rate, this is good news: the more high-status people in academia take Alignment seriously, the easier it becomes to convince the next one, in what I hope is a virtuous cycle!
I always assumed that “Why don’t we give Terence Tao a million dollars to work on AGI alignment?” was using Tao to refer to a class of people. Your comment implies that it would be especially valuable for Tao specifically to work on it.
Why should we believe that Tao would be especially likely to be able to make progress on AGI alignment (e.g. compared to other recent Fields Medal winners like Peter Scholze)?
I’ve also been perplexed by the focus on Tao in particular. In fact, I’ve long thought that if it’s a good idea to recruit a top mathematician to alignment, then Peter Scholze would be a better choice, since:
- he’s probably the greatest active mathematician;
- he’s built his career out of paradigmatizing pre-paradigmatic areas of math;
- he has an interest in computer proof-checking.
That said, I’m quite confident that Scholze is too busy revolutionizing everything he touches in mathematics to be interested in switching to alignment, so this is all moot.
(Also, I recognize that playing the “which one mathematician would be the single best to recruit to alignment?” game is not actually particularly useful, but it’s been a pet peeve of mine for a while that Tao is the poster child of the push to recruit a mathematician, hence this comment.)
Thanks, I’ve added him to my list of people to contact. If someone else wants to do it instead, reply to this comment so that we don’t interfere with each other.
I’ve always used “Tao” to mean “brilliant mathematicians”, but I also think he has surprisingly eclectic research interests; in particular, he has done significant work in image processing, which shows a willingness to work on applied mathematics and may be relevant for AI work.
I must say, however, that I’ve changed my mind on this issue: AI alignment research would be better served by hiring a shit-ton of PhD students, with a promise of giving 80% of them 3-5 year short-term research positions after their PhD, and giving 80% of those tenure afterward. I think we made a mistake by assuming that pre-paradigmatic research means only geniuses are useful, and that a pure-numbers strategy would help a lot. (Also, genius mathematicians willing to work on interesting problems are not that interested in money, otherwise they would work in finance, but they are very much interested in getting a stable, permanent position before 35. The fact that France is still relevant in mathematical research while underpaying its researchers by a lot is proof of that.)
Hear, hear!
IIRC you were looking into these ideas more seriously; any progress?
No.
We should send somebody or somebodies to the Heidelberg Laureate Forum. High EV.
> I always assumed that “Why don’t we give Terence Tao a million dollars to work on AGI alignment?” was using Tao to refer to a class of people.

When I’ve talked about this, I’ve always meant both: literally hire Tao, and try to find young people of the same ability.
While I too was using Tao as a reference class, that’s not the only reason for mentioning him. I expect that people with IQs that ridiculously high are simply better suited to tackling novel topics, and I do mean novel: building a field from scratch, ideally with mathematical precision.
All the more so if they have a proven track record, especially in mathematics, and I suspect that if Tao could be convinced to work on the problem, he would have genuinely significant insights. That, and a cheerleader effect, which wouldn’t be necessary in an ideal world, but that’s hardly the one we live in, is it?
> Why should we believe that Tao would be especially likely to be able to make progress on AGI alignment (e.g. compared to other recent Fields Medal winners like Peter Scholze)?

Well, his name is alliterative, so there’s that.
(I’m being glib here, but I agree that there’s a much broader class of people who have a similar level of brilliance to Tao, but less name recognition, who could contribute quite a lot if they were to work on the problem.)
That’s a great point! It’ll also help with communicating the difficulty of the problem if they conclude that the field is in trouble and time is running out (in case that’s true – experts disagree here). I think AI strategy people should consider trying to get more ambassadors on board. (I think I see the ambassador effect as more important now than those people’s direct contributions, but you definitely only want ambassadors whose understanding of AI risk is crystal clear.)
Edit: That said, bringing in reputable people from outside ML may not be a good strategy to convince opinion leaders within ML, so this could backfire.
There are already people taking care of that; see this question I asked recently.