What do you think about unironically hiring Terry Tao?
We’d absolutely pay him if he showed up and said he wanted to work on the problem. Every time I’ve asked about trying anything like this, all the advisors claim that you cannot pay people at the Terry Tao level to work on problems that don’t interest them. We have already extensively verified that it doesn’t particularly work for, e.g., university professors.
As I am sure you would agree, Neumann/Tao-level people are a very different breed from even very, very, very good professors. It is plausible they are significantly more sane than the average genius.
Given the enormous glut of money in EA trying to help here and the terrifying thing where a lot of the people who matter have really short timelines, I think it is worth testing this empirically with Tao himself and Tao-level people.
It is worth noting that Neumann occasionally did contract work for extraordinary sums.
Von Neumann wanted to nuke eastern Europe.
I’m not sure whether the unspoken context of this comment is “We tried to hire Terry Tao and he declined, citing lack of interest in AI alignment” vs. “We assume, based on not having been contacted by Terry Tao, that he is not interested in AI alignment.”
If the latter: the implicit assumption seems to be that if Terry Tao would find AI alignment to be an interesting project, we should strongly expect him to both know about it and have approached MIRI regarding it, neither of which seems particularly likely given the low public profile of both AI alignment in general and MIRI in particular.
If the former: bummer.
You’re probably already aware of this, but just in case not:
Demis Hassabis said the following about getting Terence Tao to work on AI safety:
“I always imagine that as we got closer to the sort of gray zone that you were talking about earlier, the best thing to do might be to pause the pushing of the performance of these systems so that you can analyze down to minute detail exactly, and maybe even prove things mathematically about the system, so that you know the limits, and otherwise, of the systems that you’re building. At that point I think all the world’s greatest minds should probably be thinking about this problem. So that was what I would be advocating to, you know, the Terence Taos of this world, the best mathematicians. Actually I’ve even talked to him about this: I know you’re working on the Riemann hypothesis or something, which is the best thing in mathematics, but actually this is more pressing. I have this sort of idea of almost an ‘Avengers assembled’ of the scientific world, because that’s a bit of my dream.”
The header image of Tao’s blog is a graph representing “flattening the curve” of the Covid-19 spread. One avenue for convincing elite talent that alignment is a problem is a media campaign that brings the problem of alignment into popular consciousness.
I have some ideas about how this might begin. “Educational” YouTuber CGP Grey (5.2M subscribers) got talked into making a pair of videos advocating for anti-aging research by another large YouTuber, Kurzgesagt (18M subscribers). I’d bet that they could both be persuaded into making AI alignment videos.
Can you clarify what the term “Pascal-mugged” means in your comment?
From what I can tell, the main reason why CGP Grey made those videos was because he’s had a long-running desire to live to see the future. He’s talked about it in his podcast with Brady Haran and it’s hinted at in some of his older videos. I don’t think there was much more to it than that.
As for Kurzgesagt, I believe it was coordinated with Keith Comito, the president of lifespan.io. As for why they suddenly decided to coordinate with lifespan.io, I have little idea. However, since their video was launched in conjunction with CGP Grey’s, it’s possible that CGP Grey was the first one to bring it up, after which Kurzgesagt reached out to people who could help.
Edited, poor choice.
I think they discuss it around here.
I think CGP Grey recommended Bostrom’s Superintelligence in a podcast once.
Edit: Source
Grey’s 2014 video Humans Need Not Apply, about humans being automated out of the economy, was the first introduction for me and probably lots of other people to the idea that AI might cause problems. I’m sure he’d be up for making a video about alignment.
What do you think about trying to actually interest him?
What do you think about ironically hiring Terry Tao?
I don’t have any such advice at the moment. It’s not clear to me what makes a difference at this point.
Not even an “In 90% of possible worlds, we’re irreversibly doomed, but in the remaining 10%, here’s the advice that would work”?
:(