This has been discussed several times in the past, see:
Have You Tried Hiring People?, a LW post
It talks about this ACX comment thread
Greg Coulbourn’s “Mega-money for mega-smart people to solve AGI Alignment”
Comments arguing for Terence Tao specifically
But I’m not aware of anyone that has actually even tried to do something like this.
Of special interest is this comment by Eliezer about Tao:
We’d absolutely pay him if he showed up and said he wanted to work on the problem. Every time I’ve asked about trying anything like this, all the advisors claim that you cannot pay people at the Terry Tao level to work on problems that don’t interest them. We have already extensively verified that it doesn’t particularly work for eg university professors.
So if anyone has contacted him or people like him (instead of regular college professors), I’d like to know how that went.
Otherwise, especially for people that aren’t merely pessimistic but measure success probability in log-odds, sending that email is a low cost action that we should definitely try.
So you (whoever is reading this) have until June 23rd to convince me that I shouldn’t send this to his @math.ucla.edu address:
Edit: I’ve been informed that someone with much better chances of success will be trying to contact him soon, so the priority now is to convince Demis Hassabis (see further below) and to find other similarly talented people.
Title: Have you considered working on AI alignment?
Body:
It is not once but twice that I have heard leaders of AI research orgs say they want you to work on AI alignment. Demis Hassabis said on a podcast that when we near AGI (i.e. when we no longer have time) he would want to assemble a team with you on it but, I quote, “I didn’t quite tell him the full plan of that”. And Eliezer Yudkowsky of MIRI (contact@intelligence.org) said in an online comment “We’d absolutely pay him if he showed up and said he wanted to work on the problem. Every time I’ve asked about trying anything like this, all the advisors claim that you cannot pay people at the Terry Tao level to work on problems that don’t interest them.”, so he didn’t even send you an email. I know that to you it isn’t the most interesting problem to think about[1], but it really, actually is a very very important and urgent completely open problem. It isn’t simply a theoretical concern: if Demis’s predictions of 10 to 20 years to AGI are anywhere near correct, it will deeply affect you and your family (and everyone else).
If you are ever interested you can start by reading the pages linked in EA Cambridge’s AGI Safety Fundamentals course or the Alignment Forum.
Best of luck,
P.
You can do any of:
Reporting your past results.
Convincing me that this is a net negative in expectation.
The worst thing I can think of that can realistically happen is this leading to something like the Einstein-Szilard letter. But considering that Elon Musk has already tried to warn governments, I don’t think it would change much.
Arguing that it is important to wait until we have the results of the AI Safety Arguments Competition or something similar. I currently don’t think so, he should be convinced for the same reasons we are convinced.
Suggesting changes to the email or a new email altogether. If you think it is terrible, say so!
He declines many kinds of email requests, see points 5 and 12 here.
If you have more social capital than me (e.g. if you know him) or you work at an alignment organisation, volunteering to send the email yourself. He could think “If they actually care about hiring me, why aren’t they contacting me directly?”.
Saying under what conditions your organisation would be willing to hire him, so I can add it to the email.
What you probably shouldn’t do is to send your own email without telling the rest of us. His attention is a limited resource and bothering him with many different emails might reduce his sympathy for the cause.
And other than him, how many people do you think have a comparable chance of solving the problem or making significant progress? And how do we identify them? By the number of citations? Prizes won? I would like to have a list like that along with conditions under which each alignment org would hire each person. The probability of convincing Tao might be low, but with, say, 100 people like him the chances of finding someone might be decent.
I’m pretty sure that most of them haven’t heard about alignment, or have and just discarded it as something not worth thinking about. I don’t think this means that they couldn’t do great alignment work if they tried; maybe getting them to seriously think about the problem at all is the hard part, and after that their genius simply generalises to this new area.
Relatedly, does anyone here know why Demis Hassabis isn’t assembling his dream team right now? The same as above applies, but until the 1st of July (previously June 23rd):
Title: Are you sure you should wait before creating your dream alignment team?
Body:
On S2, Ep9 of the DeepMind podcast you said that when we get close to AGI we should pause pushing the performance of AI systems to guarantee they are safe. What do you think the timeline would be like in that scenario? When we get close, while DeepMind and maybe some other teams might pause development, everyone else will keep working as fast as possible to get to the finish line, and all else equal whoever devotes fewer resources to non-capabilities work will get there first. Creating AGI is already a formidable task, but we at least have formalisms like AIXI that can serve as a guide by telling us how we could achieve general intelligence given unlimited computing power. For alignment we have no such thing: existing IRL algorithms couldn’t learn human values even with direct IO access and unlimited computing power, and then there is the inner alignment problem. If we don’t start working on the theoretical basis of alignment now, we won’t have aligned systems when the time comes.
This should be obvious to him, but just in case.
Thank you for posting about this here so that you can get feedback, and so that other people can know how much people are doing this sort of thing (and by the same token it could be good for people who’ve already done this sort of thing to say so).
I have a bit of a sinking feeling reading your draft; I’ll try to say concrete things about it, but I don’t think I’ll capture all of what’s behind the feeling. I think part of the feeling is about, this just won’t work.
Part of it is like, the email seems to come from a mindset that doesn’t give weight to curiosity and serious investigation (what Tao does with his time).
I think there’s a sort of violence or pushiness here that’s anti-helpful. It doesn’t acknowledge that Tao doesn’t have good reason to trust your judgements about what’s “very very important and urgent”, and people who go around telling other people what things are “very very important and urgent” in contexts without trust in judgement are often trying to coerce people into doing things they don’t want to do. It doesn’t acknowledge that people aren’t utility maximizing machines, but instead have momentum and joy and specialization and context. (Not to say that Tao doesn’t deserve to be informed about the consequences of events happening in the world and his possible effect on those consequences, and not to say that stating beliefs is bad, and not to say that Tao might just be curious and learn about AI risk if it’s shown to him in the right way.)
Another thing is the sources recommendation. The links should be to technical arguments about problems in AI alignment and technical descriptions of the overall problem, the sort of thing X-risk and AI-risk thinkers say to each other, not to material prepared with introductoriness in mind.
This is kind of weird and pushy. On the face of it, it looks like either you’re confused and think that big high status people to you are also big high status people to Tao and therefore should be able to give him orders about what to work on, and that Tao is even the sort of entity that takes orders; or at least, it looks like you yourself are trying to take orders from big high status people, propagating perceived urgency from them to whoever else, without regard to individual agents’ local/private information about what’s good for them to do. Like, it looks like you got scared, flailed and grasped for whatever the high status people said they think might be cool, and then wanted to push that. (I’m being blunt here, but to be clear, if something like this is happening, that’s very empathizable-with; I don’t think “you’re bad” or anything like that, and doing stuff that seems like it would have good consequences is generally good.)
This is sort of absurd: 1. if Tao were interested, he could likely have lots of conversations with competent AI alignment thinkers, which would be a much better use of his time, and 2. frankly, it seems like you’re posturing as someone giving orders to Tao.
I agree with TekhneMakre...it comes across like an average looking unconfident person asking out a gorgeous celeb. Probably a friend approaching him is best, but an email can’t hurt. I would get a few people together to work on it...my approach would be to represent truly who we are as a motivated group of people that has the desire to write this email to him by saying something like, “There’s a great forum of AI interested and concerned folks that we are a part of, many of us on the younger side, and we fear for the future of humanity from misaligned AI and we look to people like you Dr. Tao as being the kind of gifted person we hope could become involved early in helping guide AI in the right directions that would keep us all safe. We are younger and up and coming, so we don’t know how to appeal to what interests you, so we’re just laying it out there so you can know there are thousands of us and we’re hoping to create a conversation with you and your high level peers to drive some energy in this direction and maybe your direct involvement. Thanks.”
I was trying to rely on Tao’s trust in Demis’s judgement, since he is an AI researcher. Mentioning Eliezer is mainly so he has someone to contact if he wants to get hired.
I wanted his thinking to be “this competent entity has spent some of his computational resources verifying that it is important to solve this problem, and now that I’m reminded of that I should also throw mine at it”.
Is he truly mostly interested in what he considers to be mentally stimulating? Not in improving the world, or in social nonsense, or guaranteeing that his family is completely safe from all threats?
Then was including this link a bad idea? It gives examples of areas a mathematician might find interesting. And if not that, then what should I say? I’ve got nothing better. Do you know any technical introduction to alignment that he might like?
And about getting him to talk to other people, if anyone volunteers just DM me your contact information so that I can include it in the email (or reply directly if you don’t care about it being public). I mean, what else could I do?
If you plan to rewrite that letter with a less pushy tone (I agree 100% with the comment from TekhneMakre) I think it might be useful if you try to change the framing of the problem a bit. Imagine that a random guy is writing to you instead, and he is telling you to work on deflecting possible meteorites heading for Earth. What sort of email would compel you to reply?
I’ll rewrite it, but I can’t just model other people after me. If I were writing it for someone like myself, it would be a concise explanation of the main argument to make me want to spend time thinking about it, followed by a more detailed explanation or links to further reading. As long as it isn’t mean, I don’t think I would care whether it’s giving me orders, begging for help or giving me information without asking for anything at all. But he at least already knows that unaligned AIs are a problem; I can only remind him of that, link to reading material or say that other people also think he should work on it.
But now the priority of that is lower, see the edit to the post. Do you think that the email to Demis Hassabis has similar problems or that it should stay like it is now?
Does the stuff about pushiness make sense to you? What do you think of it? I think as is, the letter, if Tao reads it, would be mildly harmful, for the reasons described by other commenters.
I think I get it, but even if I didn’t now I know that’s how it sounds, and I think I know how to improve it. That will be for other mathematicians though (at least Maxim Kontsevich), see the edit to the post. Does the tone in the email to Demis seem like the right one to you?
In terms of tone it seems considerably less bad. I definitely like it more than the other one because it seems to make arguments rather than give social cues. It might be improved by adding links giving technical descriptions about the terms you use (e.g. inner alignment (Hubinger’s paper), IRL (maybe a Russell paper on CIRL)). I still don’t think it would work, simply because I would guess Hassabis gets a lot of email from randos who are confused and the email doesn’t seem to distinguish you from that (this may be totally unfair to you, and I’m not saying it’s correct or not, it’s just what I expect to happen). I also feel nervous about talking about arms races like that, enforcing a narrative where they’re not only real but the default (this is an awkward thing to think because it sounds like I’m trying to manage Hassabis’s social environment deceptively, and usually I would think that worrying about “reinforcing narratives” isn’t a main thing to worry about and instead one should just say what one thinks, but, still my instincts say to worry about that here, which might be incorrect).
I think that to a non-trivial extent, we have a limited supply of such efforts. The more times Terry has been contacted about this, the less likely he is to respond positively.
And to some extent I suppose this is true about reaching out to such figures more generally. Ie. maybe word gets out that we’ve been doing such outreach and by the time we contact John Doe, even if it’s our first time contacting John Doe, we may have exhausted our supply of John Doe’s patience.
So then, I don’t think such an action is as low cost as it may seem. It costs more than the time it takes to write the email.
What makes more sense to me is to try to traverse through social networks and reach him that way. Figure out which nodes are close to him who he listens to. Note that they might be bloggers like Scott Alexander or someone like Dan Luu. From there think about which of those nodes make sense to pursue. Maybe one, maybe multiple. Then backtrack and think about how we can utilize our current connections to reach those nodes.
I also think it’d be worth brainstorming more creative solutions with a bunch of yoda timers. I’ll try one right now.
Billboard ad.
Well put together video.
Figure out what sort of people have the skillset for this and pay them as consultants.
Be like Hermione and read books to educate ourselves first. Especially because it signals competence. Signals of incompetence could make someone like Terry be a lot more likely to reject us.
Research more directly whether stuff like this has been done before. Talk to people who have done it to see what advice they have.
Put a bounty on it. $50k to get a response from him.
Attach money to it. $50k to Terry to sit down and discuss this.
Start with (way) less prestigious people. Presumably they’re easier to reach and perhaps to convince. Then with a bunch of them working on it, higher prestige people would start to notice and be more easily convinced.
Some of these ideas seem pretty solid. My sense is that the best path forward is:
More brainstorming from the community.
Get in touch with various organizations (MIRI, CEA...) to see where they are at with this stuff.
Education. Figure out what academic fields/topics are relevant. Learn about them. Probably do some sort of write-up. Nothing too crazy, but I think the low hanging fruits should be addressed.
Decide on a path forward. I suspect that the initiatives should come from an organization like MIRI or CEA, because I assume people like Terry would be more likely to respond to representatives of decently prestigious organizations.
This is a little rambly. Sorry. I’ll end here.
These are all good ideas, but I also think it’s important not to Chesterton’s Fence too hard. A lot of passionate people avoid doing alignment stuff because they assume it’s already been considered and decided against, even though the field doesn’t have that many people and much of its cultural capital is new.
Be serious, and deliberate, and make sure you’re giving it the best shot if this is the only shot we have, but most importantly, actually do it. There are not many other people trying.
Thanks for saying that. I think I needed to hear it.
These are a lot of good ideas. As I commented above, I think a good approach is to truly represent that we are a bunch of younger people who fear for the future...this would appeal to a lot of folks at his level, to know the kids are scared and need his help.
Hi, long-time lurker but first-time poster with a background in math here. I personally agree that it would be a good idea if we were to at least try to get some extremely talented mathematicians to think about alignment. Even if they decide not to, it might still be interesting to see what kinds of objections they have to working on it (e.g. is it because they think it’s futile and doomed to failure, because they think AGI is never going to happen, because they think alignment will not be an issue, because they feel they have nothing to contribute, or because it’s not technically interesting enough?).
However, I would also like to second TekhneMakre’s concerns about the format and content of the email. If you sample some comments on posts on Terry Tao’s blog, you will find that there are a number of commenters who would probably best be described as cranks who indefatigably try to convince Terry that their theories are worth paying attention to, that Terry is currently not wisely spending his time, etc. He (sensibly) ignores these comments, and has probably learned for the sake of sanity not to engage with anyone who seems to fit this bill. I am concerned that the email outlined in your post will set off the same response and thus be ignored. AI safety is still a rather fringe idea amongst academics, at least partly because it is speculative and lacking concreteness. It took me years as an academic-adjacent person to be even somewhat convinced that it could be a problem (I still am not totally convinced, but I am convinced it is at least worth looking into). I do not think an email appealing to emotion and anecdotes is likely to convince someone from that background encountering this problem.
I have three alternative suggestions; I’m not sure how good they are, so take them each with a grain of salt:
Firstly, note that Scott Aaronson has said here https://scottaaronson.blog/?p=6288#comment-1928043 that he would provisionally be willing to think about alignment for a year. This seems like it would have several advantages: (1) he has already signaled interest, provisionally, so it would be easier to convince him that it might be worth working on; (2) he is already acquainted with many of the arguments for taking AGI seriously, so could start working on the problem more immediately; (3) he is well acquainted with the rationalist community and so would not be put off by rationalist norms or affiliated ideas such as EA (which I believe accounts for the skepticism of at least some academics); (4) Scott’s area of work is CS theory, which seems like it would be more relevant to alignment than Tao’s fields of interest.
Secondly, there are some academics who take AI safety arguments seriously. Jacob Steinhardt comes to mind, but I’m sure there are a decent number of others, especially given recent progress on AI. If these academics were to contact other top academics asking them to consider working on AI safety, the request would come across as much more credible. They would also know how to frame the problem in such a way to pique the interest of top mathematicians/computer scientists.
Thirdly, note that there are many academics who are open to working on big policy problems that do not directly concern their primary research interests. Terry Tao, I believe, is one of them, as evidenced by https://newsroom.ucla.edu/dept/faculty/professor-terence-tao-named-to-president-bidens-presidents-council-of-advisors-on-science-and-technology . I’m not sure to what extent this is an easier problem or a desirable course of action, but if you could convince some people in politics that this problem is worth taking seriously, it is possible that the government might directly ask these scientists to think about it.
This last point is not a suggestion, but I would like to add one note. Eliezer claims that he was told that you cannot pay top mathematicians to work on problems. I believe this is somewhat false. There are many examples of very talented professors and PhD students leaving academia to work at hedge funds. One example is Abhinav Kumar, who a few years ago was one of the coauthors of a paper solving the long-open problem of optimal sphere packing in 24 dimensions. He left an Associate Professorship at MIT to work at Renaissance Technologies (a hedge fund). Not exactly in the same vein, but Huawei has recruited 4 Fields medalists to work with them (e.g. see https://www.ihes.fr/en/laurent-lafforgue-en/ for one example), although I’m not certain whether they are working on applied problems. I cannot say whether money is a motivating factor in any given one of these cases, but there are more examples like this, and I think it is fair to say that at least some substantial fraction of all such people involved might have been motivated at least partly by money.
Seems that I wasn’t the only person to notice Scott’s comment on his blog :) He’s just announced that he’ll be working on alignment at OpenAI for a year: https://scottaaronson.blog/?p=6484
Oh wow, I didn’t realise how recent the Huawei recruitment of Fields medalists was! This is from today. Maybe we need to convince Huawei to care about AGI Alignment :)
Then do you think I should contact Jacob Steinhardt to ask him what I should write to interest Tao and avoid seeming like a crank?
There isn’t much I can do about SA other than telling him to work on the problem in his free time.
Unless something extraordinary happens I’m definitely not contacting anyone in politics. Politicians being interested in AGI is a nightmarish scenario, and that news about Huawei doesn’t help my paranoia about the issue.
I personally think the probability of success would be maximized if we were to first contact high-status members of the rationalist community, get them on board with this plan, and ask them to contact Scott Aaronson as well as contact professors who would be willing to contact other professors.
The link to Scott Aaronson’s blog says he provisionally would be willing to take a leave of absence from his job to work on alignment full-time for a year for $500k. I believe EA has enough funds that they could fund that if they deemed it to be worthwhile. I think the chance of success would be greatest if we contacted Eliezer and/or whoever is in charge of funds, asked them to make Scott a formal offer, and sent Scott an email with the offer and an invitation to talk to somebody (maybe Paul Christiano, his former student) working on alignment to see what kinds of things they think are worth working on.
I think even with the perfect email from most members of this community, the chances that e.g. Terry Tao reads it, takes it seriously, and works on alignment are not very good, due to lack of easily verifiable credibility of the sender. Institutional affiliation at least partly remedies this, and so I think it would be preferable if an email came from another professor who directly tried to convince them.
I think cold-emailing Jacob Steinhardt/Robin Hanson/etc. asking them to email other academics would have a better chance of succeeding given that the former indeed participate on this forum. However, even here, I think people are inclined to pay more attention to the views of those closer to them. My impression is that Eliezer and other high-ranked members of the rationalist community have closer connections to these alignment-interested professors (and know many more such professors) and could more successfully convince them to reach out to their colleagues about AI safety.
I don’t mean to suggest that these less-direct ways are necessarily better. If for instance Eliezer is not willing to talk to Jacob about this, then it might be better to contact Jacob than to do nothing. If you are not able to reach Jacob by any method, it might be better to contact Tao directly than to do nothing. I guess I only wish to say that you might want to attempt these more established channels before reaching out personally.
I also think many academics may be averse to contacting their colleagues about AI safety as it may come with a risk to their academic reputation. So I think it is worth keeping in mind that the chance of succeeding at this may not be very high.
Finally, thank you again for the original post—I think it is important.
My concern is less your email, and more the precedent. Having the rationality community model and encourage obviously undesired forms of contact with high-prestige figures seems like it could lead to intrusions of privacy. One person sending an email is ignorable. If emails, phone calls, unsolicited office visits, etc. start piling up under the banner of “AI risk,” it could feel quite invasive to those on the receiving end. My concern in particular is that people doing as you’re doing may not have the capacity to coordinate their actions. We may not even know whether or how much “randomly emailing Terry Tao about X risk” is going on.
That’s part of the point of the post, to coordinate so that fewer emails are sent. I asked if anyone tried something similar and asked people not to send their own emails without telling the rest of us.
I think Maxim Kontsevich might be a better candidate for an elite mathematician to try to recruit. Check out this 2014 panel with him, Tao and some other eminent mathematicians—he alone said that he thought HLAI (in math) is plausible in our lifetimes, but also that working on it might be immoral(!) He also mentioned an AI forecast by Kolmogorov that I had never heard of before, so it seems he has some pre-existing interest in the area.
Um. If you want to convince a mathematician like Terry Tao to be interested in AI alignment, you will need to present yourself as a reasonably competent mathematician or related expert and actually formulate an AI problem in such a way that someone like Terry Tao would be interested in it. If you yourself are not interested in the problem, then Terry Tao will not be interested in it either.
Terry Tao is interested in random matrix theory (he wrote the book on it), and random matrix theory is somewhat related to my approach to AI interpretability and alignment. If you are going to send these problems to a mathematician, please inform me about this before you do so.
My approach to alignment: Given matrices $A_1,\dots,A_r;B_1,\dots,B_r$, define a superoperator $\Gamma(A_1,\dots,A_r;B_1,\dots,B_r)$ by setting
$\Gamma(A_1,\dots,A_r;B_1,\dots,B_r)(X)=\sum_{k=1}^{r}A_kXB_k^{*}$, and define $\Phi(A_1,\dots,A_r)=\Gamma(A_1,\dots,A_r;A_1,\dots,A_r)$. Define the $L^2$-spectral radius of $A_1,\dots,A_r$ as $\rho_2(A_1,\dots,A_r)=\rho(\Phi(A_1,\dots,A_r))^{1/2}$. Here, $\rho(A)=\lim_{n\to\infty}\|A^n\|^{1/n}$ is the usual spectral radius.
Define $\rho_{2,d}^{K}(A_1,\dots,A_r)=\max\left\{\frac{\rho(\Gamma(A_1,\dots,A_r;X_1,\dots,X_r))}{\rho_2(X_1,\dots,X_r)}:X_1,\dots,X_r\in M_d(K)\right\}$. Here, $K$ is either the field of reals, the field of complex numbers, or the division ring of quaternions.
Given matrices $A_1,\dots,A_r;B_1,\dots,B_r$, define
$\|(A_1,\dots,A_r)\simeq(B_1,\dots,B_r)\|=\frac{\rho(\Gamma(A_1,\dots,A_r;B_1,\dots,B_r))}{\rho_2(A_1,\dots,A_r)\,\rho_2(B_1,\dots,B_r)}$. The value $\|(A_1,\dots,A_r)\simeq(B_1,\dots,B_r)\|$ is always a real number in the interval $[0,1]$ that is a measure of how jointly similar the tuples $(A_1,\dots,A_r),(B_1,\dots,B_r)$ are. The motivation behind $\rho_{2,d}^{K}(A_1,\dots,A_r)$ is that $\rho_{2,d}^{K}(A_1,\dots,A_r)/\rho_2(A_1,\dots,A_r)$ is always a real number in $[0,1]$ (well, except when the denominator is zero) that measures how well $A_1,\dots,A_r$ can be approximated by $d\times d$ matrices. Informally, $\rho_{2,d}^{K}(A_1,\dots,A_r)/\rho_2(A_1,\dots,A_r)$ measures how random $A_1,\dots,A_r$ are, where a lower value indicates a lower degree of randomness.
A better theoretical understanding of $\rho_{2,d}^{K}(A_1,\dots,A_r)$ would be great. If $X_1,\dots,X_r\in M_d(K)$ and $\rho(\Gamma(A_1,\dots,A_r;X_1,\dots,X_r))/\rho_2(X_1,\dots,X_r)$ is locally maximized, then we say that $(X_1,\dots,X_r)$ is an LSRDR of $(A_1,\dots,A_r)$. Said differently, $(X_1,\dots,X_r)\in M_d(K)$ is an LSRDR of $(A_1,\dots,A_r)$ if the similarity $\|(A_1,\dots,A_r)\simeq(X_1,\dots,X_r)\|$ is locally maximized.
Here, the notion of an LSRDR is a machine learning notion that seems to be much more interpretable and much less subject to noise than many other machine learning notions. But a solid mathematical theory behind LSRDRs would help us understand not just what LSRDRs do, but the mathematical theory would help us understand why they do it.
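For concreteness, here is a small numerical sketch of these definitions (my own illustration, not the commenter's code). It matricizes the superoperators via the vectorization identity vec(A X B*) = (conj(B) ⊗ A) vec(X), so the spectral radii can be read off with a dense eigenvalue solve:

```python
import numpy as np

def gamma_matrix(As, Bs):
    """Matricization of Gamma(A_1..A_r; B_1..B_r):
    vec(A_k X B_k^*) = (conj(B_k) kron A_k) vec(X), summed over k."""
    return sum(np.kron(B.conj(), A) for A, B in zip(As, Bs))

def spectral_radius(M):
    """rho(M) = largest eigenvalue in absolute value."""
    return max(abs(np.linalg.eigvals(M)))

def rho2(As):
    """L^2-spectral radius: rho(Phi(A_1..A_r))^(1/2), where Phi = Gamma(A; A)."""
    return spectral_radius(gamma_matrix(As, As)) ** 0.5

def similarity(As, Bs):
    """||(A_1..A_r) ~ (B_1..B_r)|| = rho(Gamma(A; B)) / (rho2(A) * rho2(B))."""
    return spectral_radius(gamma_matrix(As, Bs)) / (rho2(As) * rho2(Bs))
```

For example, `rho2([2 * np.eye(2)])` gives 2.0, and `similarity(As, As)` is always 1 (numerator and denominator coincide). Note the matricization is $n^2\times n^2$, so this brute-force route only scales to small matrices.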
Problems in random matrix theory concerning LSRDRs:
Suppose that $U_1,\dots,U_r$ are random matrices (according to some distribution). Then what are some bounds for $\rho_{2,d}^{K}(U_1,\dots,U_r)$?
Suppose that $U_1,\dots,U_r$ are random matrices and $A_1,\dots,A_r$ are non-random matrices. What can we say about the spectrum of $\Gamma(A_1,\dots,A_r;U_1,\dots,U_r)$? My computer experiments indicate that this spectrum satisfies the circular law, and that the radius of the disc for this circular law is proportional to $\rho_2(A_1,\dots,A_r)$, but a proof of this circular law would be nice.
Tensors can be naturally associated with collections of matrices. Suppose now that $U_1,\dots,U_r$ are the matrices associated with a random tensor. Then what are some bounds for $\rho_{2,d}^{K}(U_1,\dots,U_r)$?
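Anyone who wants to poke at the second problem numerically can start from a sketch like the one below. The setup is my own guess at the commenter's experiment (i.i.d. Gaussian entries and a $1/\sqrt{n}$ normalization are assumptions, not stated in the comment): matricize $\Gamma$ via vec(A X U*) = (conj(U) ⊗ A) vec(X) and inspect the eigenvalues, which the conjecture says should roughly fill a disc.

```python
import numpy as np

rng = np.random.default_rng(0)
n, r = 20, 3

# Fixed (non-random) tuple A_1..A_r and a random tuple U_1..U_r.
# Gaussian entries and the 1/sqrt(n) scaling are assumptions for illustration.
As = [rng.standard_normal((n, n)) for _ in range(r)]
Us = [rng.standard_normal((n, n)) / np.sqrt(n) for _ in range(r)]

# Matricize Gamma(A_1..A_r; U_1..U_r) using vec(A X U^*) = (conj(U) kron A) vec(X).
G = sum(np.kron(U.conj(), A) for A, U in zip(As, Us))
eigs = np.linalg.eigvals(G)  # n^2 eigenvalues; scatter-plot these to eyeball the circular law

print(eigs.shape, max(abs(eigs)))
```

Plotting `eigs` in the complex plane (and varying the distribution of the $U_k$) is the obvious next step for checking the claimed proportionality to $\rho_2(A_1,\dots,A_r)$.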
P.S. By massively downvoting my posts where I talk about mathematics that is clearly applicable to AI interpretability and alignment, the people on this site are simply demonstrating that they need to do a lot of soul searching before they annoy people like Terry Tao with their lack of mathematical expertise.
P.P.S. Instead of trying to get a high profile mathematician like Terry Tao to be interested in problems, it may be better to search for a specific mathematician who is an expert in a specific area related to AI alignment since it may be easier to contact a lower profile mathematician, and a lower profile mathematician may have more specific things to say and contribute. You are lucky that Terry Tao is interested in random matrix theory, but this does not mean that Terry Tao is interested in anything in the intersection between alignment and random matrix theory. It may be better to search harder for mathematicians who are interested in your specific problems.
P.P.P.S. To get more mathematicians interested in alignment, it may be a good idea to develop AI systems that behave much more mathematically. Neural networks currently do not behave very mathematically since they look like the things that engineers would come up with instead of mathematicians.
P.P.P.P.S. I have developed the notion of an LSRDR for cryptocurrency research because I am using this to evaluate the cryptographic security of cryptographic functions.
I have heard about the idea where you commit to a $100m reward for any ML researcher or mathematician who solves alignment, and simultaneously pay 100 top ML researchers and mathematicians $1m each over the course of a year to do nothing but pursue a solution to alignment (pursuing the bounty in the process). Even if all 100 of them fail, you have still selected the best 100 out of every mathematician who applied for those positions, so a large proportion of them might pursue the problem on their own afterwards in pursuit of the ongoing $100 million bounty. One way or another, many of these influential people will be convinced that the problem is significant and tell their friends, or even contract their friends as consultants to help with the problem.
There’s plenty of trust issues, going both ways, but I’m not a grantmaker or lawyer and I think some smart, experienced people could probably figure out how to mitigate most of them.
I really want this to happen.
And why stop at Terry Tao? We could also email other top mathematicians and physicists.
I would put that in a Google Doc; it will be easier to suggest changes etc.