This:
Is either a recruitment ad for a cult, or a trial for new members of an exclusive club. It’s certainly not designed to intrigue would-be top applicants.
If you think the reason MIRI hasn’t paid a Fields medalist $500k a year is that they are averse to doing weird stuff… have you ever met them? They are the least-averse-to-doing-weird-stuff institution I have ever met.
I have not.
Ah, OK. Fair enough then. I think I get why they seem like a cult from the outside, to many people at least. Where I’m coming from in this thread is that I’ve had enough interactions with enough MIRI people (and read and judged enough of their work) to be pretty confident that they aren’t a cult, that they are genuinely trying to save the world from AI doom, and that at least some of them are probably more competent than me, exerting more effort, and less bound by convention.
I agree that it seems like they should be offering Terence Tao ten million dollars to work with them for six months, and I don’t know why they haven’t. (Or maybe they did and I just didn’t hear about it?) I just have enough knowledge about them to rule out the facile “oh they are incompetent” or “oh it’s because they are a cult” explanations. I was trying to brainstorm more serious answers (answers consistent with what I know about them) based on the various things I had heard (e.g. the experience of OpenPhil trying something similar, which led to a bunch of high-caliber academics producing useless research) and got frustrated with all the facile/cult accusations.
Being weird in your heavily filtered weirdness-optimized social environment is very different from being weird out there in the real world with the normies, though.
The “contribute to the Alignment Forum” path isn’t easier than traveling to a workshop, either. You’re supposed to start an alignment blog, or publish alignment papers, or become a regular on LW just to have a chance at entering the hallowed ranks of the Alignment Forum, which, in turn, gives you a chance at ascending to the sanctum sanctorum of MIRI.
This is what prospective applicants see when they first try to post something to the Alignment Forum:
We accept very few new members to the AI Alignment Forum. Instead, our usual suggestion is that visitors post to LessWrong.com, a large and vibrant intellectual community with a strong interest in alignment research, along with rationality, philosophy, and a wide variety of other topics.
Posts and comments on LessWrong frequently get promoted to the AI Alignment Forum, where they’ll automatically be visible to contributors here. We also use LessWrong as one of the main sources of new Alignment Forum members.
If you have produced technical work on AI alignment, on LessWrong or elsewhere—e.g., papers, blog posts, or comments—you’re welcome to link to it here so we can take it into account in any future decisions to expand the ranks of the AI Alignment Forum.
Not just physically travel to a MIRI research workshop—apply to attend a MIRI workshop. Can’t have just any riff-raff coming in, after all.
It’s a lot easier than university departments, which require you to get multiple degrees (taking up something like a decade of your life depending on how you count it) before your application will even be considered.
By this stage of their careers, they already have those bits of paper. MIRI are asking people who don’t a priori highly value alignment research to jump through extra hoops they haven’t already cleared, for what they probably perceive as a slim chance of a job outside their wheelhouse. I know a reasonable number of hard science academics, and I don’t know any who would put that much effort into an application for a job they expected to attract plenty of more qualified applicants. The very phrasing makes it sound like they expect hundreds of applicants and are trying to be exclusive. If nothing else is changed, that should be.
Maybe they do in fact receive hundreds of applicants and must exclude most of them?
It’s not MIRI’s fault that there isn’t a pre-existing academic discipline of AI alignment research.
Imagine SpaceX had a branch office in some very poor country that literally didn’t have any engineering education whatsoever. Should they then lower their standards and invite applicants who never studied engineering? No, they should just deal with the fact that they won’t have very many qualified people, and/or they should do things like host workshops and stuff to help people learn engineering.
It wasn’t Los Alamos’ fault that there wasn’t a pre-existing academic discipline of nuclear engineering, but they got by anyway, because they had Von Neumann and other very smart people. If MIRI is to get by, they need to recruit Von Neumann-level people. Like maybe Terry Tao.
Just to be clear, there was a thriving field of nuclear engineering, and Los Alamos was run mostly by leading figures in that field. Also, money was never a constraint on the Manhattan Project, and its success had practically nothing to do with the availability of funding; it had everything to do with the war, the involvement of a number of top scientists, and the existence of a pretty concrete engineering problem that one could throw tons of manpower at.
The Manhattan Project itself did not develop any substantial nuclear theory, and was almost purely an engineering project. I do not know what we would get by emulating it at this point in time. The scientists involved in the Manhattan Project did not go on to run things like the Manhattan Project; they went into other institutions optimized for intellectual progress, institutions that were not capable of absorbing large amounts of money or manpower productively, despite some of them likely being able to get funding for similar things. (Some of them did go on to build giant particle colliders, though while certainly helpful, this did not revolutionize or drastically accelerate the development of new scientific theories.)
I don’t think we disagree?
I think you do in practice, because you seem to believe that the best way to recruit those people is via a strategy like the above.
Universities aren’t known for getting things done. Corporations are. Are you trying to signal exclusivity and prestige, or are you trying to save the lightcone?
Universities are pretty well known for getting things done. Most Nobel Prize-winning work happens in them, for instance. It’s just that the things they do are not optimized to be the things corporations do.
I’m not saying we shouldn’t be actually trying to hire people. In fact I have a post from last year saying that exact thing. But if you think corporations are the model for what we should be going for, I think we have very different mechanistic models of how research gets done.
You took the words right out of my draft comment.
Corporations also often require advanced degrees in specific fields. Or multiple years of work experience.
Getting an advanced degree in, say, CS, qualifies me to work for many different companies. Racking up karma posting mathematically rigorous research on the Alignment Forum qualifies me to work at one (1) place: MIRI. If I take the “PhD in CS” route, I have power to negotiate my salary, to be selective about who I work for. Every step I take along the Alignment Forum path is wasted[1] unless MIRI deigns to let me in.
[1] Not counting positive externalities.
See my reply to benjamin upthread.
I’m under the impression that that’s partially a sign of civilizational failure from metastasizing bureaucracies. I’ve always heard that the ultra-successful Silicon Valley companies never required a degree (and also that that meritocratic culture has eroded and been partially replaced by credentialism, causing stasis).
EDIT: to be clear, this means I disagree with the ridiculous hyperbole upthread of it being “cultish”, and in a lot of ways I’m sure the barriers to employment in traditional fields are higher. Still, as an outsider who’s missing lots of relevant info, it does seem like it should be possible to do a lot better.
I think you’re buying way too much into the hype about how much Alignment Forum posts help you even get MIRI’s attention. I have a much easier time asking university departments for feedback, and there is a much smoother process for applying there.
Those multiple degrees are high cost but very low risk, because even if you don’t get into the university department, these degrees will give you lots of option value, while a 6 month gap in your CV trying to learn AI Safety on your own does not. More likely you will not survive the hit on your mental health.
I personally decided not to even try AI Safety research for this reason.
Grad school is infamously bad for your mental health. Something like one-third of my classmates dropped out. My AI safety researcher friends seem overall less depressed and happier, though it’s hard to tell because it’s so subjective.
These less depressed people you talk about, are they already getting paid as AI safety researchers, or are they self-studying to (hopefully) become AI safety researchers?
In any case, I’m clearly generalising from my own situation, so it may not extend very far. To flesh out this data point: I had 2 years of runway, so money wasn’t a problem, but I already felt beaten down by LW to the extent that I couldn’t really take any more hits to my self-esteem, so I couldn’t risk putting myself up for rejection again. That’s basically why I mostly left LW.
Ah, good point, mostly the former category. I only know a few people in the latter category.
Oh come, don’t be hyperbolic. The main things that make a cult a cult are absent. And I’m under the impression that plenty of places have a standard path for inexperienced people that involves an internship or whatever. And since AI alignment is an infant field, no one has the relevant experience on their resumes. (The OP mentions professional recruiters, but I would guess that the skill of recruiting high-quality programmers doesn’t translate to recruiting high-quality alignment researchers.)
I do agree that, as an outsider, it seems like it should be much more possible to turn money into productive-researcher-hours, even if that requires recruiting people at Tao’s caliber, and the fact that that’s not happening is confusing & worrying to me. (Though I do feel bad for the MIRI people in this conversation; it’s not entirely fair, since if somehow they in fact have good reason to believe that the set of people who can productively contribute is much tinier than we’d hope (eg: Tao said no, and literally everyone else isn’t good enough), they might have to avoid explicitly explaining that to avoid rudeness and bad PR.)
I’m going to keep giving MIRI my money because it seems like everyone else is on a more-doomed path, but as a donor I would prefer to see more visible experimentation (since they said their research agendas didn’t pan out and they don’t see a path to survival). Eg I’m happy with the Visible Thoughts project. (My current guess (hope?) is that they are experimenting with some things that they can’t talk about, which I’m 100% fine with; still seems like some worthwhile experimentation could be public.)
Imagine you’re a 32-year-old software engineer with a decade of quality work experience and a Bachelor’s in CS. You apply for a job at Microsoft, and they tell you that since their tech stack is very unusual, you have to do a six-month unpaid internship as part of the application process, and there is no guarantee that you get the job afterwards.
This is not how things work. You hire smart people, then you train them. It can take months before new employees are generating value, and they can always leave before their training is complete. This risk is absorbed by the employer.
If you have an unusual tech stack then the question of how to train people in that tech stack is fairly trivial. In a pre-pragmatic field, the question of how to train people to effectively work in the field is nontrivial.
Wait, is the workshop 6 months? I assumed it was more like a week or two.
This is not how things work. You hire smart people, then you train them
Sometimes that is how things work. Sometimes you do train them first while not paying them, then you hire them. And for most 32-year-old software engineers, they have to go through a 4-8 year training process that you have to pay a year’s worth of salary to go to. I don’t see that as a good thing, and indeed the most successful places are famous for not doing that, but still.
To reiterate, I of course definitely agree that they should try using money more. But this is all roughly in the same universe of annoying hoop-jumping as typical jobs, and not roughly in the same universe as the Branch Davidians, and I object to that ridiculous hyperbole.
They go through a 4-8 year credentialing process that is a costly and hard-to-Goodhart signal of intelligence, conscientiousness, and obedience. The actual learning is incidental.
Okay, edited. If anything, that strengthens my point.
See this comment.
… And? What point do you think I’m arguing?
The traditional way has its costs and benefits (one insanely wasteful and expensive path that opens up lots of opportunities), as does the MIRI way (a somewhat time-consuming path that opens up a single opportunity). It seems like there’s room for improvement in both, but both are obviously much closer to each other than either one is to Scientology, and that was the absurd comparison I was arguing against in my original comment. And that comparison doesn’t get any less absurd just because getting a computer science degree is a qualification for a lot of things.
No, it doesn’t.
Sure it does. I was saying that the traditional pathway is pretty ridiculous and onerous. (And I was saying that to argue that MIRI’s onerous application requirements are more like the traditional pathway and less like Scientology; I am objecting to the hyperbole in calling it the latter.) The response was that the traditional pathway is even more ridiculous and wasteful than I was giving it credit for. So yeah, I’d say that slightly strengthens my argument.
Based on what’s been said in this thread, donating more money to MIRI has precisely zero impact on whether they achieve their goals, so why continue to donate to them?
FWIW I don’t donate much to MIRI myself anymore, precisely because they aren’t funding-constrained. And a MIRI employee even advised me as much.
Based on what’s been said in this thread, donating more money to MIRI has precisely zero impact on whether they achieve their goals
Well obviously, I disagree with this! As I said in my comment, I’m eg tentatively happy about the Visible Thoughts project. I’m hopeful to see more experimentation in the future, hopefully eventually narrowing down to an actual plan.
Worst case scenario, giving them more money now would at least make them more able to “take advantage of a miracle” in the future (though obviously I’m really really hoping for more than that).
That seems a bit like a Pascal’s mugging to me, especially considering there are plenty of other organizations to give to which don’t rely on a potential future miracle which may or may not require a tremendous sum of already existing money in the organization...
Are you taking the view that someone seeing the ad is not going to think MIRI is a cult unless it ticks all the boxes? Because they are actually going to be put off if it ticks any of the boxes.
I’m taking the view that most people would think it’s an onerous requirement and they’re not willing to jump through those hoops, not that it’s a cult. It just doesn’t tick the boxes of that, unless we’re defining that so widely as to include, I dunno, the typical “be a good fit for the workplace culture!” requirement that lots of jobs annoyingly have.
It’s obviously much closer to “pay several hundred thousand dollars to be trained at an institution for 4-6 years (an institution that only considers you worthy if your essay about your personality, life goals, values, and how they combat racism is a good match to their mission), and then either have several years of experience or do an unpaid internship with us to have a good chance” than it is to the Peoples Temple. To say otherwise is, as I said, obviously ridiculous hyperbole.
they might have to avoid explicitly explaining that to avoid rudeness and bad PR
Well, I don’t think that is the thing to worry about. Eliezer having high standards would be no news to me, but if I learn about MIRI being dishonest for PR reasons a second time, I am probably going to lose all the trust I have left.
I don’t think “no comment”, or rather making undetailed but entirely true comments, is dishonest.
I agree.
not just sitting on piles of cash because it would be “weird” to pay a Fields medalist 500k a year.
They literally paid Kmett 400k/year for years to work on some approach to explainable AI in Haskell.
I think people in this thread vastly overestimate how much money MIRI has (they have ~10M, see the 990s and the donations page https://intelligence.org/topcontributors/), and underestimate how much top people would cost.
I think the top 1% of earners in the US all make >$500k/year? Maybe if not the top 1%, then the top 0.5%?
Even Kmett (who is famous in the Haskell community, but is no Terence Tao) is almost certainly making way more than $500k now.
From a rando outsider’s perspective, MIRI has not made any public indication that they are funding-constrained, particularly given that their donation page says explicitly that:
We’re not running a formal fundraiser this year but are participating in end-of-year matching events, including Giving Tuesday.
Which more or less sounds like “we don’t need any more money but if you want to give us some that’s cool”
I think people in this thread vastly overestimate how much money MIRI has, and underestimate how much top people would cost.
This implies that MIRI is very much funding-constrained, and unless you have elite talent, you should earn to give to organizations that will recruit those with elite talent. This applies to me and most people reading this, who are only around 2-4 sigmas above the mean.
I highly doubt most people reading this are “around 2-4 sigmas above the mean”, if that’s even a meaningful concept.
The choice between earning to give and direct work is definitely nontrivial though: there are many precedents of useful work done by “average” individuals, even in mathematics.
But I do get the feeling that MIRI thinks the relative value of hiring random expensive people would be <0, which seems consistent with how other groups trying to solve hard problems approach things.
E.g. I don’t see Tesla paying billions to famous mathematicians/smart people to “solve self-driving”.
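A quick sanity check on the “2-4 sigmas above the mean” aside, as a rough reference only: under a simple normality assumption (a sketch, not a claim about who actually reads this, and assuming scipy is available), the upper-tail shares come out as follows.

# Upper-tail fractions of a normal distribution: what "N sigmas above the
# mean" corresponds to as a share of the population, under normality.
from scipy.stats import norm

for sigma in (2, 3, 4):
    share = norm.sf(sigma)  # P(Z > sigma), the upper-tail probability
    print(f"{sigma} sigma: top {share:.4%} (~1 in {round(1 / share):,})")

Roughly: 2 sigma is about the top 2.3%, 3 sigma about the top 0.13%, and 4 sigma about one person in 30,000, so the “2-4 sigmas” range spans wildly different populations.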
Edit: Yudkowsky answered here: https://www.lesswrong.com/posts/34Gkqus9vusXRevR8/late-2021-miri-conversations-ama-discussion?commentId=9K2ioAJGDRfRuDDCs . Apparently I was wrong; it’s because you can’t just pay top people to work on problems that don’t interest them.
If MIRI wanted to hire someone like Terence Tao for a million dollars per year, they likely couldn’t simply do that out of their normal budget. To do this they would need to convince donors to give them additional money for that purpose.
If there were a general sense within MIRI that this was the way forward, and MIRI could express that to donors, I would expect they would get the donor money for it.
They would need to compete with lots of other projects working on AI Alignment.
But yes, I fundamentally agree: if there were a project that convincingly had a >1% chance of solving AI alignment, it seems very likely it would be able to raise ~$1M/year (maybe even ~$10M?).
They would need to compete with lots of other projects working on AI Alignment.
I don’t think that’s the case. I think that if OpenPhil believed there was more room for funding in promising AI alignment research, they would spend more money on it than they currently do.
I think the main reason they aren’t giving MIRI more money is that they don’t believe MIRI would spend more money effectively.