I have some money that I was saving for something like this, but I also just saw Eliezer’s (very convincing) request for CFAR donations yesterday and heard a rumor that SIAI was trying to get people to donate to CFAR because they needed it more.
This seems weird to me because I would expect that with SIAI’s latest announcement they have shifted from waterline-raising/community-building to more technical areas where CFAR success would be of less help to them. But I’d be very interested in hearing from an SIAI higher-up whether they really want my money or whether they would prefer I give it to CFAR instead.
1) In the long run, for CFAR to succeed, it has to be supported by a CFAR donor base that doesn’t funge against SIAI money. I expect/hope that CFAR will have a substantially larger budget in the long run than SIAI. In the long run, then, marginal x-risk minimizers should be donating to SIAI.
2) But since CFAR is at a very young and very vital stage in its development and has very little funding, it needs money right now. And CFAR really really needs to succeed for SIAI to be viable in the long-term.
So my guess is that a given dollar is probably more valuable at CFAR right this instant, and we hope this changes very soon (due to CFAR having its own support base)...
...but...
...SIAI has previously supported CFAR, is probably going to make a loan to CFAR in the future, and therefore it doesn’t matter as much exactly which organization you give to right now, except that if one maxes out its matching funds you probably want to donate to the other until it also maxes...
...and...
...even the judgment about exactly where a marginal dollar is more valuably spent is, necessarily, extremely uncertain to me. My own judgment favors CFAR at the current margins, but it’s a very tough decision, and necessarily so: SIAI has given money to CFAR, and if it had been obvious that this amount should’ve been shifted in direction A or direction B to minimize x-risk, then we would necessarily have been either organizationally irrational or organizationally selfish about the exact amount. SIAI has been giving CFAR amounts on the lower side of our error bounds because of the hope (uncertainty) that future-CFAR will prove effective at fundraising. Which rationally implies, and does actually imply, that an added dollar of marginal spending is more valuable at CFAR (in my estimates).
The upshot is that you should donate to whichever organization gets you more excited, like Luke said. SIAI is donating/loaning round-number amounts to CFAR, so where you donate $2K does change marginal spending at both organizations—we’re not going to be exactly re-fine-tuning the dollar amounts flowing from SIAI to CFAR based on donations of that magnitude. It’s a genuine decision on your part, and has a genuine effect. But from my own standpoint, “flip a coin to decide which one” is pretty close to my own current stance. For this to be false would imply that SIAI and I had a substantive x-risk-estimate disagreement which resulted in too much or too little funding (from my perspective) flowing to CFAR. Which is not the case, except insofar as we’ve been giving too little to CFAR in the uncertain hope that it can scale up fundraising faster than SIAI later. Taking this uncertainty into account, the margins balance. Leaving it out, a marginal absolute dollar of spending at CFAR does more good (somewhat) (in my estimation).
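To make the funging point above concrete, here is a toy sketch in Python. The grant and donation figures are invented purely for illustration (they are not SIAI’s or CFAR’s actual numbers); the only point is that a donation funges away completely if SIAI re-tunes its grant to offset it, and genuinely moves the margin if the grant stays at a fixed round number.

```python
# Toy illustration of donation funging; all figures are made up.
GRANT = 100_000   # hypothetical round-number SIAI grant to CFAR
DONATION = 2_000  # a donor gives this to CFAR

# Case 1: SIAI re-optimizes, trimming its grant by the donated amount.
cfar_total = (GRANT - DONATION) + DONATION  # 100_000: CFAR's budget is unchanged
siai_freed = DONATION                       # 2_000: the gift effectively went to SIAI

# Case 2: the grant is a fixed round number and is not re-tuned.
cfar_total_fixed = GRANT + DONATION         # 102_000: CFAR's margin really moved
siai_freed_fixed = 0

print(cfar_total, siai_freed)               # 100000 2000
print(cfar_total_fixed, siai_freed_fixed)   # 102000 0
```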
Thank you; that helps clarify the issue for me. Since people who know more seem to think it’s a tossup and SIAI motivates me more, I gave $250 to them.
And CFAR really really needs to succeed for SIAI to be viable in the long-term.
That’s an extremely strong claim. Is that actually your belief? Not merely that CFAR success would be useful to SIAI success? There is no alternate plan for SIAI to be successful that doesn’t rely on CFAR?
I have backup plans, but they tend to look a lot like “Try founding CFAR again.”
I don’t know of any good way to scale funding or core FAI researchers for SIAI without rationalists. There are other things I could try, and would try if necessary, but I spent years trying various SIAI-things before LW started actually working. Just because I wouldn’t give up no matter what doesn’t mean there wouldn’t be a fairly large chunk of success-probability sliced off if CFAR failed, and a larger chunk sliced off if I couldn’t make any alternative to CFAR work.
I realize a lot of people think it shouldn’t be impossible to fund SIAI without all that rationality stuff. They haven’t tried it. Lots of stuff sounds easy if you haven’t tried it.
Thank you, Eliezer. I’m fascinated by the reasoning and analysis that you’re hinting at here. It helps put the decisions you and SIAI have made in perspective.
Could you give a ballpark estimate of how much of the importance of successful rationality spin-offs is based on expectations of producing core FAI researchers versus producing FAI funding?
I’ve tried less hard to get core FAI researchers than funding. I suspect that given sufficient funding produced by magic, it would be possible to solve the core-FAI-researchers issue by finding the people and talking to them directly—but I haven’t tried it!
How much money would you need magicked up to allow you to shed fundraising, infrastructure, etc., and just hire and hole up with a dream team of hyper-competent maths wonks? Restated: at what amount would SIAI be comfortably able to aggressively pursue its long-term research?
He once mentioned a figure of US $10 million / year. It feels like he’s made a similar remark more recently, but it didn’t turn up in my brief search.
Is this still your view?
[SI has now] shifted from waterline-raising/community-building to more technical areas where CFAR success would be of less help to them
Remember that the original motivation for the waterline-raising/community-building stuff at SI was specifically to support SI’s narrower goals involving technical research. Eliezer wrote in 2009 that “after years of bogging down [at SI] I threw up my hands and explicitly recursed on the job of creating rationalists,” because Friendly AI is one of those causes that needs people to be “a bit more self-aware about their motives and the nature of signaling, and a bit more moved by inconvenient cold facts.”
So, CFAR’s own efforts at waterline-raising and community-building should end up helping SI in the same way Less Wrong did, even though SI won’t capture all or even most of that value, and even though CFAR doesn’t teach classes on AI risk.
I’ve certainly found it to be the case that on average, people who get in contact with SI via an interest in rationality tend to be more useful than people who get in contact with SI via an interest in transhumanism or the singularity. (Though there are plenty of exceptions! E.g. Edwin Evans, Rick Schwall, Peter Thiel, Carl Shulman, and Louie Helm came to SI via the singularity materials.)
If someone has pretty good rationality skills, then it usually doesn’t take long to persuade them of the basics about AI risk. But if someone is filtered instead for a strong interest in transhumanism or the singularity (and not necessarily rationality), then the conclusions they draw about AI risk, even after argument, often appear damn-near random.
There’s also the fact that SI needs unusually good philosophers, and CFAR-style rationality training has some potential to help with that.
I’d be very interested in hearing from an SIAI higher-up whether they really want my money or whether they would prefer I give it to CFAR instead.
My own response to this has generally been that you should give to whichever organization you’re most excited to support!
Why is that your response?
More precisely… do you actually believe that I should base my charitable giving on my level of excitement? Or do you assert that despite not believing it for some reason?
Oh, right...
Basically, it’s because I think both organizations Do Great Good with marginal dollars at this time, but the world is too uncertain to tell whether marginal dollars do more good at CFAR or SI. (X-risk reducers confused by this statement probably have a lower estimate of CFAR’s impact on x-risk reduction than I do.) For normal humans who make giving decisions mostly by emotion, giving to the one they’re most excited about should cause them to give the maximum amount they’re going to give. For weird humans who make giving decisions mostly by multiplication, well, they’ve already translated “whichever organization you’re most excited to support” into “whichever organization maximizes my expected utility [at least, with reference to the utility function which represents my philanthropic goals].”
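For readers who do decide by multiplication, the comparison can be sketched as a bare expected-value calculation. This is a minimal sketch with placeholder probabilities and impact weights; none of the numbers below are anyone’s actual estimates, and the only point is the arithmetic shape of the decision.

```python
# Minimal "decide by multiplication" sketch; every number is a placeholder.
def expected_impact(p_success: float, impact_weight: float,
                    marginal_boost_per_dollar: float, dollars: float) -> float:
    """Expected x-risk-reduction impact of giving `dollars` to one organization."""
    return p_success * impact_weight * marginal_boost_per_dollar * dollars

donation = 250.0  # e.g. the $250 gift mentioned earlier in the thread

siai = expected_impact(p_success=0.10, impact_weight=1.0,
                       marginal_boost_per_dollar=1e-7, dollars=donation)
cfar = expected_impact(p_success=0.15, impact_weight=0.5,
                       marginal_boost_per_dollar=1e-7, dollars=donation)

# With estimates this uncertain, differences of this size are not meaningful;
# that is the "toss-up" point made above.
print(siai, cfar)
```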