[SI has now] shifted from waterline-raising/community-building to more technical areas where CFAR success would be of less help to them
Remember that the original motivation for the waterline-raising/community-building stuff at SI was specifically to support SI’s narrower goals involving technical research. Eliezer wrote in 2009 that “after years of bogging down [at SI] I threw up my hands and explicitly recursed on the job of creating rationalists,” because Friendly AI is one of those causes that needs people to be “a bit more self-aware about their motives and the nature of signaling, and a bit more moved by inconvenient cold facts.”
So, CFAR’s own efforts at waterline-raising and community-building should end up helping SI in the same way Less Wrong did, even though SI won’t capture all or even most of that value, and even though CFAR doesn’t teach classes on AI risk.
I’ve certainly found that, on average, people who come to SI through an interest in rationality tend to be more useful to us than people who come through an interest in transhumanism or the singularity. (Though there are plenty of exceptions! E.g. Edwin Evans, Rick Schwall, Peter Thiel, Carl Shulman, and Louie Helm came to SI via the singularity materials.)
If someone has pretty good rationality skills, it usually doesn’t take long to persuade them of the basics of AI risk. But if someone is filtered instead for a strong interest in transhumanism or the singularity (and not necessarily rationality), then the conclusions they draw about AI risk, even after argument, often appear damn-near random.
There’s also the fact that SI needs unusually good philosophers, and CFAR-style rationality training has some potential to help with that.
I’d be very interested in hearing from an SIAI higher-up whether they really want my money or whether they would prefer I give it to CFAR instead.
My own response to this has generally been that you should give to whichever organization you’re most excited to support!
Why is that your response?
More precisely… do you actually believe that I should base my charitable giving on my level of excitement? Or do you assert that despite not believing it for some reason?
Basically, it’s because I think both organizations Do Great Good with marginal dollars at this time, but the world is too uncertain to tell whether marginal dollars do more good at CFAR or SI. (X-risk reducers confused by this statement probably have a lower estimate of CFAR’s impact on x-risk reduction than I do.) For normal humans who make giving decisions mostly by emotion, giving to the one they’re most excited about should cause them to give the maximum amount they’re going to give. For weird humans who make giving decisions mostly by multiplication, well, they’ve already translated “whichever organization you’re most excited to support” into “whichever organization maximizes my expected utility [at least, with reference to the utility function which represents my philanthropic goals].”
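To make the “too uncertain to tell” point concrete for the people who multiply, here is a toy sketch. The impact ranges below are invented purely for illustration (they are not estimates from SI or CFAR); the point is just that when your credence intervals for two organizations’ marginal impact overlap this heavily, the comparison comes out close to a coin flip:

```python
import random

random.seed(0)

def sample_impact(low, high):
    """Draw one guess at 'good done per marginal dollar' from a wide uniform range."""
    return random.uniform(low, high)

# Hypothetical ranges, made up for illustration only.
ORG_A = (0.5, 2.0)   # an "SI-like" organization
ORG_B = (0.4, 2.2)   # a "CFAR-like" organization

trials = 100_000
a_wins = sum(sample_impact(*ORG_A) > sample_impact(*ORG_B) for _ in range(trials))
print(f"Org A looks better in {a_wins / trials:.1%} of sampled worlds")
# With ranges this wide and this overlapping, the result hovers near 50%:
# the comparison is mostly noise, so how much you give matters more than
# which of the two organizations you pick.
```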