I have backup plans, but they tend to look a lot like “Try founding CFAR again.”
I don’t know of any good way to scale funding or core FAI researchers for SIAI without rationalists. There are other things I could try, and would try if necessary, but I spent years trying various SIAI-things before LW started actually working. Just because I wouldn’t give up no matter what, doesn’t mean there wouldn’t be a fairly large chunk of success-probability sliced off if CFAR failed, and a larger chunk of probability sliced off if I couldn’t make any alternative to CFAR work.
I realize a lot of people think it shouldn’t be impossible to fund SIAI without all that rationality stuff. They haven’t tried it. Lots of stuff sounds easy if you haven’t tried it.
Thank you, Eliezer. I’m fascinated by the reasoning and analysis that you’re hinting at here. It helps put the decisions you and SIAI have made in perspective.
Could you give a ballpark estimate of how much of the importance of successful rationality spin-offs is based on expectations of producing core FAI researchers versus producing FAI funding?
I’ve tried less hard to get core FAI researchers than funding. I suspect that given sufficient funding produced by magic, it would be possible to solve the core-FAI-researchers issue by finding the people and talking to them directly—but I haven’t tried it!
How much money would you need magicked into existence to allow you to shed fundraising and infrastructure, etc., and just hire and hole up with a dream team of hyper-competent maths wonks? Restated, at what funding level would SIAI be able to comfortably and aggressively pursue its long-term research?
He once mentioned a figure of US $10 million per year. I feel like he’s made a similar remark more recently, but it didn’t show up in my brief search.