From what I understand, SIAI was meant to eventually support at least 10 full time FAI researchers/implementers. How is Eliezer supposed to “make the same impact” by doing research part time while working a day job?
I think the hard problem is finding 10 capable and motivated researchers, and any such people would keep working even without SIAI. Eliezer can make an impact the same way he always does: by proving to the Internet that the topic is interesting.
Again: why isn’t it obvious to you that it would be easier for these people to have a source of funding and a building to work in?
No. Just no.
Why? I gave the example of Wei Dai who works independently from the SIAI. If you know any people besides Eliezer who do comparable work at the SIAI, who are they?
The problem with your example is that I don’t work on FAI, I work on certain topics of philosophical interest to me that happen to be relevant to FAI theory. If I were interested in actually building an FAI, I’d definitely want a secure source of funding for a whole team to work on it full time, and a building to work in. It seems implausible that that’s not a big improvement (in likelihood of success) over a bunch of volunteers working part time and just collaborating over the Internet.
More generally, money tends to be useful for getting anything accomplished. You seem to be saying that FAI is an exception, and I really don’t understand why… Or are you just saying that SIAI in particular is doing a bad job with the money that it’s getting? If that’s the case, why not offer some constructive suggestions instead of just making “digs” at it?
I don’t believe FAI is ready to be an engineering project. As Richard Hamming would put it, “we do not have an attack”. You can’t build a 747 before some hobbyist invents the first flyer. The “throw money and people at it” approach has been tried many times with AGI; how is FAI different? I think right now most progress should come from people like you, satisfying their personal interest. As for the best use of SIAI money, I’d use GiveWell to get rid of it, or just throw some parties and have fun all around, because money isn’t the limiting factor in making math breakthroughs happen.
I think right now most progress should come from people like you, satisfying their personal interest.
I think the problem with that is that most people have multiple interests, or their interests can shift (perhaps subconsciously) based on considerations of money and status. FAI-related fields have to compete with other fields for a small pool of highly capable researchers, and the lack of money and status (which would come with funding) does not help.
I don’t believe FAI is ready to be an engineering project.
Me neither, but I think that, one, SIAI can use the money to support FAI-related research in the meantime, and, two, given that time is not on our side, it seems like a good idea to build up the necessary institutional infrastructure to support FAI as an engineering project, just in case someone makes an unexpected theoretical breakthrough.
Marcello, Anna Salamon, Carl Shulman, Nick Tarleton, plus a few up-and-coming people I am not acquainted with.
I don’t do any work comparable to Eliezer’s.
Why don’t you? You are brilliant and you understand the problem statement; you merely need to study the right things to get started.
Is their research secret? Any pointers?
Marcello’s research is secret, but not that of the others.
Sorry for deleting my comment, I didn’t think you’d answer it so quickly. For posterity, it said: “Is their research secret? Any pointers?”
Here’s the list of SIAI publications. Apart from Eliezer’s writings, there’s only one moderately interesting item on the list: Peter de Blanc’s “convergence of expected utility” (or divergence, rather). That’s… good, I guess? My point stands.
Is it secret why it’s secret? I can’t imagine.
Yes. If anyone finds out why Marcello’s research is secret, they have to be killed and cryopreserved for interrogation after the singularity.
Now why do you even ask why people should be afraid of something going terribly wrong at SIAI? Keeping it secret in order to avoid signaling the moment when it becomes necessary to keep it secret? Hmm...