How are you going to address the perceived and actual lack of rigor associated with [Singularity Institute]?
I upvoted this question originally, and while I appreciate your response, I don’t feel you addressed what, for me, is the crux of the matter. If the SIAI is so focussed on “solving the most important problems in mathematics, computer science, and philosophy”, then where is the progress?
The worry is that the SIAI is seen as a place where people pontificate endlessly about the problem without actually doing useful work towards the solution. It is important to raise awareness of the dangers of a UFAI situation, but you’re claiming that you also want the SIAI to be more than that.
But it’s hard to take that seriously when there is so little evidence of problems actually getting solved, particularly the hard ones in mathematics and computer science. Eliezer’s TDT draft is a step in the right direction, as it’s at least evidence that some work is getting done, but it’s the sort of thing I’d like to see much, much more of. In addition, it could do with tightening up, and I think the rigour demanded by submitting it to an actual academic journal would be extremely helpful. Even if you don’t want to do that, a public draft at least allows some kind of assessment of the work you’re doing.
As for the philosophy, I think that’s in better shape, but not an awful lot better. There’s good material in the sequences, but at the end of the day they’re a series of thoughtful blog posts, not a polished, well-structured series of arguments. The quality is better than some published philosophy, but that’s not saying much. Again, I think the discipline required to shape up some of the material for publication would be a good thing.
As long as the SIAI continues not to publish, or otherwise make available, credible documents indicating rigorous progress, it is going to be perceived as lacking in rigour. And those of us who aren’t privy to what is actually going on in there may worry that this indicates an actual lack of rigour.
As long as the SIAI continues not to publish, or otherwise make available, credible documents indicating rigorous progress, it is going to be perceived as lacking in rigour. And those of us who aren’t privy to what is actually going on in there may worry that this indicates an actual lack of rigour.
I couldn’t agree more.
This is why I talk almost non-stop within Singularity Institute about how we need to be publishing the research that we’re doing. It’s why I’ve been trying to squeeze in hours (around helping with the Summit and now being Executive Director) that allow me to author and co-author papers that summarize the current state of research, like ‘The Singularity and Machine Ethics’ and many others that are in progress: ‘Intelligence Explosion: Evidence and Import’, ‘How to Do Research That Contributes Toward a Positive Singularity’, and ‘Open Problems in Friendly Artificial Intelligence’. Granted, only the last one could constitute significant research progress, but one reason it’s hard to make research progress is that not even the basics have been summarized with good form and clarity anywhere, so I’m first working on these kinds of “platform” documents as enablers of future research progress.
This concern about showing the research that’s going on is also why, in the video above, I repeatedly asked for people with experience writing up research papers to contact me.
Eliezer once wrote about how our lack of a PhD on staff and other common complaints didn’t seem to be people’s “true rejection” of Singularity Institute, but I think “you don’t publish enough research” is a pretty decent candidate for being many people’s true rejection.
Believe me, few things would make me happier than having the resources to publish those 30-40 papers I talked about that are sitting in people’s heads but not on paper.
So it sounds like your answer is: “Publishing research would help, and we’re working on it.”
That’s great! It’s just good that you’ve got a plan. After all, the question was “How are you going to address the perceived lack of rigour?”
Correct!
Eliezer’s TDT draft is a step in the right direction, as it’s at least evidence that some work is getting done, but it’s the sort of thing I’d like to see much, much more of.
Even if they were to make some actual progress, most of it would probably be regarded as too dangerous to be released. Therefore I predict that you won’t ever see much more of it.
There’s good material in the sequences, but at the end of the day they’re a series of thoughtful blog posts, not a polished, well-structured series of arguments. The quality is better than some published philosophy, but that’s not saying much.
Indeed! Think about it this way: if Less Wrong had been around for 3,000 years and the field of academic philosophy had been founded a few years ago, then most of it would probably be better than Less Wrong.
Even if they were to make some actual progress, most of it would probably be regarded as too dangerous to be released. Therefore I predict that you won’t ever see much more of it.
I’m not sure how true this is, but suppose it is. Then it seems to me that the SIAI has a problem: they need people to take them seriously in order to attract funding and researchers, but they can’t release any evidence that might make people take them seriously, since it’s regarded as “too dangerous”. Dilemma.
Secrecy and a perceived lack of rigour seem likely to go hand in hand. And for those of us outside the SIAI, who are trying to decide whether to take it seriously, said secrecy also makes it seem likely that there is an actual lack of rigour.
Perhaps this just demonstrates that any organization seriously aiming to make FAI has to be secretive, and hence have a bad public image. Which would be interesting. But in that case, the answer to the original question may just be: “We can’t really, because it would be too dangerous”, which would at least be something.
Indeed! Think about it this way: if Less Wrong had been around for 3,000 years and the field of academic philosophy had been founded a few years ago, then most of it would probably be better than Less Wrong.
And perhaps, just perhaps, LW might have something to learn from that older sibling… I appreciate the desire to declare all past philosophy diseased and start again from nothing, but I think it’s misguided. Even if you don’t like much of contemporary philosophy, modern-day philosophers are often well-trained critical thinkers, and so a bit of attention from them might help shape things up.
Even if they were to make some actual progress, most of it would probably be regarded as too dangerous to be released.
I’m not sure that “most of it” is too dangerous to be released. There is quite a lot of research that can be done in the open. If there weren’t, we wouldn’t be trying to write a document like Open Problems in Friendly AI for the public.
You’ve managed to come up with excuses for not posting something as rudimentary as the statistics that would substantiate your claims of success for the rationality bootcamps.
“That would take too much time!” → So a volunteer can do it for you. → “But it’s private so we can’t release it.” → So anonymize it. → “That takes too much work too.” → Um? → “Hey, our alums dress nicely now, that should be enough proof.”
Frankly, that doesn’t bode well.
Signaling rigor in the hidden domains through a policy of rigor in the open domains seems appropriate, and possibly sufficient. It may be expensive, but hopefully the open domains addressed would still be of some benefit.
Even if they were to make some actual progress, most of it would probably be regarded as too dangerous to be released. Therefore I predict that you won’t ever see much more of it.
That seems unlikely. Well, the “too dangerous” part seems unlikely; I’m not sure about the “regarded” part. The philosophy of digitizing human preferences seems particularly releasable to me, and depending on how you divide the causes of UFAI between malice and stupidity, it can be a good idea to release pretty much anything that’s easier to apply to FAI than to UFAI.
Even if they were to make some actual progress, most of it would probably be regarded as too dangerous to be released. Therefore I predict that you won’t ever see much more of it.
I’d be surprised. There is plenty left that I would expect Eliezer to consider releasable.
There is plenty left that I would expect Eliezer to consider releasable.
Carl Shulman wrote that Eliezer is reluctant to release work that Eliezer himself thinks is relevant to building AGI.
Think about Eliezer’s risk estimates of certain game- and decision-theoretic thought experiments. What could possibly be less risky than those thought experiments while still retaining enough rigor that one could judge whether actual progress has been made?
There is plenty of work that could be done and released that is not directly about AGI construction or the few other secrecy-requiring areas.
Right, the friendly AI problem is incredibly broad. I wish there were a list of known problems that need to be solved. But I am pretty sure there is a large category of problems that Eliezer would be reluctant even to talk about.