There is plenty left that I would expect Eliezer to consider releasable.
Carl Shulman wrote that Eliezer is reluctant to release work that Eliezer thinks is relevant to building AGI.
Think about his risk estimates for certain game- and decision-theoretic thought experiments. What could possibly be less risky than those thought experiments while still retaining enough rigor that one could judge whether actual progress has been made?
There is plenty of work that could be done and released that is not directly about AGI construction or the few other secrecy-requiring areas.
Right, the friendly AI problem is incredibly broad. I wish there were a list of known problems that need to be solved. But I am pretty sure there is a large category of problems that Eliezer would be reluctant to even talk about.