Even if they were to make some actual progress, most of it would probably be regarded as too dangerous to release. Therefore I predict that you won’t see much more of it, ever.
I’d be surprised. There is plenty left that I would expect Eliezer to consider releasable.
Carl Shulman wrote that Eliezer is reluctant to release work that he thinks is relevant to building AGI.
Think about his risk estimations of certain game- and decision-theoretic thought experiments. What could possibly be less risky than those thought experiments while still retaining enough rigor that one could judge whether actual progress has been made?
(Suggest substituting “Eliezer” for “he” in the above sentence.)
There is plenty of work that could be done and released that is not directly about AGI construction or the other few secrecy-requiring areas.
Right, the friendly AI problem is incredibly broad. I wish there were a list of known problems that need to be solved. But I am pretty sure there is a large category of problems that Eliezer would be reluctant even to talk about.