Eliezer’s TDT draft is a step in the right direction, as it’s at least evidence that some work is getting done, but it’s the sort of thing I’d like to see much, much more of.
Even if they were to make some actual progress, most of it would probably be regarded as too dangerous to release. I therefore predict that you won’t see much more of it, ever.
There’s good material in the sequences, but at the end of the day they’re a series of thoughtful blog posts, not a polished, well-structured series of arguments. The quality is better than some published philosophy, but that’s not saying much.
Indeed! Think about it this way: if Less Wrong had been around for 3,000 years and the field of academic philosophy had been founded a few years ago, then most of academic philosophy would probably be better than Less Wrong.
> Even if they were to make some actual progress, most of it would probably be regarded as too dangerous to release. I therefore predict that you won’t see much more of it, ever.
I’m not sure how true this is, but suppose it is. Then it seems to me that the SIAI has a problem: they need people to take them seriously in order to attract funding and researchers, but they can’t release the evidence that might make people take them seriously, because it’s regarded as “too dangerous”. Dilemma.
Secrecy and a perceived lack of rigour seem likely to go hand in hand. And for those of us outside the SIAI, who are trying to decide whether to take it seriously, said secrecy also makes it seem likely that there is an actual lack of rigour.
Perhaps this just demonstrates that any organization seriously aiming to make FAI has to be secretive, and hence have a bad public image. Which would be interesting. But in that case, the answer to the original question may just be: “We can’t really, because it would be too dangerous”, which would at least be something.
> Indeed! Think about it this way: if Less Wrong had been around for 3,000 years and the field of academic philosophy had been founded a few years ago, then most of academic philosophy would probably be better than Less Wrong.
And perhaps, just perhaps, LW might have something to learn from that older sibling… I appreciate the desire to declare all past philosophy diseased and start again from nothing, but I think it’s misguided. Even if you don’t like much of contemporary philosophy, modern-day philosophers are often well-trained critical thinkers, and so a bit of attention from them might help shape things up a bit.
> Even if they were to make some actual progress, most of it would probably be regarded as too dangerous to release.
I’m not sure that “most of it” is too dangerous to be released. There is quite a lot of research that can be done in the open. If there weren’t, we wouldn’t be trying to write a document like Open Problems in Friendly AI for the public.
You’ve managed to come up with excuses for not posting something as rudimentary as statistics that would substantiate your claims of success for rationality bootcamps.
“That would take too much time!” → So a volunteer can do it for you. → “But it’s private, so we can’t release it.” → So anonymize it. → “That takes too much work too.” → Um? → “Hey, our alums dress nicely now; that should be enough proof.”
Frankly, that doesn’t bode well.
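For what it’s worth, the “anonymize it” step really is small. Below is a minimal Python sketch under invented assumptions: a hypothetical bootcamp_survey.csv with made-up columns (name, email, pre_score, post_score). It is meant only to illustrate how little work anonymizing and summarizing such data takes, not to depict anyone’s actual records or pipeline.

```python
# Hypothetical illustration: anonymize a pre/post survey file and print the
# kind of summary statistics asked for above. File and column names are
# made up for this sketch.
import hashlib

import pandas as pd
from scipy import stats

df = pd.read_csv("bootcamp_survey.csv")  # assumed columns: name, email, pre_score, post_score

# Replace direct identifiers with a salted one-way hash, then drop them.
SALT = "replace-with-a-secret-salt"
df["participant_id"] = [
    hashlib.sha256((SALT + name).encode()).hexdigest()[:12] for name in df["name"]
]
anon = df.drop(columns=["name", "email"])

# Paired before/after comparison of the scores.
result = stats.ttest_rel(anon["post_score"], anon["pre_score"])
change = anon["post_score"] - anon["pre_score"]
print(f"n = {len(anon)}")
print(f"mean change = {change.mean():.2f} (sd = {change.std():.2f})")
print(f"paired t = {result.statistic:.2f}, p = {result.pvalue:.3f}")

anon.to_csv("bootcamp_survey_anonymized.csv", index=False)
```

Salting the hash, rather than hashing names directly, makes it harder to re-identify participants by hashing guessed names.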
It seems that signaling rigor in the hidden domains through a policy of rigor in the open domains would be appropriate, and possibly sufficient. It may be expensive, but hopefully the open work would still be of some benefit in its own right.
> Even if they were to make some actual progress, most of it would probably be regarded as too dangerous to release. I therefore predict that you won’t see much more of it, ever.
That seems unlikely; well, the “too dangerous” part does, at least; I’m less sure about the “regarded” part. The philosophy of digitizing human preferences seems particularly releasable to me, but depending on how you apportion the causes of unFAI between malice and stupidity, it can be a good idea to release pretty much anything that is easier to apply to FAI than to unFAI.
> Even if they were to make some actual progress, most of it would probably be regarded as too dangerous to release. I therefore predict that you won’t see much more of it, ever.
I’d be surprised. There is plenty left that I would expect Eliezer to consider releasable.
> There is plenty left that I would expect Eliezer to consider releasable.
Carl Shulman wrote that Eliezer is reluctant to release work that he thinks is relevant to building AGI.
Think about his risk estimates for certain game-theoretic and decision-theoretic thought experiments. What could possibly be less risky than those thought experiments while still retaining enough rigor that one could judge whether actual progress has been made?
(Suggest substituting “Eliezer” for “he” in the above sentence.)
There is plenty of work that could be done and released that is not directly about AGI construction or the other few secrecy-requiring areas.
Right, the Friendly AI problem is incredibly broad. I wish there were a list of known problems that need to be solved. But I am pretty sure there is a large category of problems that Eliezer would be reluctant to even talk about.