Even if they were to make some actual progress, most of it would probably be regarded as too dangerous to release.
I’m not sure that “most of it” is too dangerous to be released. Quite a lot of research can be done in the open. If that weren’t the case, we wouldn’t be trying to write a document like Open Problems in Friendly AI for the public.
You’ve managed to come up with excuses for not posting something as rudimentary as the statistics that would substantiate your claims of success for the rationality bootcamps.
“That would take too much time!” → So a volunteer can do it for you. → “But it’s private so we can’t release it.” → So anonymize it. → “That takes too much work too.” → Um? → “Hey, our alums dress nicely now, that should be enough proof.”
Frankly, that doesn’t bode well.
It seems that signaling rigor in hidden domains through a policy of rigor in open domains would be appropriate, and possibly sufficient. It may be expensive, but the open-domain work would hopefully be of some benefit in its own right.