I disagree with the reasoning in this reply to Tom (and in nostalgebraist’s reply). If stuff like this is net-positive to post about on LW, the chain of reasoning to arrive at that conclusion seems to me like it has to look different from the reasoning in these comments. E.g.:
“It seems unlikely that comments on lesswrong speed up capabilities research”—If “unlikely” here meant “only 40% likely”, then it would obviously be a bad idea to post a capabilities insight. The degree of unlikeliness obviously matters, and it has to be weighed against the expected benefit of sharing the insight.
At the policy level, “How does this weigh against the expected benefits?” has to take into account that the quality and rarity of LWers’ insights is likely to vary a lot by individual and across time; and it has to take into account that the risk level of LW posts is very correlated with the benefit level. In the worlds where there’s not much future risk of a LWer someday posting a dangerous capabilities insight, there’s also less future benefit to LW posts, since we’re probably not generating many useful ideas in general (especially about AGI and AGI alignment).
“it seems important to know which problems in capabilities research can be alleviated”—What specific safety progress does this enable? (Maybe there’s something, but ‘it seems safety-relevant because it’s a fact about ML’ seems to prove too much. What’s the actual implicit path by which humanity ends up safely navigating the AGI transition?)
‘We should require a high bar before we’re willing to not-post potentially-world-destroying information to LW, because LW has a strong commitment to epistemic rationality’ seems like an obviously terrible argument to me. People should not post stuff to the public Internet that destroys the world just because the place they’re posting is a website that cares about Bayesianism and belief accuracy.
(Note that this is a high-level point I’m making about the kind of arguments being made here, not about the object-level question.)
In the worlds where there’s not much future risk of a LWer someday posting a dangerous capabilities insight, there’s also less future benefit to LW posts, since we’re probably not generating many useful ideas in general (especially about AGI and AGI alignment).
This seems correct, though it’s still worth fleshing out that it seems possible to have LW posts that are helpful for alignment but not for capabilities: namely, posts that summarize insights from capabilities research that are known to nearly all capabilities researchers but to few alignment researchers.
The main reason I shifted more toward your viewpoint is that capabilities insights might influence people who do not yet know much about capabilities to work on capabilities in the future instead of alignment. Therefore, I’m also not sure whether Marius’ heuristic for deciding if something is infohazardous (“Has company X, which cares mostly about capabilities, likely thought about this already?”) is safe.
‘We should require a high bar before we’re willing to not-post potentially-world-destroying information to LW, because LW has a strong commitment to epistemic rationality’ seems like an obviously terrible argument to me. People should not post stuff to the public Internet that destroys the world just because the place they’re posting is a website that cares about Bayesianism and belief accuracy.
Yes, that seems correct (though I’m a bit unhappy about you bluntly straw-manning my position). I think after reflection I would phrase my point as follows: “There is a conflict between LessWrong’s commitment to epistemic rationality on the one hand, and the commitment to restrict infohazards on the other. LessWrong’s commitment to epistemic rationality exists for good reasons, and should not be given up lightly. Therefore, whenever we restrict discussion and information about certain topics, we should have thought about it with great care.”
I don’t yet have a fleshed-out view on this, but I did move a bit in Tom’s direction.
‘We should require a high bar before we’re willing to not-post potentially-world-destroying information to LW, because LW has a strong commitment to epistemic rationality’ seems like an obviously terrible argument to me.
I think that argument is good if you expand out its reasoning. The reason we have a strong commitment to epistemic rationality is that learning and teaching true things is almost always very good. You need to establish a fair chunk of probable harm to outweigh it.