It's not that people's decision-making is so reliably wrong that you can consistently reverse their opinions and get something that accurately tracks reality. If that were the case, they would already be tracking reality very well implicitly. Reversed stupidity is not intelligence.
Sure, I think this helps tease out the moral valence point I was trying to make. “Don’t allow them near” implies their advice is actively harmful, which in turn suggests that reversing it could be a good idea. But as you say, this is implausible. A more plausible statement is that their advice is basically noise—you shouldn’t pay too much attention to it. I expect OP would’ve said something like that if they were focused on descriptive accuracy rather than scapegoating.
Another way to illuminate the moral dimension of this conversation: if we're talking about poor decision-making, perhaps MIRI and FHI should also be discussed? They did a lot to create interest in AGI, and MIRI failed, by its own lights, to create good alignment researchers. Now, after doing advocacy off and on for years and helping create this situation, they're pivoting to 100% advocacy.
Could MIRI be made up of good people who are “great at technical stuff”, yet apt to shoot themselves in the foot when it comes to communicating with the public? It’s hard for me to imagine an upvoted post on this forum saying “MIRI shouldn’t be allowed anywhere near AI safety communications”.