Everyone, everyone, literally everyone in AI alignment is severely wrong about at least one core thing, and disagreements still persist on seemingly-obviously-foolish things.
If by ‘severely wrong about at least one core thing’ you just mean ‘systematically, severely miscalibrated on some very important topic’, then my guess is that many people operating within the rough prosaic alignment paradigm probably don’t suffer from this issue. It’s just not that hard to be roughly calibrated. This is perhaps a somewhat pedantic technical point.