Just to check: surely you're not saying that an extrapolated version of Archimedes, together with a thousand people who agreed with him, wouldn't have turned out OK?
It seems to me that we have fairly strong evidence against the rotten-bastards theory: intelligent, well-informed people IRL seem to converge away from bastardly beliefs. Still, the rotten-bastards theory seems worth thinking about for negative-utilitarian reasons.
I'm not Eliezer, nor am I a pro, but I think I agree with Eliezer's account, and as a first attempt I'd state it something like this...
When X judges that Y should Z, X is judging that Z is the solution to the problem W, where W is a rigid designator for the problem structure implicitly defined by the machinery X and Y share and both use to make desirability judgments (or at least X is asserting that it's shared). Because of W's nature, becoming informed will bring X and Y closer to the solution of W, but wanting-it-when-informed is not what makes that solution moral.
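To make the rigidity point concrete, here's a minimal formal sketch (my own gloss, not Eliezer's notation; $M$, $\mathrm{Problem}$, and $\mathrm{Sol}$ are hypothetical labels): let $M$ be the shared desirability-judgment machinery, and let $W$ be the problem it implicitly defines, fixed by the actual $M$ rather than by whatever machinery an idealized agent would end up with:

\[
W := \mathrm{Problem}(M_{\mathrm{actual}}), \qquad \mathrm{Should}(Y, Z) \;\equiv\; Z = \mathrm{Sol}(W).
\]

On this reading, becoming better informed improves an agent's estimate of $\mathrm{Sol}(W)$ without changing $\mathrm{Sol}(W)$ itself, which is why convergence-under-information tracks the moral answer but doesn't constitute it.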