Well, I think it’s not very hard (even in our circles) to find people doing consequentialism badly, looking only at short-term / easily observable consequences (I think this is especially common among newer EA folk, and some wannabe-Slytherin types). It seemed likely Zvi meant a stronger version of the claim, though, which I’m not sure how I’d operationalize.
Name one example? :)
I’m here from the future to say “Sam Bankman-Fried”.