That’s a tough one. My serious answer is probably getting in contact with Shahar Avin and arranging a direct transfer to him; he has done some unusually high-EV research in that area (disinformation, counterintelligence, and information monopolization causing a civilization-wide cascade of epistemic failures, including ones impacting EA and AI safety). For example, the Ukraine-related infowar has created an equilibrium that basically gives both sides a free pass to reintroduce large-scale torture of civilians, and there are tons of ways that information asymmetry pushes big people out-of-distribution and introduces incentives to hurt little people.