These might be some of the most neglected and most strategically relevant ideas about AGI futures: Pareto-topian goal alignment and 'Pareto-preferred futures', meaning futures that would be strongly approximately preferred by more or less everyone: https://www.youtube.com/watch?v=1lqBra8r468. Such futures could be achievable because automation could bring massive economic gains which, if allocated reasonably (not even necessarily perfectly) equitably, could make ~everyone much better off (hence 'strongly approximately preferred by more or less everyone'). I think this discourse could be crucial for incentivizing less racing and more coordination, including e.g. pausing at the right time to allow more AI safety work to get done, yet I see it almost nowhere in the public sphere.