I’m curious about the extent to which you expect the future to be awesome-by-default as long as we avoid all clear catastrophes along the way; vs to what extent you think we just have a decent chance of getting a non-negligible fraction of all potential value (and working to avoid catastrophes is one of the most tractable ways of improving the expected value).
Proposed tentative operationalisation:
World A is just like our world, except that we don’t experience any ~GCR on Earth in the next couple of centuries, and we solve the problem of making competitive intent-aligned AI.
In world B, we also don’t experience any GCR soon and we also solve alignment. In addition, you and your chosen collaborators get to design and implement some long-reflection-style scheme that you think will best capture the aggregate of human and non-human desires. All coordination and cooperation problems on Earth are magically solved. Though no particular values are forced upon anyone, everyone is happy to stop and think about what they really want, and contribute to exercises designed to illuminate this.
How much better do you think world B is compared to world A? (Assuming that a world where Earth-originating intelligence goes extinct has a baseline value of 0.)
I would guess GCRs are generally less impactful than pressures that lead our collective preferences to evolve in a way that we wouldn’t like on reflection. Such failures are unrecoverable catastrophes in the sense that we have no desire to recover, but in a pluralistic society they would not necessarily, or even typically, be global. You could view alignment failures as an example of values drifting, given that the main thing at stake is our preferences about the universe’s future rather than the destruction of Earth-originating intelligent life.
I expect this is the kind of thing I would be working on if I thought that alignment risk was less severe. My best guess about what to do is probably just futurism—understanding what is likely to happen and giving us more time to think about that seems great. Maybe eventually that leads to a different priority.