Have you written about this in a post or paper somewhere? (I’m thinking of writing a post about this and related topics and would like to read and build upon existing literature.)
Not usefully. If I had to link to something on it, I might link to the Ought mission page, but I don’t have any substantive analysis to point to.
As far as how we interact with each other, it seems likely that once superintelligent AIs come into existence, all or most interactions between humans will be mediated through AIs, which surely will have a much greater effect than any other change in communications technology?
I agree with “larger effect than historical changes” but not “larger effect than all changes that we could speculate about” or even “larger effect than all changes between now and when superintelligent AIs come into existence.”
If AI is aligned, then it’s also worth noting that this effect is large but not obviously unusually disruptive, since e.g. the AI is trying to think about how to minimize it (though it may be doing that imperfectly).
As a random example, it seems plausible to me that changes to the way society is organized over the last few centuries (what kinds of jobs people do, compulsory schooling, weaker connections to family, lower religiosity) have had a larger unendorsed impact on values than AI will. I don’t see any principled reason to expect those changes to be positive while the changes from AI are negative; it seems like in expectation both of them would be positive, but for the opportunity cost effect (where today we have the option to let our values and views change in whatever way we most endorse, and we foreclose this option when we let our values drift in any way less than maximally reflectively).