Some important things that follow from the conditional, I'd say, are the following:
LW becomes less important in worlds where the alignment problem is easy, and IMO this is underrated (though how much depends on how alignment ends up being easy). In particular, depending on how this happens, it's actually plausible that LW was a net negative (though I don't expect that). A big reason is that worlds where alignment is easy are, to a large extent, worlds where the problem is less adversarial and more amenable to normal work, meaning that LWer methods look less useful. The cynical hypothesis lc postulated is that we'd probably underrate this scenario's chances, since it's a scenario that makes us less important in saving the world.
Open source may look a lot better, and in particular the problems of AI shift to something like this: ever more AI progress lets you decouple human welfare from economic progress, meaning capitalism starts looking much less positive, because self-interest is no longer good for human welfare. While I think this is maybe possible to avoid over the long term, I do think dr_s's post is underrated in thinking about AI risks beyond alignment, and unfortunately I consider this outcome plausible over the long run. Link is below:
https://www.lesswrong.com/posts/2ujT9renJwdrcBqcE/the-benevolence-of-the-butcher
I rarely think proposals or blog pieces criticizing capitalism do much to shift my priors on capitalism being good, but I've got to admit that this one definitely shifted mine toward "capitalism will probably need to be radically reformed or dismantled in the future, the way feudalism and slavery were."
The proposal below is thankfully one of the best I've seen for giving people income and a way to live that doesn't rely so much on self-interest.
https://www.peoplespolicyproject.org/projects/social-wealth-fund/
I'll have more to say about this in the future, but for now: given that AI being safe by default is IMO much more plausible than a lot of LWers think, I'm starting to focus less on AI harms from misalignment and more on the problem of AI automating away the things that give us a way to live, in the form of wages.