I think it’s a bit hard to tell how influential this post has been, though my best guess is “very”. It’s clear that around the time this post was published there was a pretty large shift in the strategies that I and a lot of other people pursued, with “slowing down AI” becoming a much more common goal.
I think (most of) the arguments in this post are good. I also think that when I read an initial draft of this post (around 1.5 years ago) and had a very hesitant reaction to the core strategy it proposes, I was picking up on something important, and I want to award Bayes points to that part of me given how things have been playing out so far.
That said, as I’ve seen people around me adopt strategies to slow down AI, I’ve seen it done on a basis that feels much more rhetorical, one that often directly violates virtues and perspectives I hold very dear. I think it’s really important to understand that technological progress has been the central driving force behind humanity’s success, and that this should establish a huge prior against stopping almost any kind of technological development.
In contrast to that, the majority of arguments for slowing down AI development that I’ve seen find traction are indistinguishable from arguments that apply to a much larger set of technologies, ones which to me clearly do not pose a risk commensurate with the prior we should have against slowdown. Concerns about putting people out of jobs, destroying traditional models of romantic relationships, violating copyright law, or spreading disinformation all seem to me to be the kind of thing that, if you buy them, proves too much: you should end up opposed to a huge chunk of technological progress.
And I can feel the pull of these things in myself as well. I can see how it would be easy to gain at least local traction at slowing down AI by allying myself with people who are concerned about these things and don’t care much about the existential-risk angle. I think allying is probably even a good call, though it’s hard to be in an alliance without conflating your beliefs with the beliefs of your alliance.
I think this post did engage with these considerations pretty well, though I am still hesitant and uncertain about the degree to which we can slow down AI without sacrificing too much collective sanity and epistemics, and I am a bit worried that, now that things are in the realm of public debate and advocacy, it’s too late to have a sane conversation about those sacrifices.
Overall, I think this post is among the most important posts of 2022.