While you nod to ‘politics is the mind-killer’, I don’t think the right lesson is being drawn from it, or at least not with enough emphasis.
Whether one is an accelerationist, a Pauser, or an advocate of some nuanced middle path, everyone’s prospects and goals are harmed if the discourse landscape becomes politicized and polarized. All possible movement becomes more difficult.
“Well, of course we don’t want that to happen, but people of group X are in power, so it makes sense to ask how they tend to think and tailor our arguments to them.”
If your argument takes advantage of features of {group X} qua X, then it is almost unavoidably going to run counter to some group Y qua Y (either as a direct consequence of the argument, or because nuance cannot survive public exposure). And if it isn’t, why couldn’t the argument have been made completely apolitically to begin with?
I continue to think that any mention of ideology or party, literally at all, is courting discourse-disaster for all, again no matter what specific policy one is advocating. Do we all remember what happened with COVID masks? Or what is currently happening with the discourse surrounding Elon Musk? Nuance just does not survive public exposure, nobody is going to fix that in the few years we have left, and this is a public document. The best way forward continues to be apolitical good arguments. Yes, those arguments will be directed at whoever holds power at any given time, but that can be done without routing through ideology.
To touch on ideology or alliance, even in passing reference (e.g., the c-word included in the title of this post), is to risk the poison/mind-kill spreading in a way that is basically irreversible, because correcting it (other than with comments like this one simply calling to Stop Referencing Ideology) usually involves Referencing An Ideology. Like a bug stuck in a glue trap, the discourse places yet another limb into the glue in a vain attempt to push itself free.
Whether one is an accelerationist, a Pauser, or an advocate of some nuanced middle path, everyone’s prospects and goals are harmed if the discourse landscape becomes politicized and polarized. ... I continue to think that any mention of ideology or party, literally at all, is courting discourse-disaster for all, again no matter what specific policy one is advocating. ... Like a bug stuck in a glue trap, the discourse places yet another limb into the glue in a vain attempt to push itself free.
I would agree in a world where the proverbial bug hadn’t already made contact with the glue trap, but this very thing has clearly been happening for almost a year, and in a troubling direction. The political left has been fairly casually ‘Everything-Bagel-izing’ AI safety, largely by smuggling in social progressivism that has little to do with the core existential risks, and the right, as a result, increasingly views AI safety as something approximating ‘woke BS stifling rapid innovation.’ The bug is already a bit stuck.
The point we are trying to drive home here is precisely what you’re pointing at: avoiding an AI-induced catastrophe is obviously not a partisan goal. We are watching people in DC slowly lose sight of this critical fact. That is why we’re attempting to explain why basic AI x-risk concerns are genuinely important regardless of one’s ideological leanings, i.e., genuinely important to left-leaning and right-leaning people alike. Very few people seem to have explicitly spelled out the latter case, though, which is why we thought it would be worthwhile to do so here.