I don’t think you’re using the correct model of public discourse. Debates with e/acc on Twitter don’t aim to persuade the e/acc side. The point is to give the public live evidence that your position is well-founded, coherent, and likely correct, and that theirs is less so.
When you debate someone publicly (outside nice rationalist circles), it’s very hard to get them to acknowledge being wrong, because doing so is low-status. But people outside the debate can quietly change their minds and make it look like that was their position all along, avoiding the status blow.
“It’s risky.” “We’re gambling with our lives on an unproven technology.” Don’t get bogged down in irrelevant philosophical debates.
It’s literally an invitation to irrelevant philosophical debates about how all technologies are risky and yet we’re still alive, and I don’t know how to get out of that without referring to probabilities and expected values.
“p(doom)” has become a shibboleth for the X-risk subculture, and an easy target of derision for anyone outside it.
This kinda misses the bigger picture? “Belief that there is a substantial probability of AI killing everyone” is a 1000x stronger shibboleth and a much easier target for derision.
Oh I agree the main goal is to convince onlookers, and I think the same ideas apply there. If you use language that’s easily mapped to concepts like “unearned confidence”, the onlooker is more likely to dismiss whatever you’re saying.
It’s literally an invitation to irrelevant philosophical debates about how all technologies are risky and yet we’re still alive, and I don’t know how to get out of that without referring to probabilities and expected values.
If that comes up, yes. But then they’re the ones who brought up the fact that probability is relevant, so you’re not the one framing it that way first.
This kinda misses the bigger picture? “Belief that there is a substantial probability of AI killing everyone” is a 1000x stronger shibboleth and a much easier target for derision.
Hmm. I disagree, though I’m not sure exactly why. I think it’s something like: people focus on short phrases and commonly used terms more than they focus on ideas. Like how the SSC post I linked gives the example of Republicans being just fine with drug legalization as long as it’s framed in right-wing terms. Or how talking positively about eugenics will get you hated, but talking positively about embryo selection and laws against incest will be taken seriously. I suspect that most people don’t actually take positions on ideas at all; they take positions on specific tribal signals that happen to be associated with ideas.
Consider all the people who reject the label of “effective altruist”, but try to donate to effective charity anyway. That seems like a good thing to me; some people don’t want to be associated with the tribe for some political reason, and if they’re still trying to make the world a better place, great! We want something similar to be the case with AI risk; people may reject the labels of “doomer” or “rationalist”, but still think AI is risky, and using more complicated and varied phrases to describe that outcome will make people more open to it.
I am one of those people; I don’t consider myself EA due to its strong association with atheism, but I am nonetheless very much for slowing down AGI before it kills us all.