I removed the “productive” clause from the sentence. It wasn’t really adding much anyway.
I agree with your description of “moving on”, but am not sure what to do with the paragraph. The paragraph is targeted towards a) people who are irrationally/compulsively fixated on acausal extortion, and b) specifically, people whose friends are already kinda sick of hearing about it.
I think it’s an important question, from the inside, how to tell whether you’re irrationally fixated on something vs. rationally orienting to something scary. I think AI is in fact at least moderately likely to kill everyone this century, and, unlike acausal extortion, I think the correct response to that one is “process/integrate the information such that you understand it and have maybe grieved or whatever, and then do whatever seems right afterwards.”
At first glance, the arguments for both AI doom and acausal extortion are probably similarly bewildering to many people, and it’s not clear where the boundary of “okay, I’ve thought about this enough to be roughly oriented” lies. Ideally, I think the OP would engage more with that question, rather than me sort of exasperatedly saying “look, man, acausal extortion isn’t that big a deal, chill out”, but I wasn’t sure how to go about it. I am interested in suggestions.
My model of the target audience was very indignant at the “moving on” suggestion that doesn’t rest on an object-level argument (especially in the context of discussing hypothetical friends who are not taking the concern seriously). Which is neither here nor there, since there is no object-level argument available for this open question/topic. At least there is a meta argument about what’s actually productive to do. But interventions on the level of feelings are not an argument at all; they’re a separate thing that would be motivated by that argument.
the boundary of “okay, I’ve thought about this enough to be roughly oriented”
Curiosity/value demands what’s beyond currently available theory, so the cutoff isn’t about knowing enough; it’s the pragmatics of coping with not being able to find out more with feasible effort.
“look, man, acausal extortion isn’t that big a deal, chill out”
I think a relevant argument is something like anti-prediction: there is a large space of important questions that are all objectively a big deal if there is something to be done about them, but they are nonetheless pragmatically unimportant, because we do not have an attack. Perhaps this one is unusually neglected; that’s some sort of distinction.
I updated the opening section of the post to be a bit less opinionated and more explain-rather-than-persuade-y. I should probably also update the end to match, but that’s all I had time for in this sitting.