This post triggers a big “NON-QUANTITATIVE ARGUMENT” alarm in my head.
I’m not super confident in my ability to assess what the quantities are, but I’m extremely confident that they matter. It seems to me like your post could be written in exactly the same way if the “wokeness” phenomenon were “half as large” (fewer people care about it, or they don’t care as strongly). Or if it were twice as large. But this can’t be good – any sensible opinion on this issue has to depend on the scope of the problem, unless you think it’s in principle inconceivable for the wokeness phenomenon to be prevalent enough to matter.
I’ve explained the two categories I’m worried about here, and while there have been some updates since (biggest one: it may be good to talk about politics now if we assume AI safety is going to be politicized anyway), I still think about it in roughly those terms. Is this a framing that makes sense to you?
It very much is a non-quantitative argument—since it’s a matter of principle. The principle being not to let outside perceptions dictate the topic of conversations.
I can think of situations where the principle could be broken, or would be unproductive. If upholding it would make it impossible to have these discussions in the first place (because engaging would mean you get stoned, or something) and hiding is not an option (or still too risky), then it would make sense to move conversations towards the Overton window.
Put differently, the quantity I care about is “ability to have quote rational unquote conversations,” and no amount of outside woke prevalence can change that *as long as it doesn’t drive enough community members away*. It will be a sad day for freedom, and for all of us, if that one day ends up being the case.