anything academic (like AI safety), but not at all for politics [...] avoiding of any hot-button issues
“Politics” isn’t a separate magisterium, though; what counts as a “hot-button issue” is a function of the particular socio-psychological forces operative in the culture of a particular place and time. Groups of humans (including such groups as “corporations” or “governments”) are real things in the real physical universe, and it should be possible to build predictive models of their behavior using the same general laws of cognition that apply to everything else.
To this one might reply, “Oh, sure, I’m not objecting to the study of sociology, social psychology, economics, history, &c., just politics.” This sort of works if you define “political” as “of or concerning any topic that seems likely to trigger motivated reasoning and coalition-formation among the given participants.” But I don’t see how you can make that kind of clean separation in a principled way, and that matters if you care about getting the right answer to questions that have been infused with “political” connotations in the local culture of the particular place and time in which you happen to live.
Put it this way: astronomy is not a “political” topic in Berkeley 2019. In Rome 1632, it was. The individual cognitive algorithms and collective “discourse algorithms” that can’t just get the right answer to questions that seem “political” in Berkeley 2019 would also have failed to get the right answer on heliocentrism in Rome 1632—and I really doubt they’re adequate to solve AGI alignment in Berkeley 2039.
This sort of existence argument is reasonable for hypothetical superhuman AIs, but real-world human cognition is extremely sensitive to the structure we can find or make up in the world. True, just saying “politics” does not pick out a clear reference class, so it would be helpful to identify what exactly you want to avoid about politics and engineer around it. My hunch, though, is that avoiding the highly technical definition of bad discourse that you’re using in place of “politics” just leads to a lot of time spent on political analysis, with approximately the same topics avoided as under a very simple rule of thumb.
I stopped associating with or mentioning LW in real life largely because of the political (and perhaps partly cultural) baggage of several years ago. Not even because I had any particular problem with the debate on the site or with the aggregate opinions of its users, but because there was just too much material to cherry-pick from in our world of guilt by association. Too many mixed signals for people to judge me by.
Is this because you think technical alignment work will be a political issue in 2039?