How much of an efficiency hit do you think taking all discussion of a subject offline (“in-person”) involves?
Probably a good deal for anything academic (like AI safety), but not at all for politics. I think discussions focused on persuasion/debate/argument/etc. are pretty universally bad (e.g. not truth-tracking), and that online discussion lends itself particularly well to falling into such discussions. It is sometimes possible to avoid this failure mode, but imo basically only if the conversations are kept highly academic and clear of any hot-button issues (e.g. as in some online AI safety discussions, though not all). I think this is basically impossible for politics, so I suspect that not having the ability to talk about politics online won’t be much of a problem (and might even be quite helpful, since I suspect it would overall raise the level of political discourse).
anything academic (like AI safety), but not at all for politics [...] clear of any hot-button issues
“Politics” isn’t a separate magisterium, though; what counts as a “hot-button issue” is a function of the particular socio-psychological forces operative in the culture of a particular place and time. Groups of humans (including such groups as “corporations” or “governments”) are real things in the real physical universe and it should be possible to build predictive models of their behavior using the same general laws of cognition that apply to everything else.
To this one might reply, “Oh, sure, I’m not objecting to the study of sociology, social psychology, economics, history, &c., just politics.” This sort of works if you define “political” as “of or concerning any topic that seems likely to trigger motivated reasoning and coalition-formation among the given participants.” But I don’t see how you can make that kind of clean separation in a principled way, and that matters if you care about getting the right answer to questions that have been infused with “political” connotations in the local culture of the particular place and time in which you happen to live.
Put it this way: astronomy is not a “political” topic in Berkeley 2019. In Rome 1632, it was. The individual cognitive algorithms and collective “discourse algorithms” that can’t just get the right answer to questions that seem “political” in Berkeley 2019, would have also failed to get the right answer on heliocentrism in Rome 1632—and I really doubt they’re adequate to solve AGI alignment in Berkeley 2039.
This sort of existence argument is reasonable for hypothetical superhuman AIs, but real-world human cognition is extremely sensitive to the structure we can find or make up in the world. Sure, just saying “politics” does not provide a clear reference class, so it would be helpful to understand what you want to avoid about politics and engineer around it. My hunch is that avoiding whatever highly technical definition of bad discourse you substitute for “politics” just leads to a lot of time spent on that analysis, with approximately the same topics ending up avoided as under the very simple rule of thumb.
I stopped associating with or mentioning LW in real life largely because of the political (and perhaps partly cultural) baggage of several years ago. Not even because I had any particular problem with the debate on the site or with the opinions of everyone in aggregate, but because there was just too much stuff to cherry-pick from in our world of guilt by association. Too many mixed signals for people to judge me by.
It is sometimes possible to avoid this failure mode, but imo basically only if the conversations are kept highly academic and clear of any hot-button issues (e.g. as in some online AI safety discussions, though not all). I think this is basically impossible for politics
I disagree, and think LW can actually do ok, and probably even better with some additional safeguards around political discussions. You weren’t around yet when we had the big 2009 political debate that I referenced in the OP, but I think that one worked out pretty well in the end. And I note that (at least from my perspective) a lot of progress in that debate was made online as opposed to in person, even though presumably many parallel offline discussions were also happening.
so I suspect that not having the ability to talk about politics online won’t be much of a problem (and might even be quite helpful, since I suspect it would overall raise the level of political discourse).
Do you think just talking about politics in person is good enough for making enough intellectual progress, and disseminating it widely enough, to eventually solve the political problems around AI safety and x-risks? Even if I didn’t think there was an efficiency hit relative to current ways of discussing politics online, I would be quite worried about that and would be trying to find ways to move beyond just talking in person...
I disagree, and think LW can actually do ok, and probably even better with some additional safeguards around political discussions. You weren’t around yet when we had the big 2009 political debate that I referenced in the OP, but I think that one worked out pretty well in the end.
Do you think having that debate online was something that needed to happen for AI safety/x-risk? Do you think it benefited AI safety at all? I’m genuinely curious. My bet would be the opposite—that it caused AI safety to become more associated with political drama, which helped further taint it.
I think it was bad in the short term (it was at least a distraction, and maybe tainted AI safety by association, although I don’t have any personal knowledge of that), but probably good in the long run, because it gave people a good understanding of one political phenomenon (i.e., the giving and taking of offense), which let them better navigate similar situations in the future. In other words, if the debate hadn’t happened online and the resulting understanding hadn’t been widely propagated through this community, there probably would have been more political drama over time, because people wouldn’t have had a good understanding of the how and why of avoiding offense.
But I do agree that “taint by association” is a big problem going forward, and I’m not sure what to do about that yet. By mentioning the 2009 debate I was mainly trying to establish that if that problem could be solved or ameliorated to a large degree, then online political discussions seem to be worth having because they can be pretty productive.
I really doubt they’re adequate to solve AGI alignment in Berkeley 2039.
Is this because you think technical alignment work will be a political issue in 2039?
“Free Speech and Triskaidekaphobic Calculators: A Reply to Hubinger on the Relevance of Public Online Discussion to Existential Risk”
with some additional safeguards around political discussions

What safeguards?