Personally, I would like LessWrong to be a place where I can talk about AI safety and existential risk without being implicitly associated with lots of other political content that I may or may not agree with.
Good point, I agree this is probably a dealbreaker for a lot of people (maybe even me) unless we can think of some way to avoid it. I can’t help but think that we have to find a solution besides “just don’t talk about politics” though, because x-risk is inherently political and as the movement gets bigger it’s going to inevitably come into conflict with other people’s politics. (See here for an example of it starting to happen already.) If by the time that happens in full force, we’re still mostly political naifs with little understanding of how politics works in general or what drives particular political ideas, how is that going to work out well? (ETA: This is not an entirely rhetorical question, BTW. If anyone can see how things work out well in the end despite LW never getting rid of the “don’t talk about politics” norm, I really want to hear that so I can maybe work in that direction instead.)
I can’t help but think that we have to find a solution besides “just don’t talk about politics” though, because x-risk is inherently political and as the movement gets bigger it’s going to inevitably come into conflict with other people’s politics.
My preferred solution to this problem continues to be just taking political discussions offline. I recognize that this is difficult for people not situated somewhere like the Bay Area, where there are lots of other rationalist/effective altruist people around to talk to, but nevertheless I still think it’s the best solution.
EDITS:
See here for an example of it starting to happen already.
I also agree with Weyl’s point here that another very effective thing to do is to talk loudly and publicly about racism, sexism, etc.—though obviously as Eliezer points out that’s not always possible, as not every important subject necessarily has such a component.
This is not an entirely rhetorical question, BTW. If anyone can see how things work out well in the end despite LW never getting rid of the “don’t talk about politics” norm, I really want to hear that so I can maybe work in that direction instead.
My answer would be that we figure out how to engage with politics, but we do it offline rather than using a public forum like LW.
How much of an efficiency hit do you think taking all discussion of a subject offline (“in-person”) involves? For example if all discussions about AI safety could only be done in person (no forums, journals, conferences, blogs, etc.), how much would that slow down progress?
How much of an efficiency hit do you think taking all discussion of a subject offline (“in-person”) involves?
Probably a good deal for anything academic (like AI safety), but not at all for politics. I think discussions focused on persuasion/debate/argument/etc. are pretty universally bad (e.g. not truth-tracking), and that online discussion lends itself particularly well to falling into such discussions. It is sometimes possible to avoid this failure mode, but imo basically only if the conversations are kept highly academic and avoid any hot-button issues (e.g. as in some online AI safety discussions, though not all). I think this is basically impossible for politics, so I suspect that not having the ability to talk about politics online won’t be much of a problem (and might even be quite helpful, since I suspect it would overall raise the level of political discourse).
anything academic (like AI safety), but not at all for politics [...] avoid any hot-button issues
“Politics” isn’t a separate magisterium, though; what counts as a “hot-button issue” is a function of the particular socio-psychological forces operative in the culture of a particular place and time. Groups of humans (including such groups as “corporations” or “governments”) are real things in the real physical universe and it should be possible to build predictive models of their behavior using the same general laws of cognition that apply to everything else.
To this one might reply, “Oh, sure, I’m not objecting to the study of sociology, social psychology, economics, history, &c., just politics.” This sort of works if you define “political” as “of or concerning any topic that seems likely to trigger motivated reasoning and coalition-formation among the given participants.” But I don’t see how you can make that kind of clean separation in a principled way, and that matters if you care about getting the right answer to questions that have been infused with “political” connotations in the local culture of the particular place and time in which you happen to live.
Put it this way: astronomy is not a “political” topic in Berkeley 2019. In Rome 1632, it was. The individual cognitive algorithms and collective “discourse algorithms” that can’t just get the right answer to questions that seem “political” in Berkeley 2019, would have also failed to get the right answer on heliocentrism in Rome 1632—and I really doubt they’re adequate to solve AGI alignment in Berkeley 2039.
This sort of existence argument is reasonable for hypothetical superhuman AIs, but real-world human cognition is extremely sensitive to the structure we can find or make up in the world. Sure, just saying “politics” does not provide a clear reference class, so it would be helpful to understand what you want to avoid about politics and engineer around it. My hunch is that avoiding your highly-technical definition of bad discourse (the one you are using to replace “politics”) just leads to a lot of time spent on political analysis, with approximately the same topics avoided as under a very simple rule of thumb.
I stopped associating with or mentioning LW in real life largely because of the political (and maybe partly cultural) baggage of several years ago. Not even because I had any particular problem with the debate on the site or the opinions of everyone in aggregate, but because there was just too much stuff to cherry-pick from in our world of guilt by association. Too many mixed signals for people to judge me by.
It is sometimes possible to avoid this failure mode, but imo basically only if the conversations are kept highly academic and avoid any hot-button issues (e.g. as in some online AI safety discussions, though not all). I think this is basically impossible for politics
I disagree, and think LW can actually do ok, and probably even better with some additional safeguards around political discussions. You weren’t around yet when we had the big 2009 political debate that I referenced in the OP, but I think that one worked out pretty well in the end. And I note that (at least from my perspective) a lot of progress in that debate was made online as opposed to in person, even though presumably many parallel offline discussions were also happening.
so I suspect that not having the ability to talk about politics online won’t be much of a problem (and might even be quite helpful, since I suspect it would overall raise the level of political discourse).
Do you think just talking about politics in person is good enough for making enough intellectual progress and disseminating that widely enough to eventually solve the political problems around AI safety and x-risks? Even if I didn’t think there’s an efficiency hit relative to current ways of discussing politics online, I would be quite worried about that and trying to find ways to move beyond just talking in person...
I disagree, and think LW can actually do ok, and probably even better with some additional safeguards around political discussions. You weren’t around yet when we had the big 2009 political debate that I referenced in the OP, but I think that one worked out pretty well in the end.
Do you think having that debate online was something that needed to happen for AI safety/x-risk? Do you think it benefited AI safety at all? I’m genuinely curious. My bet would be the opposite—that it caused AI safety to be more associated with political drama that helped further taint it.
I think it was bad in the short term (it was at least a distraction, and maybe tainted AI safety by association although I don’t have any personal knowledge of that), but probably good in the long run, because it gave people a good understanding of one political phenomenon (i.e., the giving and taking of offense) which let them better navigate similar situations in the future. In other words, if the debate hadn’t happened online and the resulting understanding widely propagated through this community, there probably would have been more political drama over time because people wouldn’t have had a good understanding of the how and why of avoiding offense.
But I do agree that “taint by association” is a big problem going forward, and I’m not sure what to do about that yet. By mentioning the 2009 debate I was mainly trying to establish that if that problem could be solved or ameliorated to a large degree, then online political discussions seem to be worth having because they can be pretty productive.
You said: “If by the time that happens in full force, we’re still mostly political naifs with little understanding of how politics works in general or what drives particular political ideas, how is that going to work out well?”
I think the debate here might rely on an unnecessary dichotomy: either I discuss politics on LW/in the rationalist community, or I will have little (or no) understanding of it.
Another solution would be to find spaces for discussing politics that one can join.
I believe that we won’t get a better understanding of politics by discussing it here, as it’s more a form of empirical knowledge you acquire.
Some preliminary thoughts on how to learn it outside LW or the rationalist community:
join or work (even if just for a few months) for a political party or a member of parliament
go to debates held by different political groups
write about public policy solutions and disagreements
help in a national campaign; you will learn a lot about how people in politics reason
join other platforms for discussing politics (if interested in AI and in the EU: the EU AI Alliance)
Other forms of learning more about politics which wouldn’t be political by the definition above:
learn and discuss (etc.) political theory (by the definition above, this is not “political”)
I might add more later. I also have a few other ideas I can share in PM.
Another solution would be to find spaces for discussing politics that one can join.
There are spaces I can join (and have joined) to do politics or observe politics but not so much to discuss politics, because the people there lack the rationality skills or background knowledge (e.g., the basics of Bayesian epistemology, or an understanding of game theory in general and signaling in particular) to do so.
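To give a concrete (if toy) sense of the background knowledge I mean, here is a minimal Bayesian-updating sketch, with entirely made-up numbers, of how the signaling structure behind a political claim determines how much one should update on it:

```python
# A toy sketch (hypothetical numbers throughout) of Bayes' rule applied to
# a strategic speaker: how much a claim should move your beliefs depends on
# whether it is cheap talk or a costly signal.

def posterior(prior, p_msg_given_true, p_msg_given_false):
    """P(claim is true | speaker made the claim), by Bayes' rule."""
    numerator = prior * p_msg_given_true
    evidence = numerator + (1 - prior) * p_msg_given_false
    return numerator / evidence

prior = 0.5  # initial credence that the claim is true

# Cheap talk: the speaker would say it whether or not it's true,
# so the claim is nearly uninformative and the posterior barely moves.
print(posterior(prior, p_msg_given_true=0.95, p_msg_given_false=0.90))  # ~0.51

# Costly signal: making the claim falsely is expensive (reputation,
# legal exposure), so the claim is informative and the posterior jumps.
print(posterior(prior, p_msg_given_true=0.95, p_msg_given_false=0.10))  # ~0.90
```

Most of the spaces where one can do or observe politics have no shared vocabulary for even this simple distinction.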
I believe that we won’t get a better understanding of politics by discussing it here, as it’s more a form of empirical knowledge you acquire.
I think we need both, because after observing “politics in the wild”, I need to systemize the patterns I observed, understand why things happened the way they did, predict whether the patterns/trends I saw are likely to continue, etc. And it’s much easier to do that with other people’s help than to do it alone.
I really doubt they’re adequate to solve AGI alignment in Berkeley 2039.
Is this because you think technical alignment work will be a political issue in 2039?
“Free Speech and Triskaidekaphobic Calculators: A Reply to Hubinger on the Relevance of Public Online Discussion to Existential Risk”
I disagree, and think LW can actually do ok, and probably even better with some additional safeguards around political discussions.
What safeguards?