0% that the tool itself will make the current comment-ordering and discourse situation on platforms such as Twitter, Facebook, and YouTube any worse. It will be obvious, and consistent across applications, whether the tool prioritises thought-provoking, insightful, and reconciling comments or bias-confirmatory, groupthink-ey, and combative ones.
For example, the tool could rank comments by the decreasing value of Expectation[user reacts “Insightful”] * Expectation[user reacts “Changed my mind”]. Unless the model is trained on a dataset where users deliberately coordinated to completely reverse the semantics of the “Insightful” reaction (i.e., they always voted “Insightful” as if it meant “Combative”, and vice versa), the ranking will either be no better than the status quo or strictly better. All the more so if the model is trained on the LW data, which is high quality (and though there are concerns about whether an SSM trained on the LW reactions data can generalise beyond LW, as I noted in this comment, the worst-case risk here is again uselessness, not harm).
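The ranking rule above can be sketched in a few lines. This is a minimal illustration, not the tool's actual implementation: the two probabilities stand in for a trained model's predicted reaction expectations, and the comment texts and numbers are made up.

```python
from dataclasses import dataclass

@dataclass
class Comment:
    text: str
    p_insightful: float     # stand-in for Expectation[user reacts "Insightful"]
    p_changed_mind: float   # stand-in for Expectation[user reacts "Changed my mind"]

def rank_comments(comments):
    """Order comments by decreasing E[Insightful] * E[Changed my mind]."""
    return sorted(comments,
                  key=lambda c: c.p_insightful * c.p_changed_mind,
                  reverse=True)

comments = [
    Comment("me too!", 0.05, 0.01),
    Comment("here is a counter-argument with sources", 0.70, 0.40),
    Comment("you are wrong and stupid", 0.02, 0.01),
]
for c in rank_comments(comments):
    print(round(c.p_insightful * c.p_changed_mind, 3), c.text)
```

Note that multiplying the two expectations (rather than, say, summing them) means a comment must score non-trivially on *both* axes to rank highly, which is one way to encode the "insightful and mind-changing, not merely agreeable" intent.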
Two caveats:
You can imagine a “dual-use technology risk” of sorts: if such an SSM proves trainable and yields good comment ordering, someone could give it a Waluigi spin and put out a version of the tool, an ultimate “filter bubble that works across all websites”, that leverages the same SSM to prioritise the most bias-confirmatory and groupthink-ey comments. The cynical projection is then that people will actually flock to that tool in large numbers, accelerating polarisation.
I think this risk is not completely negligible, but it’s a small fraction of the risk that people simply won’t use [BetterDiscourse] because they are mostly interested in confirming their pre-existing beliefs. And again, if escapism proves that rampant, humanity is doomed through many other AI-enabled paths anyway, such as AI romantic partners.
It’s also plausible that even the best discourse management in the current social-network and discourse topology (hyper-connected; many interactions with people you have never met in real life; often not contextualised by a particular physical location or issue, but rather about high-level, abstract issues such as country-level and global policy) will be worse for polarisation than some discourse management in a very different community topology, namely one where communities are very localised. See this Kurzgesagt video where this is explained.
This doesn’t seem relevant, though, because there is just no path back to the “old internet”. Also, national politics has to be discussed somewhere beyond the parliament, and global politics somewhere beyond the UN and international political conferences.
0% that the tool itself will make the situation with the current comment ordering and discourse on platforms such as Twitter, Facebook, YouTube worse.
Thanks for the detailed answer, but I’m more interested in polarization per se than in the value of comment ordering. Indeed, we could imagine that your tool behaves exactly as you intend, yet that makes the memetic world less diverse and therefore more fragile (much as monocultures tend to collapse now and then). What’d be your rough range for this larger question?
The system would indeed create a dynamic of converging on the most reasonable positions (such as that climate change is not a hoax and is man-made, etc.), which you can read as a homogenisation of views. But it also naturally keeps itself out of complete balance: when views become sufficiently homogeneous in a community or in society at large, most comments carry low information value for most readers, yet in such a muted environment any promising new theory or novel perspective will receive more attention than it would in a highly heterogeneous belief landscape. This creates an incentive for producing such new theories and perspectives.
Thus, the discourse and the belief landscape as a whole should equilibrate at some “not too homogeneous, not too heterogeneous” level.
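The feedback loop described above can be made concrete with a deliberately crude toy model (my own illustration, not anything from the tool's design): convergence erodes heterogeneity, while the attention premium on novel views, which grows as the landscape homogenises, pushes it back up. All rates here are made-up constants.

```python
def step(h, convergence=0.3, novelty_gain=0.2, dt=0.1):
    """One small time step of belief-landscape heterogeneity h in [0, 1].

    Convergence erodes heterogeneity in proportion to h; novelty production
    grows with the attention available to new views, which is larger the more
    homogeneous (1 - h) the landscape is.
    """
    dh = -convergence * h + novelty_gain * (1.0 - h)
    return h + dt * dh

h = 0.9  # start from a highly heterogeneous belief landscape
for _ in range(500):
    h = step(h)
print(round(h, 3))  # settles near novelty_gain / (novelty_gain + convergence) = 0.4
```

The point of the toy model is only that opposed pressures of this shape yield a stable interior equilibrium rather than collapse to full homogeneity; the equilibrium level itself depends entirely on the (here invented) rate constants.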