LessWrong Cult
When typing ‘lesswrong’ into Google, the first autocomplete suggestion is ‘lesswrong cult’.
Is that something we should attempt to change? I believe that you can ask Google to remove those predictions.
I posted this as a shortform, but I figured I might as well add it here too.
I repeat my warning: if everyone’s first reaction is to type “lesswrong cult” into Google, maybe that is one of the factors that influence the algorithm. ;)
So is typing “lesswrong cult” on publicly-accessible websites. Lesswrong cult lesswrong cult lesswrong cult lesswrong cult lesswrong cult.
Keep doing it, and the top result for “lesswrong cult” will be the March 2022 Welcome & Open Thread.
From my perspective, that is an acceptable outcome.
Reason #79 why language models will be hard to train: one of the webpages in your dataset is just a couple of forum comments and then 60000 repetitions of “lesswrong cult.”
Google Maps won’t correct the hours for my local Safeway supermarket, which have been wrong for years. Asking Google Search, which is even more automated and relies even less on manual user input than Google Maps, would probably do nothing. On the rare chance it did accomplish something, that “something” would probably just be to trigger the Streisand effect.
Shouting “how dare you call us a cult” makes you look like a cult. The correct response is to laugh it off.
I agree that getting into public debates about LessWrong’s cult status would be a bad idea and would likely trigger the Streisand effect.
But reporting an automated search prediction doesn’t seem like the sort of thing that would start an argument, and the report isn’t publicly visible anyway (to my knowledge).
The impact of an effort to remove the prediction is likely very small or nonexistent, but the effort involved also seems low, and the impact is plausibly non-zero on the margin. Priming hasn’t really replicated, but the association between LessWrong and ‘cult’ being one of the first things visible to anyone searching for the forum doesn’t strike me as a good look.