This was a good post. I’d bookmark it, but unfortunately that functionality doesn’t exist yet.* (Though if you have any open source bookmark plugins to recommend, that’d be helpful.) I’m mostly responding to say this though:
While it wasn’t mentioned in the abstract of the paper (above), this was stated once in the body:
This paper examines algorithmic depolarization interventions with the goal of conflict transformation: not suppressing or eliminating conflict but moving towards more constructive conflict.
I thought this was worth calling out, although I am still in the process of reading that 10⁄14 page paper. (There are 4 pages of references.)
And some other commentary while I’m here:
It’s common for people to be worried about recommender systems being addictive
I imagine a recommender system is only as good as the content it has to work with, and that’s before getting into ‘what does the recommender system have to go off of’ and ‘what does it do with what it has’.
Whenever I talk to someone who seems to have actually studied the topic in depth, it seems they think that there are problems with recommender systems, but they are different from what people usually imagine.
This part wasn’t elaborated on. To put it a different way:
It’s common for people to be worried about recommender systems being addictive or promoting filter bubbles etc, but as far as I can tell, they don’t have very good arguments for these worries.
Do the people ‘who know what’s going on’ (presumably) have better arguments? Do you?
*I also have a suspicion it’s not being used; i.e., past a certain number of bookmarks (say, 10), it’s not actually feasible to use the LW interface to access them.
Do the people ‘who know what’s going on’ (presumably) have better arguments?
Possibly, but if so, I haven’t seen them.
My current belief is “who knows if there’s a major problem with recommender systems or not”. I’m not willing to defer to the people who’ve studied them, i.e. to say “there probably is a problem because the people who’ve studied recommender systems think there’s a problem”, because as far as I can tell all of those people got interested in recommender systems because of the bad arguments, so it feels a bit suspicious / selection-effect-y that they still think there are problems. I would engage with arguments they provide and come to my own conclusions (whereas I probably would not engage with arguments from other sources).
Do you?
No. I just have anecdotal experience + armchair speculation, which I don’t expect to be much better at uncovering the truth than the arguments I’m critiquing.
This might still be good for generating ideas (even if it’s not far more accurate than brainstorming or trying to generate models via ‘brute force’).
But the real trick is—how do we test these sorts of ideas?
Agreed this can be useful for generating ideas (and I do tons of it myself; I have hundreds of pages of docs filled with speculation on AI; I’d probably think most of it is garbage if I went back and looked at it now).
We can test the ideas in the normal way? Run RCTs, do observational studies, collect statistics, conduct literature reviews, make predictions and check them, etc. The specific methods are going to depend on the question at hand (e.g. in my case, it was “read thousands of articles and papers on AI + AI safety”).
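For the “make predictions and check them” part of that list, here’s a minimal sketch of what the bookkeeping could look like (my own illustration; the function names and numbers are made up, not something from the post or the paper):

```python
# Sketch: scoring a batch of probabilistic predictions against observed 0/1
# outcomes with a Brier score, plus a crude difference-in-means estimate for
# a two-arm RCT. Illustrative only.

from statistics import mean

def brier_score(predictions, outcomes):
    """Mean squared error between forecast probabilities and 0/1 outcomes.
    Lower is better; always guessing 0.5 scores 0.25."""
    return mean((p - o) ** 2 for p, o in zip(predictions, outcomes))

def rct_effect(treatment_outcomes, control_outcomes):
    """Naive RCT treatment-effect estimate: difference in group means.
    (A real analysis would also report uncertainty, e.g. a confidence interval.)"""
    return mean(treatment_outcomes) - mean(control_outcomes)

# Example with made-up numbers: three forecasts, then the observed outcomes.
print(brier_score([0.9, 0.2, 0.6], [1, 0, 0]))        # ~0.137
print(rct_effect([3.1, 2.8, 3.5], [2.6, 2.9, 2.4]))   # 0.5
```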