I think that to properly combat the factors that make PageRank fail here, we need to broaden our analysis. Saying it’s “link farms and other abuse” doesn’t quite get to the heart of the matter: what needs to be prevented is adversarial activity, i.e., concerted efforts to exploit (and thus undermine) the system.
Now, you say “research is a gated community with ethical standards”, and that’s… true to some extent, yes… but are you sure it’s true enough, for this purpose? And would it remain true, if such a system were implemented? (Consider, in other words, that switching to a PageRank-esque system for allocating funding would create clear incentives for adversarial action, where currently there are none!)
“would create clear incentives for adversarial action, where currently there are none”
Well, citation farms already exist, so we know roughly how many people are willing to do that sort of thing. I still think personalized PageRank (a.k.a. PageRank with priors, with the prior perhaps concentrated on a hand-picked set of trustworthy researchers) is a good fit for this problem.
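Here is a minimal sketch of what I mean, in Python. The function name, the dense-matrix representation, and the damping and tolerance values are all just illustrative choices on my part, not anyone’s actual implementation. The only change from vanilla PageRank is that the teleport distribution is concentrated on a seed set of trusted researchers instead of being uniform:

```python
import numpy as np

def personalized_pagerank(adj, seeds, damping=0.85, tol=1e-10, max_iter=1000):
    """Personalized PageRank ('PageRank with priors') on a citation graph.

    adj[i, j] = 1 means node i cites node j. Unlike vanilla PageRank,
    the teleport distribution is concentrated on `seeds` (the trusted
    researchers) rather than uniform, so all rank ultimately flows
    outward from the seed set.
    """
    n = adj.shape[0]
    out_degree = adj.sum(axis=1)
    # Column-stochastic transition matrix: M[j, i] = P(step to j | at i).
    # Columns for nodes with no outgoing citations are left as zeros.
    M = np.divide(adj.T, out_degree,
                  out=np.zeros((n, n)), where=out_degree > 0)
    # Teleport vector: uniform over the trusted seeds, zero elsewhere.
    v = np.zeros(n)
    v[list(seeds)] = 1.0 / len(seeds)
    r = v.copy()
    for _ in range(max_iter):
        # Mass sitting on dangling nodes (no outgoing citations) also
        # teleports back to the seeds.
        dangling = r[out_degree == 0].sum()
        r_new = damping * (M @ r + dangling * v) + (1 - damping) * v
        if np.abs(r_new - r).sum() < tol:
            return r_new
        r = r_new
    return r
```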
To be precise, we have a lower bound on that number.
Google Scholar seems to recommend new papers to me based on, I think, works that I have cited in my own previous publications. The recommendations are about as good as it seems fair to expect from the current level of AI.
One of the issues with PageRank is that it produces a single universal ranking. If you do something like personalized PageRank instead, the issues with adversarial activity are much reduced.
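To illustrate with the `personalized_pagerank` sketch from upthread (the toy graph here is made up): a farm that no trusted seed ever reaches gets exactly zero rank, so citations traded inside the farm buy nothing.

```python
import numpy as np  # plus the personalized_pagerank sketch from above

# Toy citation graph: nodes 0-2 are legitimate papers, nodes 3-4 are a
# citation farm that only cites itself. adj[i, j] = 1 means i cites j.
A = np.array([
    [0, 1, 1, 0, 0],
    [0, 0, 1, 0, 0],
    [1, 0, 0, 0, 0],
    [0, 0, 0, 0, 1],
    [0, 0, 0, 1, 0],
])

print(personalized_pagerank(A, seeds={0}).round(3))
# -> approximately [0.452 0.192 0.356 0.    0.   ]
# Nodes 3 and 4 get exactly zero: no path from the seed reaches them,
# so their mutual citations are worthless under the personalized walk.
# Under a universal-ranking PageRank (uniform teleport), the same
# two-node cycle would soak up real rank.
```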