It’s clear that the incentives for journals are terrible. We should be looking to fix this. We seem to have a Goodhart’s Law problem, where credibility is measured in citations, but refutations count in the wrong direction. Right now, there are a bunch of web sites that collect abstracts and metadata about citations, but none of them include commenting, voting, or any sort of explicit reputation system. As a result, discussions about papers end up on blogs like this one, where academics are unlikely to ever see them.
Suppose we make an abstracts-and-metadata archive, along the lines of CiteSeer, but with comments and voting. This would yield credibility scores, similar to impact ratings, but also accounting for votes. The reputation system could be refined somewhat beyond that (track author credibility by field and use it to weight votes, collect metadata about what’s a replication or refutation, etc.)
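To make the idea concrete, here’s a minimal sketch of how such a score might combine citations and credibility-weighted votes. Everything here is an assumption for illustration: the class names, the weights, and the choice to count refutations twice against the paper are all hypothetical, not a worked-out design.

```python
from dataclasses import dataclass, field

@dataclass
class Vote:
    voter_credibility: float  # voter's credibility in the paper's field (assumed precomputed)
    value: int                # +1 for an upvote, -1 for a downvote

@dataclass
class Paper:
    citations: int            # total citing papers
    refutations: int          # citing papers flagged as refutations
    votes: list = field(default_factory=list)

def credibility_score(paper, citation_weight=1.0, vote_weight=0.5):
    """Combine citations with credibility-weighted votes.

    Unlike a raw citation count, refutations count *against* the
    paper here, which is the Goodhart problem the post describes.
    The specific weights are arbitrary placeholders.
    """
    citation_term = citation_weight * (paper.citations - 2 * paper.refutations)
    vote_term = vote_weight * sum(v.voter_credibility * v.value
                                  for v in paper.votes)
    return citation_term + vote_term

p = Paper(citations=10, refutations=3,
          votes=[Vote(2.0, +1), Vote(1.0, -1)])
print(credibility_score(p))  # 4.5
```

The key design point is that a refutation flag flips a citation from an asset into a liability, so a heavily refuted paper can’t accumulate credibility just by being widely discussed.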