That is why a bad review from a reviewer in a peer-reviewed journal is better than negative or zero karma on a LW post. A reviewer is obliged to find all the errors. A reader of a post simply ignores it if they don't find it interesting enough to engage with. If the reader does comment, they will not search for all the errors, but will just pick whichever point interests them. A zero-karma post provides very little knowledge about how to improve the next post, beyond: "something is wrong, try differently next time."
In some forums this is partly solved by "actionable downvotes," where every downvote must be explained by choosing from a preset list of types (e.g. dangerous, wrong), as on Longecity, or even explained in plain text.
For this reason, ideas like "open reviewing" do not work very well, and we still need traditional scientific journals.
Not true. A reviewer's main job is to give a high-level assessment of the quality of a paper. If the assessment is negative, they usually do not look for all the specific errors in the paper. A detailed list of errors is more common when the reviewer recommends that the journal accept the paper (since then the author(s) can revise it before publication), but even then many reviewers do not provide one, which is why it is common to find peer-reviewed papers with errors in them.
At least, this is the case in math.
Yes, but even in the case of a negative review, reviewers often demonstrate the reason by pointing to several errors, or by listing some high-level reason for the rejection, and that can be used as a form of feedback.