Hi!
I’ve been registered for a few months now, but only rarely have I commented.
Perhaps I’m overly averse to loss of karma? “If you’ve never been downvoted, you’re not commenting enough.”
Suppose we had a G.O.D. that takes N bits of input, and uses the input as a starting-point for running a simulation. If the input contains more than one simulation-program, then it runs all of them.
Now suppose we had 2^N of these machines, each with a different input. The number of instantiations of any given simulation-program will be higher the shorter the program is (not just because a shorter bit-string is by itself more likely, but also because it can fit multiple times on one machine). Finally, even if we let the number of machines shrink to zero (i.e. the machines need not actually be built and run), the same probability distribution still holds. So a shorter program (i.e. a more regular universe) is “more likely” than a longer, more irregular one.
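For small N the counting can be checked by brute force. Here’s a rough Python sketch, using my own toy convention (not anything spelled out above) that a “simulation-program” is just a bit-string and that a machine runs one copy of it for every position at which it occurs in its input; the per-slot frequency of a k-bit program comes out to exactly 2^-k, i.e. exponentially favouring shorter programs:

```python
from itertools import product

def count_instantiations(program: str, n: int) -> int:
    """Count occurrences of `program` as a bit-substring across all n-bit inputs.

    Toy convention (mine): a 'simulation-program' is just a bit-string, and a
    machine runs one copy of it for every position at which it appears in
    that machine's input.
    """
    k = len(program)
    total = 0
    for bits in product("01", repeat=n):
        s = "".join(bits)
        total += sum(1 for i in range(n - k + 1) if s[i:i + k] == program)
    return total

N = 12
for prog in ["0110", "01101001"]:              # a shorter vs. a longer "program"
    c = count_instantiations(prog, N)
    per_slot = c / ((N - len(prog) + 1) * 2 ** N)
    print(prog, c, per_slot, 2.0 ** -len(prog))  # per-slot frequency equals 2^-k
```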
(All very speculative of course.)
How so? Could you clarify your reasoning?
My thinking is: Given that a scientist has read (or looked at) a paper, they’re more likely to cite it if it’s correct and useful than if it’s incorrect. (I’m assuming that affirmative citations are more common than “X & Y said Z but they’re wrong because...” citations.) If that were all that happened, then the number of citations a paper gets would be strongly correlated with its correctness, and we would expect it to be rare for a bad paper to get a lot of citations. However, if we take into account the fact that citations are also used by other scientists as a reading list, then a paper that has already been cited a lot will be read by a lot of people, of whom some will cite it.
So when a paper is published, there are two forces affecting the number of citations it gets. First, the “badness effect” (“This paper sounds iffy, so I won’t cite it”) pushes the number of citations down. Second, the “popularity effect” (a lot of people have read the paper, so a lot of people are potential citers) pushes it up. The magnitude of the popularity effect depends mostly on what happens soon after publication, when readership is small and thus more subject to random variation. Of course, for blatantly erroneous papers the badness effect will still predominate, but in marginal cases (like the aphasia example) the popularity effect can swamp the badness effect. Hence we would expect more bad papers to get widely cited than their quality alone would predict; and the more obviously bad a widely-cited paper is, the stronger the popularity effect it implies.
I suppose one could create a computer simulation if one were interested; I would predict results similar to Simkin & Roychowdhury’s.
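For concreteness, here is the kind of toy sketch I have in mind, in Python (every parameter and the exact citation rule are made up for illustration; this is not Simkin & Roychowdhury’s model). Each new paper builds its reading list by sampling old papers in proportion to their citation counts (the popularity effect), and then cites a paper it has read with probability equal to that paper’s “quality” (the badness effect):

```python
import random

random.seed(0)

NUM_PAPERS = 3000     # papers published in sequence (made-up parameter)
READS_PER_PAPER = 10  # how many existing papers each author reads (made-up)

quality = []    # probability that a reader cites the paper ("badness effect")
citations = []  # running citation counts ("popularity effect")

for t in range(NUM_PAPERS):
    if t > 0:
        # Reading list: weighted by (citations + 1), so already-cited papers
        # are more likely to be read -- the popularity effect.
        weights = [c + 1 for c in citations]
        reading_list = random.choices(range(t), weights=weights,
                                      k=min(READS_PER_PAPER, t))
        for p in set(reading_list):
            # Cite a paper you have read with probability equal to its
            # quality -- the badness effect.
            if random.random() < quality[p]:
                citations[p] += 1
    quality.append(random.random())
    citations.append(0)

# How many low-quality papers made it into the top 20 most-cited?
top = sorted(range(NUM_PAPERS), key=lambda p: citations[p], reverse=True)[:20]
print("low-quality papers among the top 20:",
      sum(1 for p in top if quality[p] < 0.5))
```

Running that tally across a few random seeds would be the quick way to check whether the popularity effect can swamp the badness effect in this setup.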
I am reminded of a paper by Simkin and Roychowdhury in which they argued, on the basis of an analysis of misprints in the reference lists of scientific papers, that most scientists don’t actually read the papers they cite, but instead copy the citations from other papers. From this they show that the heavy citation of a few papers can be explained by random chance alone.
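A crude toy version of the copying mechanism (my own construction, not the model in their paper): one early citer introduces a misprint, and every later citer either copies a randomly chosen earlier citation, misprint and all, or goes back to the original. The fraction of citations that end up repeating the misprint is then a rough fingerprint of how much copying is going on:

```python
import random

random.seed(1)

COPY_PROB = 0.8    # fraction of citers who copy an earlier citation (made up)
NUM_CITERS = 5000  # total citations accumulated by one famous paper (made up)

# Citation records: True means the record carries a particular misprint.
# The very first citer introduces the misprint while reading the original.
records = [True]

for _ in range(NUM_CITERS - 1):
    if random.random() < COPY_PROB:
        # Copy the reference from a randomly chosen earlier citation,
        # inheriting its misprint if it has one.
        records.append(random.choice(records))
    else:
        # Go back to the original paper and transcribe the reference correctly.
        records.append(False)

print(f"fraction of citations repeating the misprint: "
      f"{sum(records) / len(records):.2f}")
```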
Their evidence is not without flaws—the scientists might have just copied the citations for convenience, despite having actually read the papers. Still, we can easily imagine a similar effect arising if the scientists do read the papers they cite, but use the citation lists in other papers to direct their own reading. In that case, a paper that is read and cited once is more likely to be read and cited again, so a small number of papers acquire an unusual prominence independent of their inherent worth.
If we see a significant number of instances where the conclusions of a widely-accepted paper are later debunked by a simple test, then we might begin to suspect that something like this is happening.
This seems to be another case where explicit, overt reliance on a proxy drives a wedge between the proxy and the target.
One solution is to do the CEV in secret and only later reveal this to the public. Of course, as a member of said public, I would instinctively regard with suspicion any organization that did this, and suspect that the proffered explanation (some nonsense about a hypothetical “Dr. Evil”) was a cover for something sinister.