Here is another point by @jacobjacob, which I’m copying here in order for it not to be lost in the mists of time:
Though just realised this has some problems if you expected predictors to be better than the evaluators: e.g. they’re like “once the event happens everyone will see I was right, but up until then no one will believe me, so I’ll just lose points by predicting against the evaluators”
Maybe in that case you could eventually also score the evaluators based on the final outcome… or kind of re-compensate people who were wronged the first time…
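The retroactive re-scoring idea above can be sketched as follows. This is a minimal, hypothetical illustration (the function names, the use of Brier-style scoring, and the example numbers are all my assumptions, not part of the original proposal): predictors are first scored against the evaluators’ consensus, and once the event resolves everyone is re-scored against the true outcome, so a contrarian who was right ends up compensated.

```python
def brier_loss(prediction: float, outcome: float) -> float:
    """Squared error between a forecast probability and an outcome; lower is better."""
    return (prediction - outcome) ** 2

def provisional_scores(predictions: dict, evaluator_consensus: float) -> dict:
    """Score each predictor against the evaluators' consensus (negated loss, higher is better)."""
    return {name: -brier_loss(p, evaluator_consensus) for name, p in predictions.items()}

def final_scores(predictions: dict, true_outcome: float) -> dict:
    """Re-score each predictor against the resolved outcome (0.0 or 1.0)."""
    return {name: -brier_loss(p, true_outcome) for name, p in predictions.items()}

# A contrarian predictor disagrees with the evaluators, who think the event is unlikely.
predictions = {"contrarian": 0.9, "conformist": 0.2}

early = provisional_scores(predictions, evaluator_consensus=0.2)
# The contrarian loses points early: early["contrarian"] < early["conformist"]

late = final_scores(predictions, true_outcome=1.0)
# ...but comes out ahead once the event resolves true:
# late["contrarian"] > late["conformist"]
```

Whether the final re-scoring replaces or merely supplements the provisional scores is exactly the open design question in the quote.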