I start with a very low prior of AGI doom (for the purpose of this discussion, assume I defer to consensus).
You link to a prediction market (Manifold’s “Will AI wipe out humanity before the year 2100”, currently at 13%).
Problems I see with using it for this question, in random order:
(1) It resolves in 2100, so the incentive is effectively about what other traders will believe a few years from now, not about the question itself. It is a Keynesian beauty contest. (Better than nothing.)
(2) Even taking the stated question at face value, you can collect only if it resolves NO (if it resolves YES, there is no one left to pay), so it is strategically correct to bet NO whatever your actual credence. (A toy calculation for (1) and (2) follows this list.)
(3) It is dynamically inconsistent if you think that humans have power over the outcome and that such markets influence what humans do about it. Illustrative story: “The market says P(doom)=1%, ok I can relax and not work on AI safety” ⇒ everyone says that ⇒ the market says P(doom)=99% because no AI safety work ⇒ “AAAAH SOMEONE DO SOMETHING” ⇒ market P(doom)=1% ⇒ … (A crude simulation of this loop is sketched after the next paragraph.)
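To put rough numbers on (1) and (2), here is a minimal sketch. It assumes a hypothetical real-money version of the market (Manifold uses play money) and values any payout received in the doom world at zero; all figures are illustrative.

```python
# Rough arithmetic for points (1) and (2). Assumes a hypothetical
# real-money version of the market; all numbers are illustrative.

PRICE_YES = 0.13          # market's P(doom)
PRICE_NO = 1 - PRICE_YES  # cost of a NO share paying 1 if humanity survives
YEARS = 75                # roughly now until the 2100 resolution

# (1) Holding NO to resolution returns 1/0.87 ~ 1.15x over 75 years.
annualized = (1 / PRICE_NO) ** (1 / YEARS) - 1
print(f"NO held to 2100: {annualized:.3%}/year")  # ~0.19%/year
# That is dwarfed by ordinary returns, so nobody trades for the 2100
# payout; they trade for exit value, i.e. for near-term beliefs.

# (2) Expected *utility* of each side, valuing money at 0 in the doom
# world (you cannot spend winnings if humanity is wiped out).
def expected_utility(side: str, p_doom: float) -> float:
    if side == "YES":
        # pay PRICE_YES now; the 1.0 payout arrives only in worlds
        # where it is worth nothing to you
        return p_doom * 0.0 - PRICE_YES
    # pay PRICE_NO now; the 1.0 payout arrives only in survival worlds
    return (1 - p_doom) * 1.0 - PRICE_NO

for p in (0.13, 0.5, 0.9):
    print(f"p_doom={p}: EU(YES)={expected_utility('YES', p):+.2f}, "
          f"EU(NO)={expected_utility('NO', p):+.2f}")
# EU(YES) is -0.13 at every p_doom: a YES bet can never pay off for
# you, so the price carries little information about real credences.
```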
(3) is not necessarily a flaw. Every prediction market is an action market unless the outcome is completely outside human influence. If there were a prediction market where a concerned group of billionaires could invest a huge sum on the “No” side of “Will humans solve AGI and ASI safety and ensure continued human thriving?” (or some much better operationalization of the idea), that would be great.
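To make the story in (3) concrete, here is a deliberately crude toy model of the loop; the dynamics and constants are invented purely for illustration (safety effort tracks the market price, the “true” risk falls with effort, and the market then snaps to the true risk).

```python
# Toy model of the feedback loop in (3). All dynamics and constants
# are made up for illustration only.

def step(price: float) -> float:
    effort = price                   # high P(doom) => panic => more safety work
    risk = 1.0 - 1.2 * effort        # effort pushes the real risk down
    return min(1.0, max(0.0, risk))  # the market then snaps to the real risk

price = 0.01  # "the market says P(doom)=1%, ok I can relax"
for t in range(8):
    print(f"t={t}: market P(doom) = {price:.2f}")
    price = step(price)
# The price never settles: 0.01 -> 0.99 -> 0.00 -> 1.00 -> 0.00 -> ...
# because every reading of the market changes the behavior the market
# is trying to predict.
```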
I agree it’s not a flaw in the grand scheme of things. It is a flaw when using the market as a consensus estimate to defer to in one’s reasoning.