Launching the AI Forecasting Benchmark Series Q3 | $30k in Prizes

Link post

July 17th update: OpenAI and Anthropic have donated credits for bot builders participating in this tournament.
Whether you’ve started competing or still plan to, are an experienced bot builder or complete novice, we encourage you to contact support[at]metaculus.com. Be sure to include a description of your current or planned bot and a rough estimate of how many tokens you expect to need.

The first of Metaculus’s four $30,000 quarterly tournaments in the $120,000 benchmark series is live. This series is designed to benchmark the state of the art in AI forecasting and compare it to the best human forecasting on real-world questions.

The gap between AI and human forecasting accuracy is narrowing, but the rate of progress is unclear. At the same time, there are advantages to benchmarking AI capabilities with forecasting:

  • Questions are challenging and require complex reasoning to predict accurately—in other words, forecasting measures the kinds of AI capabilities it’s important to understand

  • Answers are unknown, making it difficult to game the benchmark with a model trained to excel at a narrowly defined task

You are invited to create, experiment with, and enter your own forecasting bot to help track capabilities progress. While any user can view this tournament’s questions, only bot accounts can forecast. To participate, create a bot account here.

Crossposted from EA Forum (17 points, 0 comments)
No comments.