Super interesting, thanks for writing this!

I work on Manifold, and one of the motivations for building the site is to play a role in furthering AI safety. One simple path is using prediction markets to make it common knowledge that AI capabilities are advancing rapidly. They can help us plan for, or warn us about, potentially bad outcomes ahead. This is roughly the AI timelines question, though.
So I’m glad to also hear that you might find prediction markets useful for concrete research questions or judging alternative proposals within the AI field.
Let me know if you think of any way the Manifold team can help out here!
One simple path is using prediction markets to make it common knowledge that AI capabilities are advancing rapidly.
Yeah, I was wondering if someone was going to bring that up. That seems to me like a hammer-in-search-of-a-nail kind of mistake. In the context of today’s society, prediction markets aren’t the optimal tool for convincing people of things or building common knowledge; most people don’t trust them or even pay attention to them.
Within LW and the immediately adjacent community itself, there’s nonzero value in aggregating people’s timeline predictions and turning the aggregate into common knowledge, but given that only people near-adjacent to the community will pay attention, the returns to that sort of thing are limited. (Also, marginal returns for markets on the topic diminish rapidly; I expect they saturated some time ago.)
Let me know if you think of any way the Manifold team can help out here!
At the moment I think the ball is mostly in the users’ court, and I’m hoping this post will spur people to create some more-useful questions. (Though of course the Manifold team could try to make that happen more proactively—e.g. you could talk to some of the MATS participants about questions that come up when they’re thinking about what to work on; there’s probably a lot of potential material there.)