They’re closer. Eliezer’s is mildly decision-relevant: if I thought we’d have that sort of capability advance (independent of my own work), then I might assume other people will figure out interpretability and focus on things orthogonal/complementary to it. Bionic’s could be decision-relevant in principle, but it is not particularly informative for me right now; I already have a decent intuitive sense of how readily available funding is for alignment work, and beyond that, total spending forecasts are not particularly relevant to my strategic decision-making. (Availability of money is sometimes decision-relevant: about 18 months ago, I realized how abundant funding was for alignment research and decided to allocate more time to training people as a result.)
But neither of these currently makes me go “Oh, apparently I should do X!” or “Huh, people are really optimistic about Y, maybe I should read up on that” or “Hmm, maybe these forecasters are seeing a problem with Z that I haven’t noticed”. They don’t make me update my research-effort allocations. Maybe there are people for whom one of these markets would update effort allocations, but the questions don’t really seem optimized for providing decision-relevant information.