I basically only believe the standard “weak argument” you point at here, and that puts my probability of doom given strong AI at 10-90% (“radical uncertainty” might be more appropriate).
It would indeed seem to me that either 1) you are using the wrong base-rate or 2) you are making unreasonably weak updates given the observation that people are currently building AI, and it turns out it’s not that hard.
I’m personally also radically uncertain about the correct base rates (given that we’re now building AI), so I don’t have a strong argument for why yours is wrong. But my guess is that your argument for why yours is right doesn’t hold up.
I’m not sure how this affects my base rates. I’m already assuming an 80% chance that AGI gets built in the next decade or two (and so is Manifold, so I consider this common knowledge).
you are using the wrong base-rate
Pretend my base rate is JUST the Manifold market. That means any difference from it would have to come in the form of a valid argument with evidence that isn’t common knowledge among people voting on Manifold.
Simply asserting “you’re using the wrong base rate” without explaining what such an argument is doesn’t move the needle for me.
Fair! I’ve mostly been stating where your reasoning looks suspicious to me, but those do end up being points you already said wouldn’t convince you. (I’m also not really trying to convince you.)
Relatedly, this question seems especially bad for prediction markets (which makes me weight the outcome only in an extremely weak sense). First, it runs over an extremely long time span, so there’s little incentive to correct mispricing. Second, and most importantly, it can only ever resolve to one side of the issue, so absent other considerations you should assume that it is heavily skewed to that side. (If doom occurs, no one is left to collect on a YES resolution.)
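The resolution asymmetry can be made concrete with a toy expected-value calculation: a doom market only ever pays out in worlds where traders survive to collect, so a YES share is worth nothing to a self-interested trader no matter what the true probability is. All numbers below are illustrative assumptions, not market data:

```python
# Toy model: expected payout of one share of a "doom by year X" market.
# Key assumption: if doom occurs, the market never pays anyone, so both
# sides collect 0 in that world; if doom doesn't occur, it resolves NO.

def expected_payout(side: str, p_doom: float) -> float:
    """Expected payout of one share, before subtracting its price."""
    if side == "YES":
        # Doom world: no one collects -> 0. No-doom world: resolves NO -> 0.
        return p_doom * 0.0 + (1 - p_doom) * 0.0
    # "NO" pays 1 only in the worlds where traders survive to resolution.
    return p_doom * 0.0 + (1 - p_doom) * 1.0

for p in (0.1, 0.5, 0.9):
    print(f"p_doom={p}: YES worth {expected_payout('YES', p):.2f}, "
          f"NO worth {expected_payout('NO', p):.2f}")
```

Since the YES side is worth zero at every probability, pure profit motive pushes the price toward NO regardless of what traders actually believe, which is the sense in which the market is "skewed to that side".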
it can only ever resolve to one side of the issue, so absent other considerations you should assume that it is heavily skewed to that side.
Prediction markets don’t give a noticeably different answer from expert surveys, so I doubt the bias is that bad. Manifold isn’t a real-money market anyway, so I suspect most people are answering in good faith.
It eliminates all the aspects of prediction markets that theoretically make them superior to other forms of knowledge aggregation (e.g. surveys). I agree that this is likely just acting as a (weirdly weighted) poll in this case, so the biased resolution likely doesn’t matter so much (but that also means the market itself tells you much less than a “true” prediction market would).
but that also means the market itself tells you much less than a “true” prediction market would
This doesn’t exempt you from the fact that if your prediction is wildly different from what experts predict, you should be able to explain your beliefs in a few words.
I mostly try to look around to who’s saying what and why and find that the people I consider most thoughtful tend to be more concerned and take “the weak argument” or variations thereof very seriously (as do I). It seems like the “expert consensus” here (as in the poll) is best seen as some sort of evidence rather than a base rate, and one can argue how much to update on it.
That said, there are a few people who seem less concerned overall about near-term doom and whom I take seriously as thinkers on the topic, Carl Shulman being perhaps the most notable.
I mostly try to look around to who’s saying what and why and find that the people I consider most thoughtful tend to be more concerned and take “the weak argument” or variations thereof very seriously
We apparently have different tastes in “people I consider thoughtful”. “Here are some people I like and their opinions” is an argument unlikely to convince me (a stranger).
Who do you consider thoughtful on this issue?
It’s more like “here are some people who seem to have good opinions”, and that would certainly move the needle for me.
No one. I trust prediction markets far more than any single human being.
In general, yes, but see the above (i.e., we don’t have a properly functioning prediction market on this issue).
Metaculus did a study comparing prediction markets with a small number of participants to those with a large number, and found that you get most of the benefit at relatively small numbers (around 10). So if you randomly sample 10 AI experts and survey their opinions, you’re doing almost as well as a full prediction market. The fact that multiple AI markets (Metaculus, Manifold) and surveys all converge on the same 5-10% suggests that none of these methodologies is wildly flawed.
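As a rough sanity check on the diminishing-returns claim (not a reproduction of the Metaculus study), one can simulate averaging n noisy forecasts of a fixed probability; the true probability and noise level below are made-up assumptions:

```python
# Monte Carlo sketch: each forecaster reports the true probability plus
# Gaussian noise (clipped to [0, 1]); we average n forecasts and measure
# the mean absolute error of that average. TRUE_P and NOISE are
# illustrative assumptions, not fitted to any real data.
import random
import statistics

random.seed(0)
TRUE_P = 0.07   # assumed "true" probability being forecast
NOISE = 0.05    # assumed per-forecaster noise (std dev)
TRIALS = 2000

def mean_abs_error(n: int) -> float:
    """Average |mean-of-n-forecasts - TRUE_P| over many simulated panels."""
    errs = []
    for _ in range(TRIALS):
        forecasts = [min(max(random.gauss(TRUE_P, NOISE), 0.0), 1.0)
                     for _ in range(n)]
        errs.append(abs(statistics.mean(forecasts) - TRUE_P))
    return statistics.mean(errs)

for n in (1, 10, 100):
    print(f"n={n:>3}: mean abs error ~ {mean_abs_error(n):.4f}")
```

Under these assumptions the error shrinks roughly like 1/sqrt(n), so most of the absolute improvement over a single forecaster is already captured by about 10 participants, which is consistent with the study’s headline claim.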
I mean it only suggests that they’re highly correlated. I agree that it seems likely they represent the views of the average “AI expert” in this case. (I should take a look to check who was actually sampled.)
My main point regarding this is that we probably shouldn’t be paying this particular prediction market too much attention in place of e.g. the survey you mention. I probably also wouldn’t give the survey too much weight compared to opinions of particularly thoughtful people, but I agree that this needs to be argued.