Probability of existential catastrophe before 2032 assuming AGI arrives in that period and Harris wins[12] = 30%.
Probability of existential catastrophe before 2032 assuming AGI arrives in that period and Trump wins[13] = 35%.
A lot of your AI-risk case for supporting Harris seems to hinge on these numbers, which I find very shaky. How wide are your confidence intervals here? My own guesses are much fuzzier. By your argument, if my intuition were 0.2 vs. 0.5, that would be an overwhelming case for Harris, but I'm unfamiliar enough with the topic that it could easily be the reverse.
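To make the sensitivity concrete (this is my own illustration, not anything from your post): under your numbers the conditional gap is 5 percentage points, under my hypothetical 0.2-vs-0.5 intuition it is 30 points, six times larger, and if the intuition flipped to 0.5-vs-0.2 the sign of the gap would flip with it. A minimal sketch of that arithmetic, where the AGI-arrival probability `p_agi` is a placeholder I made up, not a number from the post:

```python
# Illustrative sketch only. The conditional estimates below are the post's
# stated numbers plus my hypothetical alternative; p_agi is an assumed
# P(AGI arrives before 2032), not something the post provides.

def risk_gap(p_cat_harris, p_cat_trump, p_agi=0.5):
    """Difference in unconditional catastrophe risk implied by two
    conditional-on-AGI estimates, scaled by an assumed arrival probability."""
    return p_agi * (p_cat_trump - p_cat_harris)

# Post's numbers: 30% vs 35% conditional on AGI -> 5-point conditional gap.
print(risk_gap(0.30, 0.35))   # 0.025 with the assumed p_agi = 0.5

# My hypothetical fuzzier intuition: 20% vs 50% -> 30-point gap, six times larger.
print(risk_gap(0.20, 0.50))   # 0.15

# ...and the reversed intuition flips the sign entirely.
print(risk_gap(0.50, 0.20))   # -0.15
```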
I would greatly appreciate more detail on how you reached your numbers (and, if they're vibes, your reasons for trusting those vibes). Alternatively, I feel like I should somehow discount the strength of the AI-risk reason by how likely I think these numbers are to more or less hold, but I don't know a principled way to do that.
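For what it's worth, the closest thing to a discount I can imagine (purely my own sketch, not something your post suggests) is to put a distribution over each conditional estimate reflecting how much I trust it and check how often the sign of the gap survives. Every distribution and parameter below is made up for illustration:

```python
import random

# Purely illustrative: Beta distributions chosen arbitrarily to represent my
# uncertainty about the 30% / 35% conditional estimates. None of these
# parameters come from the post.
def sample_gap():
    p_harris = random.betavariate(3, 7)      # loosely centered near 0.30
    p_trump = random.betavariate(3.5, 6.5)   # loosely centered near 0.35
    return p_trump - p_harris

N = 100_000
gaps = [sample_gap() for _ in range(N)]
mean_gap = sum(gaps) / N
sign_agreement = sum(g > 0 for g in gaps) / N

# A rough "discount" on the AI-risk reason: if the gap's sign is only slightly
# more likely than a coin flip under my uncertainty, the reason carries little weight.
print(f"mean gap: {mean_gap:.3f}, P(Trump-conditional risk higher): {sign_agreement:.2f}")
```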