I'm curious whether @Zvi has thoughts on how to put this stuff in terms Tyler Cowen would understand. (I'm not sure what Cowen wants; I'm personally kind of skeptical that people need things in special formats, rather than just generally going off of incredulity. But it occurs to me that Zvi's recent Twitter poll of steps along the way to AI doom could be converted into something like a Guesstimate model; a rough sketch of what I mean is below.)
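For concreteness, here's a minimal sketch (in Python rather than Guesstimate itself) of the kind of model I have in mind: each step toward doom gets a conditional probability with some uncertainty around it, and you propagate that through the chain by Monte Carlo, which is roughly what Guesstimate does under the hood. Every step name and every number below is made up purely for illustration; you'd substitute the poll's actual steps and credences.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Hypothetical steps toward AI doom, each with a guessed range for its
# conditional probability given the previous steps. All names and
# numbers here are illustrative placeholders, not anyone's estimates.
steps = {
    "AGI developed this century": (0.4, 0.9),
    "AGI is agentic enough to seek power": (0.3, 0.8),
    "Alignment is not solved in time": (0.2, 0.7),
    "Misaligned AGI actually prevails": (0.1, 0.6),
}

# Sample each step's probability independently, then multiply down the
# chain, mirroring how a Guesstimate model propagates uncertainty.
samples = np.ones(n)
for name, (low, high) in steps.items():
    samples *= rng.uniform(low, high, n)

print(f"median P(doom): {np.median(samples):.3f}")
lo, hi = np.percentile(samples, [5, 95])
print(f"90% interval:   [{lo:.3f}, {hi:.3f}]")
```

The point isn't the bottom-line number; it's that the format forces whoever disagrees to say which step's probability range they'd change, which seems closer to the "which premise do you deny" framing below.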
Thanks, that's getting pretty close to what I'm asking for. Since posting the above, I've also found Katja Grace's "Argument for AI x-risk from competent malign agents" and Joseph Carlsmith's "Is Power-Seeking AI an Existential Risk?", both of which seem like the kind of thing you could point an analytic philosopher at and ask them which premise they deny.
Any idea if something similar is being done to cater to economists (or other social scientists)?