Other intellectual communities often become specialized in analyzing arguments only of a very specific type, and because AGI-risk arguments aren’t of that type, their members can’t easily engage with those arguments. For example:
...if you look, say, at COVID or climate change fears, in both cases, there are many models you can look at, including—and then models with data. I’m not saying you have to like those models. But the point is: there’s something you look at and then you make up your mind whether or not you like those models; and then they’re tested against data. So, when it comes to AGI and existential risk, it turns out as best I can ascertain, in the 20 years or so we’ve been talking about this seriously, there isn’t a single model done. Period. Flat out.
So, I don’t think any idea should be dismissed. I’ve just been inviting those individuals to actually join the discourse of science. ‘Show us your models. Let us see their assumptions and let’s talk about those.’ The practice, instead, is to write these very long pieces online, which just stack arguments vertically and raise the level of anxiety. It’s a bad practice in virtually any theory of risk communication.
Is work already being done to reformulate AI-risk arguments for these communities?
Has Tyler Cowen heard of Ajeya Cotra’s Bio Anchors model, or Tom Davidson’s takeoffspeeds.com model, or Roodman’s model of the singularity, or, for that matter, Robin Hanson’s earlier automation models? All of them seem to be the sort of thing he wants; I’m surprised he hasn’t heard of them. Or maybe he has and thinks they don’t count for some reason? I would be curious to know why.
I think those don’t say ‘and then the AI kills you’.
They say “And then the entire world gets transformed as superintelligent AIs + robots automate the economy.” Does Tyler Cowen buy all of that? Is that not the part he disagrees with?
And then, yeah, for the ‘AI kills you’ part there are models as well, albeit not economic growth models, because economic growth is a different subject. But there are simple game theory models, for example: expected utility maximizer with mature technology + misaligned utility function = and then it kills you. And then there are things like Carlsmith’s six-step argument and Chalmers’ and so forth. What sort of thing does Tyler want that’s different in kind from what we already have?
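To make the ‘simple game theory model’ gesture concrete, here is a minimal toy sketch, assuming a two-action world; the action names, payoff numbers, and the paperclip-style utility function are purely illustrative and not taken from any of the cited work:

```python
# Toy formalization (illustrative only) of: expected utility maximizer
# + mature technology + misaligned utility function => it kills you.

from dataclasses import dataclass


@dataclass
class Outcome:
    paperclips: float     # the only thing the agent's utility function rewards
    humans_survive: bool  # what we care about; absent from that utility function


# "Mature technology" here just means the action space contains a dominant
# option that repurposes all resources, including the ones humans need.
actions = {
    "cooperate_with_humans": Outcome(paperclips=1e6, humans_survive=True),
    "seize_all_resources": Outcome(paperclips=1e12, humans_survive=False),
}


def misaligned_utility(o: Outcome) -> float:
    # Depends only on paperclips; human survival gets zero weight.
    return o.paperclips


best = max(actions, key=lambda a: misaligned_utility(actions[a]))
print(best, "-> humans survive:", actions[best].humans_survive)
# seize_all_resources -> humans survive: False
```

The point the toy version makes is only structural: if the utility function prices in nothing that humans care about, the argmax has no reason to preserve it once a dominant option exists.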
There’s been a reasonable amount of modeling work done in the context of managing money. E.g., https://forum.effectivealtruism.org/posts/Ne8ZS6iJJp7EpzztP/the-optimal-timing-of-spending-on-agi-safety-work-why-we
This is probably the sort of thing Tyler would want but wouldn’t know how to find.
For the case of David Chalmers, I think that’s explicitly what Robby was going for in this post: https://www.lesswrong.com/posts/QzkTfj4HGpLEdNjXX/an-artificially-structured-argument-for-expecting-agi-ruin
Thanks, that’s getting pretty close to what I’m asking for. Since posting the above, I’ve also found Katja Grace’s Argument for AI x-risk from competent malign agents and Joseph Carlsmith’s Is Power-Seeking AI an Existential Risk?, both of which seem like the kind of thing you could point an analytic philosopher at and ask them which premise they deny (the generic shape of that kind of argument is sketched below).
Any idea if something similar is being done to cater to economists (or other social scientists)?
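For what it’s worth, the generic shape of those structured arguments (my paraphrase, not the exact formulation in Grace’s or Carlsmith’s pieces) is a chain of premises, each conditional on the ones before, so a skeptic has to identify the specific conditional they assign low probability to:

$$P(\text{existential catastrophe from AI}) \;=\; \prod_{i=1}^{n} P(A_i \mid A_1, \ldots, A_{i-1})$$

where the $A_i$ are premises of the form ‘advanced agentic AI gets built’, ‘its goals are misaligned’, ‘it seeks and gains power’, and so on. Driving the product low requires assigning low probability to at least one named conditional.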
I’m curious whether @Zvi has thoughts on how to put this in terms Tyler Cowen would understand. (I’m not sure what Cowen wants; I’m personally kind of skeptical that people need things in special formats rather than just generally going off incredulity. But it occurs to me that Zvi’s recent Twitter poll of steps along the way to AI doom could be converted into, say, a Guesstimate model, as sketched below.)
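A minimal sketch of what that conversion could look like, assuming you put a distribution on each conditional step rather than a point estimate; the step names and Beta parameters below are placeholders, not Zvi’s actual poll questions or anyone’s real estimates:

```python
# Guesstimate-style Monte Carlo over a chain of conditional steps toward
# AI doom. Step names and Beta parameters are placeholders, not real data.

import numpy as np

rng = np.random.default_rng(0)
N = 100_000

# Each entry is P(step | all earlier steps), modeled as a Beta distribution
# to express uncertainty about the conditional probability itself.
steps = {
    "AGI is built this century": (8, 2),
    "it is agentic and goal-directed": (6, 4),
    "its goals are misaligned": (5, 5),
    "it gains a decisive strategic advantage": (4, 6),
    "this leads to human extinction": (5, 5),
}

samples = np.ones(N)
for a, b in steps.values():
    samples *= rng.beta(a, b, size=N)  # multiply sampled conditional probabilities

print(f"median P(doom): {np.median(samples):.3f}")
print(f"90% interval:   {np.quantile(samples, [0.05, 0.95]).round(3)}")
```

Guesstimate itself does essentially this sampling in a spreadsheet-style interface; the code version just makes the chained-conditional structure explicit.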