I laughed at your first line, so thank you for that lol. I would love to hear more about why you prefer to collect models over arguments, because I don’t think I intuitively get the reasons why this would be better—to be fair, I probably haven’t spent enough time thinking about it. Any references you like that argue for this would be super helpful!
I agree that many (even simple) arguments can be split up into many pieces—this is a good point. I would still say, however, that some arguments are more complicated than others (i.e., they have more premises, each with lower probability), and that the more complicated ones should receive lower credences. I assume you will agree with this, and maybe you can let me know precisely the way to get the AGI doom argument off the ground with the least amount of contingent propositions—that would be really helpful.
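To put rough numbers on what I mean (these are toy figures I am making up, and they assume independence purely for the sake of illustration): the credence an argument can deliver is bounded by the product of its premise probabilities, so adding shakier premises drags it down quickly.

```python
# Toy sketch: under an independence assumption, the credence an argument
# can deliver is at most the product of its premise probabilities, so
# "more premises, each less certain" lowers that bound quickly.
# (All numbers here are made up for illustration.)
from math import prod

simple_argument = [0.95, 0.9, 0.9]                        # three fairly solid premises
complicated_argument = [0.9, 0.85, 0.8, 0.8, 0.75, 0.7]   # six shakier premises

print(round(prod(simple_argument), 2))        # ~0.77
print(round(prod(complicated_argument), 2))   # ~0.26
```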
Totally agree—it’s pretty hard to determine what will be necessary, and this could lead to argument sloppiness. Though I don’t think we should throw our hands in the air, say the argument is sloppy, and ignore it (for the record, I am not saying that you do this or plan to); I only mean that it should count for something, and I leave it up to the reader to figure out what.
One separate thing I would say, though, is that the asterisk by that item indicated (as was said at the beginning of the section) that it was not necessary for the proposition that AI is an existential threat—it only helps the argument. This is true for many things on that list.
Yea—you’re totally right. They’re not independent propositions, which makes this pretty complicated (I did briefly note that they would have to be independent, and I thought it was clear enough that they weren’t, but maybe not). I agree it’s really difficult to estimate probabilities on this basis, and I recommend big error bars and less certainty!
Thanks for the helpful feedback, though!
“the way to get the AGI doom argument off the ground with the least amount of contingent propositions”
Well, if you’re really shooting for the least, you’ve already found the structure: just frame the argument for non-doom in terms of a lot of conjunctions (you have to build AI that we understand how to give inputs to, and also you have to solve various coordination and politics problems, and also you have to be confident that it won’t have bugs, and also you have to solve various philosophical problems about moral progress, etc.), and make lots of independent arguments for why non-doom won’t pan out. This body of arguments will touch on lots of assumptions, but very few will actually be load-bearing. (Or if you actually wrote things in Aristotelian logic, it would turn out that there were few contingent propositions, but some of those would be really long disjunctions.)
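Just to make that framing trick concrete (the requirements and numbers below are invented purely for illustration): whichever outcome you write as the big conjunction ends up looking unlikely, which is the symmetry problem in miniature.

```python
# Toy numbers, invented purely for illustration: the conjunctive framing
# makes whichever outcome you decompose look unlikely, so the framing by
# itself is symmetric and settles nothing.
from math import prod

# Frame non-doom as a conjunction of requirements.
non_doom_requirements = [0.8, 0.75, 0.7, 0.8, 0.7]
p_non_doom = prod(non_doom_requirements)
print(round(p_non_doom, 2), round(1 - p_non_doom, 2))   # non-doom ~0.24, doom ~0.76

# Now frame doom as a conjunction of its own requirements instead.
doom_requirements = [0.8, 0.75, 0.7, 0.8, 0.7]
p_doom = prod(doom_requirements)
print(round(p_doom, 2), round(1 - p_doom, 2))            # doom ~0.24, non-doom ~0.76
```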
By actually making arguments you sidestep flaw #1 (symmetry), and somewhat lessen #3 (difficulty of understanding conditional probabilities). But #2 remains in full force, which is a big reason why this doesn’t lead to One Objectively Best Argument Everyone Agrees On.
(Other big reasons include the fact that arguing from less information isn’t always good—when you know more contingent facts about the world you can make better predictions—and that counting the number of propositions isn’t always a good measure of the amount of information assumed, which often leaves people disagreeing about which premises are “actually simpler”.)
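A quick toy sketch of that last point (numbers made up): the same content can be packaged as one premise or as five, so the raw premise count by itself says little about how much is actually being assumed.

```python
# Toy sketch with made-up numbers: identical content packaged as one
# premise or as five premises, so premise count alone says little about
# how much information an argument assumes.
from math import prod

five_premises = [0.9, 0.9, 0.9, 0.9, 0.9]
one_big_premise = [prod(five_premises)]   # the five conjuncts restated as a single sentence

print(round(prod(five_premises), 2))    # ~0.59
print(round(prod(one_big_premise), 2))  # ~0.59 (same content, different count)
```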