“Long-winded arguments tend to fail” is a daring section title in your 5,000-word essay :P
In general, I think the genre “collect all the arguments I can find on only one side of a controversial topic” is bound to lead to lower-quality inclusions, and that section is probably among them. I prefer the genre “collect the best models I can find of a controversial topic and try to weigh them.”
Why is “The argument for AI risk has a lot of necessary pieces, therefore it’s unlikely” a bad argument?
Any real-world prediction can be split into an arbitrary number of conjunctive pieces.
“It will rain tomorrow” is the prediction “it will rain over my house, and the house next to that, and the house next to that, and the house across the street, and the house next to it, and the house next to that, etc.” Surely any conjunction with so many pieces is doomed to failure, right? But wait, the prediction “it won’t rain tomorrow” has the same problem.
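A toy numerical sketch of this symmetry point (the numbers are invented purely for illustration): if you price each house as an independent event and multiply, both “rain everywhere” and “rain nowhere” come out astronomically unlikely, even though in this toy model they exhaust the possibilities.

```python
# Made-up numbers: suppose rain tomorrow is a 50/50 event, and if it rains
# at all it rains on the whole block of 100 houses.
p_rain = 0.5
n_houses = 100

# Naive "long conjunction" reading: treat each house as an independent event.
p_rain_everywhere_naive = p_rain ** n_houses        # ~8e-31: "surely doomed"
p_dry_everywhere_naive = (1 - p_rain) ** n_houses   # ~8e-31: also "surely doomed"

# These two predictions can't both be astronomically unlikely, since one of
# them has to happen. The resolution: the conjuncts are almost perfectly
# correlated, so the right answers are just 0.5 and 0.5.
print(p_rain_everywhere_naive, p_dry_everywhere_naive)
print(p_rain, 1 - p_rain)
```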
Could you split the claim that future AI will do good things and not bad things into lots of conjunctions?
It’s easy to get sloppy about whether pieces are really necessary or not.
E.g. “The current neural net paradigm continues” isn’t strictly necessary for future AI doing bad things rather than good things. But since we’re just sort of making a long list, there’s the temptation to just slap it onto the list and not worry about the leakage of probability-mass when it turns out not to be necessary. But if each step leaks a little, and you’re making a list of a large number of steps… well, point is it’s sometimes easy to fool yourself with this argument structure.
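A back-of-the-envelope illustration of the leak (every probability here is made up, and the step counts are hypothetical): if half of a 16-item list turns out not to be load-bearing, multiplying through all 16 items anyway substantially understates the conclusion’s probability.

```python
# Hypothetical numbers for illustration only.
p_step = 0.8             # suppose each listed step is given 80%
load_bearing_steps = 8   # steps the conclusion genuinely requires
helpful_steps = 8        # steps that merely help (e.g. "the current paradigm continues")

# Treating every listed step as necessary and multiplying:
naive_product = p_step ** (load_bearing_steps + helpful_steps)   # ~0.028

# If the conclusion only requires the load-bearing steps, each extra factor
# "leaks" probability mass that should not have been removed:
product_over_necessary = p_step ** load_bearing_steps            # ~0.168

print(f"naive product over all 16 steps:      {naive_product:.3f}")
print(f"product over the 8 load-bearing ones: {product_over_necessary:.3f}")
```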
Figuring out conditional probabilities is hard.
At the start of the list, the probabilities are, while not easy, at least straightforward to understand. E.g. your item 1 is “P(we have enough training data, somehow)”. This is hard to predict, but at least it’s clear what you mean. But every step of the list has to be conditional on all the previous steps.
(This was the flaw with the argument that “it will rain tomorrow” has low probability. Once it’s raining on the first 99 houses, adding the 100th house to the list doesn’t move the probabilities much.)
So by the time you’re at item 16, you’re not just asking P(item 16), you’re asking P(item 16, given that 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, and 15 are true), which is often an unintuitive slice of reality it’s hard to estimate probabilities about.
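A quick sketch of why the conditioning matters so much (again with made-up numbers): by the chain rule, P(1 and 2 and … and 16) = P(1) · P(2 | 1) · … · P(16 | 1–15), and in the rain example nearly all of the uncertainty lives in the first factor.

```python
# Chain rule: P(1 and 2 and ... and n) = P(1) * P(2 | 1) * ... * P(n | 1..n-1).
# Made-up numbers for the rain example: once it's raining on the neighbouring
# houses, each additional house is nearly certain.
p_first_house = 0.5
p_next_given_previous = 0.999   # conditionally, almost free

p_all_100_houses = p_first_house
for _ in range(99):
    p_all_100_houses *= p_next_given_previous

print(round(p_all_100_houses, 3))   # ~0.45, nothing like 0.5 ** 100
```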
I laughed at your first line, so thank you for that lol. I would love to hear more about why you prefer to collect models over arguments, because I don’t think I intuitively get why this would be better (to be fair, I probably haven’t spent enough time thinking about it). Any references you like on arguments for this would be super helpful!
I agree that many (even simple) arguments can be split up into many pieces; this is a good point. I would, however, say that some arguments are still more complicated than others (i.e., more premises, with lower probabilities), and that the more complicated ones should receive lower credences. I assume that you will agree with this, and maybe you can let me know precisely the way to get the AGI doom argument off the ground with the least amount of contingent propositions; that would be really helpful.
Totally agree: it’s pretty hard to determine what will be necessary, and this could lead to argument sloppiness. Though I don’t think we should throw our hands in the air, say the argument is sloppy, and ignore it (I am not saying, for the record, that you are or plan to do this); I only mean to say that it should count for something, and I leave it up to the reader to figure out what.
One separate thing I would say, though, is that the asterisk next to that item indicated (as explained at the beginning of the section) that it was not necessary for the proposition that AI is an existential threat; it only helps the argument. This is true for many things on that list.
Yeah, you’re totally right. They’re not independent propositions, which makes it pretty complicated (I did briefly note that they would have to be independent, and thought it was clear enough that they weren’t, but maybe not). I agree it’s really difficult to estimate probabilities on this basis, and I recommend big error bars and less certainty!

Thanks for the helpful feedback, though!
“the way to get the AGI doom argument off the ground with the least amount of contingent propositions”
Well, if you’re really shooting for the least, you’ve already found the structure: just frame the argument for non-doom in terms of a lot of conjunctions (you have to build AI that we understand how to give inputs to, and also you have to solve various coordination and politics problems, and also you have to be confident that it won’t have bugs, and also you have to solve various philosophical problems about moral progress, etc.), and make lots of independent arguments for why non-doom won’t pan out. This body of arguments will touch on lots of assumptions, but very few will actually be load-bearing. (Or if you actually wrote things in Aristotelian logic, it would turn out that there were few contingent propositions, but some of those would be really long disjunctions.)
By actually making arguments you sidestep flaw #1 (symmetry), and somewhat lessen #3 (difficulty of understanding conditional probabilities). But #2 remains in full force, which is a big reason why this doesn’t lead to One Objectively Best Argument Everyone Agrees On.
(Other big reasons include the fact that arguing from less information isn’t always good, since when you know more contingent facts about the world you can make better predictions, and that counting the number of propositions isn’t always a good measure of the amount of information assumed, which often leaves people disagreeing about which premises are “actually simpler.”)