Argument length is substantially a function of shared premises
A stated argument can be short if it’s communicated between two individuals who have common knowledge of each other’s premises... as opposed to the “Platonic” form, where every load-bearing component is made explicit and there is nothing extraneous.
But that’s a communication issue... not a truth issue. A conjunctive argument doesn’t become likelier because you don’t state some of the premises. The length of the stated argument has little to do with its likelihood.
How true an argument is, how easily it persuades another person, and how easy it is to understand have little to do with each other.
The likelihood of an ideal argument depends on the likelihood of its load-bearing premises... both how many there are, and their individual likelihoods.
Public communication, where you have no foreknowledge of shared premises, needs to keep the actual form closer to the Platonic form.
Public communication is obviously the most important kind when it comes to avoiding AI doom.
This is important, because the longer your argument, the more details that have to be true, and the more likely that you have made a mistake
Correct. The fact that you don’t have to explicitly communicate every step of an argument to a known recipient doesn’t stop the overall probability of a conjunctive argument from depending on the number, and individual likelihood, of the steps of the Platonic version, where everything necessary is stated and nothing unnecessary is.
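(A toy illustration of that dependence, assuming purely for the sake of arithmetic that the load-bearing premises are independent, which real arguments need not be: ten premises each held at 90% confidence give a conjunction of 0.9^10 ≈ 0.35, while the same ten premises at 99% each give 0.99^10 ≈ 0.90. Both the count of load-bearing steps and their individual likelihoods drive the result; these numbers are placeholders, not estimates for any actual doom argument.)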
Argument strength is not an inverse function with respect to argument length, because not every additional “piece” of an argument is a logical conjunction which, if false, renders the entire argument false.
Correct. Stated arguments can contain elements that are explanatory, or otherwise redundant for an ideal recipient.
Nonetheless, there is a Platonic form that does not contain redundant elements or unstated load-bearing steps.
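(Continuing the toy arithmetic above, still as an illustrative simplification: explanatory or redundant elements lengthen the stated argument but contribute no factor to the product, while an unstated load-bearing premise still contributes its factor whether or not it is spoken aloud. Leaving three of the ten 0.9 premises unstated keeps the Platonic probability at 0.9^10 ≈ 0.35; it does not rise to 0.9^7 ≈ 0.48.)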
Anyways, the trivial argument that AI doom is likely [...]s that it’s not going to have values that are friendly to humans
That’s not trivial. There’s no proof that there is such a coherent entity as “human values”, no proof that AIs will be value-driven agents, etc., etc. You skipped over 99% of the Platonic argument there.
This is a classic example of failing to communicate with people outside the bubble. Your assumptions about values and agency just aren’t shared by the general public or political leaders.
But that’s a communication issue... not a truth issue.
Yes, and Logan is claiming that arguments which cannot be communicated to him in no more than two sentences suffer from a conjunctive complexity burden that renders them “weak”.
That’s not trivial. There’s no proof that there is such a coherent entity as “human values”, no proof that AIs will be value-driven agents, etc., etc. You skipped over 99% of the Platonic argument there.
Many possible objections here, but of course spelling everything out would violate Logan’s request for a short argument. Needless to say, that request does not have anything to do with effectively tracking reality, where there is no “platonic” argument for any non-trivial claim describable in only two sentences, and yet things continue to be true in the world anyway, so reductio ad absurdum: there are no valid or useful arguments which can be made for any interesting claims. Let’s all go home now!
Yes, and Logan is claiming that arguments which cannot be communicated to him in no more than two sentences suffer from a conjunctive complexity burden that renders them “weak”.
@Logan Zoellner being wrong doesn’t make anyone else right. If the actual argument is conjunctive and complex, then all the component claims need to be high probability. That is not the case. So Logan is right for not quite the right reasons—it’s not length alone.
That’s not trivial. There’s no proof that there is such a coherent entity as “human values”, no proof that AIs will be value-driven agents, etc., etc. You skipped over 99% of the Platonic argument there.
Many possible objections here, but of course spelling everything out would violate Logan’s request for a short argument.
And it wouldn’t help anyway. I have read the Sequences, and there is nothing resembling a proof, or even a strong argument, for the claim about coherent human values. Ditto the standard claims about utility functions, agency, etc. Reading the Sequences would allow him to understand the LessWrong collective, but should not persuade him.
Whereas the same amount of time could, more reasonably, be spent learning how AI actually works.
Needless to say, that request does not have anything to do with effectively tracking reality,
Tracking reality is a thing you have to put effort into, not something you get for free by labelling yourself a rationalist.
The original Sequences did not track reality, because they are not evidence-based—they are not derived from academic study or industry experience. Yudkowsky is proud that they are “derived from the empty string”—his way of saying that they are armchair guesswork.
His armchair guesses are based on Bayes, von Neumann rationality, utility maximisation, brute-force search, etc., which isn’t the only way to think about AI, or particularly relevant to real-world AI. But it does explain many doom arguments, since they are based on the same model—the kinds of argument that immediately start talking about values and agency. And of course that’s a problem in itself: the short doomer arguments use concepts from the Bayes/von Neumann era in a “sleepwalking” way, out of sheer habit, even though the basis is doubtful. Current examples of AIs aren’t agents, and it’s doubtful whether they have values. It’s not irrational to base your thinking on real-world examples rather than speculation.
In addition, they haven’t been updated in the light of new developments, something else you have to do to track reality. Tracking reality has a cost—you have to change your mind and admit you are wrong. If you don’t experience the discomfort of doing that, you are not tracking reality.
People other than Yudkowsky have written about AI safety from the perspective of how real-world AIs work, but adding that in just makes the overall mass of information larger and more confusing.
where there is no “platonic” argument for any non-trivial claim describable in only two sentences, and yet things continue to be true
PS.
@Logan Zoellner
That’s self-evidently true. So why does it have five disagreement downvotes?
You are confusing truth and justification.
@Tarnish
You need to say something about motivation.
@avturchin
Same problem. Yes, there are lots of means. That’s not the weak spot. The weak spot is motivation.
@Odd anon
Same problem. You’ve done nothing to fill the gap between “ASI will happen” and “ASI will kill us all”.
In general, I agree with you: we can’t prove with certainty that AI will kill everyone. We can only establish a significant probability (which we also can’t measure precisely).
My point is that some AI catastrophe scenarios don’t require AI motivation. For example:
- A human could use narrow AI to develop a biological virus
- An Earth-scale singleton AI could suffer from a catastrophic error
- An AI arms race could lead to a world war