But that’s a communication issue... not a truth issue.
Yes, and Logan is claiming that arguments which cannot be communicated to him in no more than two sentences suffer from a conjunctive complexity burden that renders them “weak”.
That’s not trivial. There’s no proof that there is such a coherent entity as “human values”, there is no proof that AIs will be value-driven agents, etc., etc. You skipped over 99% of the Platonic argument there.
Many possible objections here, but of course spelling everything out would violate Logan’s request for a short argument. Needless to say, that request does not have anything to do with effectively tracking reality, where there is no “platonic” argument for any non-trivial claim describable in only two sentences, and yet things continue to be true in the world anyway. So, reductio ad absurdum: there are no valid or useful arguments which can be made for any interesting claims. Let’s all go home now!
Yes, and Logan is claiming that arguments which cannot be communicated to him in no more than two sentences suffer from a conjunctive complexity burden that renders them “weak”.
@Logan Zoellner being wrong doesn’t make anyone else right. If the actual argument is conjunctive and complex, then all of the component claims need to be high probability, and that is not the case. So Logan is right, though not quite for the right reasons: it’s not length alone that makes the argument weak.
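To make the conjunction point concrete, here is a minimal sketch in Python; the step count and the probabilities are illustrative assumptions, not figures from this thread. The joint probability of a multi-step argument falls off quickly unless every step is near-certain.

```python
# Illustrative only: if an argument has n independent steps, each believed
# with probability p, the whole conjunction holds with probability p**n.
def joint_probability(p: float, n: int) -> float:
    return p ** n

print(round(joint_probability(0.9, 10), 2))   # 0.35 - ten "90% sure" steps give ~35%
print(round(joint_probability(0.99, 10), 2))  # 0.9  - only near-certain steps keep it high
```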
That’s not trivial. There’s no proof that there is such a coherent entity as “human values”, there is no proof that AIs will be value-driven agents, etc., etc. You skipped over 99% of the Platonic argument there.
Many possible objections here, but of course spelling everything out would violate Logan’s request for a short argument.
And it wouldn’t help anyway. I have read the Sequences, and there is nothing resembling a proof, or even a strong argument, for the claim about coherent human values. Ditto the standard claims about utility functions, agency, etc. Reading the Sequences would allow him to understand the LessWrong collective, but it should not persuade him.
Whereas the same amount of time could, more reasonably, be spent learning how AI actually works.
Needless to say, that request does not have anything to do with effectively tracking reality,
Tracking reality is a thing you have to put effort into, not something you get for free by labelling yourself a rationalist.
The original Sequences did not track reality, because they are not evidence-based: they are not derived from academic study or industry experience. Yudkowsky is proud that they are “derived from the empty string”, his way of saying that they are armchair guesswork.
His armchair guesses are based on Bayes, von Neumann rationality, utility maximisation, brute-force search, etc., which is not the only way to think about AI, nor particularly relevant to real-world AI. But it does explain many doom arguments, since they are based on the same model: the kinds of argument that immediately start talking about values and agency. And that is a problem in itself. The short doomer arguments use concepts from the Bayes/von Neumann era in a “sleepwalking” way, out of sheer habit, even though the basis is doubtful. Current examples of AIs aren’t agents, and it’s doubtful whether they have values. It’s not irrational to base your thinking on real-world examples rather than speculation.
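For readers who want the referenced model pinned down, here is a toy sketch of the Bayes/von Neumann-style picture being criticised: an agent with explicit probabilities and an explicit utility function, choosing actions by brute-force search over expected utilities. All of the action names, probabilities, and utilities below are made up purely for illustration.

```python
# Toy sketch of the classic expected-utility-maximiser model (illustrative only):
# explicit beliefs, an explicit utility function, and brute-force search over actions.
from typing import Callable, Dict, List

def choose_action(actions: List[str],
                  beliefs: Dict[str, Dict[str, float]],   # action -> {outcome: probability}
                  utility: Callable[[str], float]) -> str:
    """Pick the action with the highest expected utility under the given beliefs."""
    def expected_utility(action: str) -> float:
        return sum(p * utility(outcome) for outcome, p in beliefs[action].items())
    return max(actions, key=expected_utility)

# Hypothetical numbers, purely to show the mechanics:
actions = ["wait", "act"]
beliefs = {"wait": {"status_quo": 1.0},
           "act": {"success": 0.6, "failure": 0.4}}
utility = {"status_quo": 0.0, "success": 10.0, "failure": -5.0}.get
print(choose_action(actions, beliefs, utility))  # -> "act" (expected utility 4.0 vs 0.0)
```

Whether current AI systems are usefully described by this kind of explicit agent model is exactly the point in dispute above.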
In addition, they haven’t been updated in the light of new developments, something else you have to do to track reality. Tracking reality has a cost: you have to change your mind and admit you are wrong. If you don’t experience the discomfort of doing that, you are not tracking reality.
People other than Yudkowsky have written about AI safety from the perspective of how real-world AIs work, but adding that in just makes the overall mass of information larger and more confusing.
where there is no “platonic” argument for any non-trivial claim describable in only two sentences, and yet things continue to be true
You are confusing truth and justification.
@Tarnish
You need to say something about motivation.
@avturchin
Same problem. Yes, there are lots of means. That’s not the weak spot. The weak spot is motivation.
@Odd anon
Same problem. You’ve done nothing to fill the gap between “ASI will happen” and “ASI will kill us all”.
In general, I agree with you: we can’t prove with certainty that AI will kill everyone. We can only establish a significant probability (which we also can’t measure precisely).
My point is that some AI catastrophe scenarios don’t require AI motivation. For example:
- A human could use narrow AI to develop a biological virus
- An Earth-scale singleton AI could suffer from a catastrophic error
- An AI arms race could lead to a world war