Intelligence has no upper limit, rather than diminishing sharply in relative utility
It seems to me that there is a large space of intermediate claims that I interpret the letter as falling into. Namely, that if there exists an upper limit to intelligence, or a point at which the utility diminishes enough to not be worth throwing more compute cycles at it, humans are not yet approaching that limit. Returns can diminish for a long time while still being worth pursuing.
you have NO EVIDENCE that AGI is hostile or as capable as you claim, nor support for any of your claims.
“No evidence” is a very different thing from “have not yet directly observed the phenomenon in question”. There is, in fact, evidence from other observations. It has not yet raised the probability to [probability 1](https://www.lesswrong.com/posts/QGkYCwyC7wTDyt3yT/0-and-1-are-not-probabilities), but there does exist such a thing as weak evidence, or strong-but-inconclusive evidence. There is evidence for this claim, and evidence for the counterclaim; we find ourselves in the position of actually needing to look at and weigh the evidence in question.
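To make "weak evidence" concrete, here is a minimal Bayesian-update sketch (the prior and likelihood ratios are purely illustrative, not from either side of this exchange): several modest updates can move a probability substantially without it ever reaching 1.

```python
# Minimal Bayesian-update sketch; the prior and likelihood ratios are
# hypothetical numbers chosen purely for illustration.

def update(prior: float, likelihood_ratio: float) -> float:
    """Posterior via odds form: posterior odds = prior odds * likelihood ratio."""
    prior_odds = prior / (1.0 - prior)
    posterior_odds = prior_odds * likelihood_ratio
    return posterior_odds / (1.0 + posterior_odds)

p = 0.10                    # hypothetical prior for the claim in question
for lr in (2.0, 2.0, 2.0):  # three independent pieces of weak evidence (LR = 2)
    p = update(p, lr)
print(f"posterior after three weak updates: {p:.2f}")  # ~0.47, well short of 1
```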
For the first claim, the succinct argument is to consider the details of the key barriers.
Imagine the machine is trying to convince a human it doesn’t know to do something in the machine’s favor. You can model increasing intelligence as allowing the machine to search an ever wider space of possible hidden states for the human, and of messages it could emit.
But none of this does more than marginally improve pSuccess. For this task I will claim the odds of success with human intelligence are 10 percent, and with infinite intelligence, 20 percent. pSuccess climbs toward 20 percent only logarithmically in the compute spent, so each increment costs exponentially more.
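Read as a quantitative claim, this says pSuccess saturates. A toy model of that shape, where the 10 and 20 percent are the figures claimed above and the rate per doubling is an assumed constant, not from the source:

```python
import math

# Toy model of diminishing returns: pSuccess climbs from the human baseline
# toward a hard ceiling, closing a fixed fraction of the remaining gap per
# doubling of compute. The 10%/20% figures are the ones claimed above; the
# rate per doubling is an assumed constant.

P_HUMAN, P_CEILING = 0.10, 0.20
GAP_CLOSED_PER_DOUBLING = 0.25

def p_success(compute_multiplier: float) -> float:
    doublings = math.log2(compute_multiplier)
    remaining_gap = (1.0 - GAP_CLOSED_PER_DOUBLING) ** doublings
    return P_CEILING - (P_CEILING - P_HUMAN) * remaining_gap

for mult in (1, 10, 100, 1_000_000):
    print(f"{mult:>9,}x compute -> pSuccess = {p_success(mult):.4f}")
```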
Either way, the machine is probably going to fail. I am claiming there are thousands of real-world tasks on the path to conquering the planet with such a high pFail.
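The arithmetic behind that claim: if conquest requires chaining many such tasks, and each independently succeeds with probability at most 0.2, the compounded probability collapses. Independence and the per-task ceiling are assumptions; the sketch only shows the direction of the numbers.

```python
# Compounded success over a chain of hard tasks, assuming independence and
# the 20% per-task ceiling from above. Both assumptions are doing real work
# here; the sketch only shows the direction of the arithmetic.

P_TASK = 0.20
for n_tasks in (10, 100, 1_000):
    p_all = P_TASK ** n_tasks
    print(f"{n_tasks:>5} tasks -> p(all succeed) = {p_all:.3e}")
# 10 tasks already gives ~1e-07; 1,000 tasks underflows to 0 in floating point
```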
The way the machine wins is by overwhelming force, the same way you win any war. And real-world force faces a number of barriers to acquisition.
For the second, again, debates are one thing. Taking costly action (delays, nuclear war) is another. I am saying it is irrational to take costly actions without direct evidence.