What’s the LW take on Curtis Yarvin’s article about AI safety?
The TL;DR is that Yarvin argues that an AGI system can only affect the virtual world, while humans can affect both the virtual and the physical world. The AGI/virtual world depends on the physical world to provide it electricity (and potentially internet access) to function, while the physical world is not nearly as dependent on the virtual world (though in practice it is to an extent, due to social systems and humans adopting these technologies). This produces a system analogous to slavery, due to the power differential between the two groups/worlds.
He also argues that intelligence has diminishing returns, using the example of addition:
“A cat has an IQ of 14. You have an IQ of 140. A superintelligence has an IQ of 14000. You understand addition much better than the cat. The superintelligence does not understand addition much better than you.”
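One way to make the diminishing-returns claim precise (this is my gloss, not Yarvin's formalism) is to say that performance on any fixed, bounded task saturates as intelligence grows:

$$f(I) = f_{\max}\left(1 - e^{-I/c}\right), \qquad f'(I) = \frac{f_{\max}}{c}\, e^{-I/c} \xrightarrow{\; I \to \infty \;} 0$$

so the cat-to-human gap $f(140) - f(14)$ can dwarf the human-to-superintelligence gap $f(14000) - f(140)$ even though the raw IQ gap is far larger in the second case. The weakness, as noted below, is that this only holds per fixed task: it says nothing about speed, parallelism, or tasks whose ceiling humans never reach in the first place.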
Personally I think this is a bad argument, as an AGI can overcome the constraints of human information-processing speed and energy levels, which I view as substantial bottlenecks on what humans can accomplish.
As for his thoughts on the balance of power between humans and AGI systems, I think this would hold in the early days of a “foom” but would become less relevant as the AGI becomes more embedded within the economic and political systems of a nation.
>A cat has an IQ of 14. You have an IQ of 140. A superintelligence has an IQ of 14000. You understand addition much better than the cat. The superintelligence does not understand addition much better than you.
I think I’d focus more on qualitative differences than quantitative ones. E.g., an AI system was able to solve protein folding when humans couldn’t despite great effort; this points at a future AI being able to design synthetic life and molecular nanotechnology, which would be qualitatively different from anything humans can do.
(Though, disjunctively, there are also plenty of paths by which speed alone is sufficient to take over the world, i.e. things that a time-dilated human would be able to do; a mind running 1,000× faster than a human experiences nearly three subjective years per wall-clock day.)
AFAICT this is the crux: Yarvin seems to think that superintelligence can’t exist, which he argues via a series of considerations that would matter for an AGI only as smart as a top-tier human, but which become minor speedbumps at most as soon as intelligence advances any further than that.
(Overall, I think the linked article reinforces my preexisting impression that Curtis Yarvin is a fool.)
>Overall, I think the linked article reinforces my preexisting impression that Curtis Yarvin is a fool.
Given he was in the SMPY (Study of Mathematically Precocious Youth), I don’t think intelligence is what’s preventing him from understanding this issue; rather, he seems to have approached it uncritically and overconfidently. In effect, indistinguishable from a fool.
>The superintelligence does not understand addition much better than you.

I am sure that Andrew Wiles understands addition much better than me.