Hanson seems to agree that if we get human-level agents that are cheap to run, this gets us a local takeover. I don’t think that having cheap chimp-level agents widely available at that time overturns the advantage of gaining access to cheap human-level agents. So even if we grant that AI capability increases gradually and publicly, all that a local group needs in order to take over the world is to make the step from chimp-level state-of-the-art agents to human-level agents before any other group does. If chimp-level agents are not that different from human-level agents, this could be a relatively small step, one that a single group is capable of making in secret.
(I understand the limitations of this classification, but we could just as well see this as analogous to how evolution produced humans from their ancestors with only a little extra capacity, and that little bit mattered a lot, given sufficient time to express itself. The local team is not developing human culture; it is making humans where we previously had only chimps. And then it gives those humans cheap computing power to run on and lets them develop their powerful culture.)
I don’t have an intuition for what would happen if you ran a chimp-level intelligence very fast. The ratio Yudkowsky mentioned in the recording was 2500 years of human-in-skull thinking = 8 hours of human-in-laptop thinking. Is it completely obvious that 2500 years of chimp thinking would yield nothing interesting or dangerous?
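As a rough sanity check on that ratio (my own back-of-the-envelope arithmetic, not a figure from the recording), the implied speedup of in-laptop over in-skull thinking comes out to roughly 2.7 million:

```python
# Back-of-the-envelope: implied speedup of "in-laptop" vs. "in-skull" thinking,
# using the 2500 years ~= 8 hours ratio mentioned above.
HOURS_PER_YEAR = 365.25 * 24           # ~8766 hours in a year

skull_hours = 2500 * HOURS_PER_YEAR    # 2500 subjective years, expressed in hours
laptop_hours = 8                       # wall-clock time for the same amount of thinking

speedup = skull_hours / laptop_hours   # ~2.7 million
print(f"Implied speedup: ~{speedup:,.0f}x")
```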
Chimps haven’t accomplished much in the last 2500 years but that’s at least partly because they don’t pass on insights between generations. Can we stipulate 2500 years of chimp memory, too?