Is the disagreement about the speed and scope of the power of intelligence, recursively applied to improving intelligence (whether in ems or AIs)?
By “speed”, I mean the equivalent of the neutron multiplication number in the Fermi analogy. Is Robin saying that, whatever this number is, it won’t be so large that an error in estimating it matters: even if it’s higher than estimated, improvement will still be on a timescale that allows for human control (just as, if Fermi had been off a bit, there would still have been time to shove in the control rods)? In particular, the improvement rate won’t be so large that it can’t be modelled with traditional economic tools. Eliezer, by contrast, seems to think the situation is as if Fermi had actually put together an almost-bomb, where being off a bit would have resulted in a nuclear FOOM. The sketch below shows how sensitive the outcome is to this number.
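To make that sensitivity concrete, here is a toy calculation of my own (not from either debater): treat capability as multiplying by a factor k each “generation” of self-improvement, the way a neutron population multiplies in a pile. The numbers and the 10^6 growth target are arbitrary illustrations.

```python
import math

def generations_to_multiply(k: float, target: float = 1e6) -> float:
    """Generations needed for capability to grow `target`-fold when each
    generation multiplies it by k (requires k > 1)."""
    return math.log(target) / math.log(k)

# A small change in k swings the required number of generations by
# orders of magnitude; if each generation is fast, even a modest k > 1
# leaves little room for human reaction.
for k in (1.001, 1.01, 1.1, 2.0):
    print(f"k = {k}: ~{generations_to_multiply(k):,.0f} generations for a 10^6-fold gain")
```

With k = 1.001 it takes roughly 14,000 generations to grow a million-fold; with k = 2 it takes about 20. The disagreement, on this framing, is over where k sits and how long a generation lasts.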
By “scope”, I mean the eventual level of technology reachable once the improvement process has hit its limits. In the Fermi analogy, I guess this is the difference between the nuclear and the electrochemical energy scales. Is there disagreement about what might eventually be achieved by very intelligent entities?
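For a sense of how large that gap is in the analogy, here is a back-of-the-envelope comparison of my own, using standard physics figures: chemical reactions release on the order of 1 eV per atom, while fission of a U-235 nucleus releases about 200 MeV.

```python
# Rough energy-scale comparison behind the analogy (standard figures;
# the comparison itself is my own illustration, not from the debate):
chemical_ev_per_atom = 1.0        # typical chemical reaction, ~1 eV per atom
fission_ev_per_nucleus = 200e6    # U-235 fission, ~200 MeV per nucleus

ratio = fission_ev_per_nucleus / chemical_ev_per_atom
print(f"nuclear/chemical energy-scale ratio: ~{ratio:.0e}")  # ~2e+08
```

The “scope” question is whether intelligence has a similarly vast regime lying beyond what is reachable today.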
My intuition is that a hard takeoff is unlikely, but the potential catastrophe is so huge that Friendliness is a worthwhile study.
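The arithmetic behind that intuition is just expected loss; a minimal sketch with numbers I have chosen purely for illustration:

```python
# Illustrative expected-loss arithmetic (all numbers hypothetical):
p_hard_takeoff = 0.01        # assumed small probability of a hard takeoff
loss_if_it_happens = 1e10    # stand-in for a "huge" catastrophe, arbitrary units

expected_loss = p_hard_takeoff * loss_if_it_happens
print(f"expected loss: {expected_loss:.0e}")
# → 1e+08: even at a low probability, the expected loss can be large
#   enough to justify studying Friendliness.
```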