What territory? This entire discussion has been about a counterfactual to guide intuition in an analogy. There is no territory here. The analogy is nukes → AGI, 1800s scientists → us, bomb that ignites the atmosphere → rapid ASI.
This makes me extraordinarily confused as to why you are even considering superintelligent 1800s scientists. What does that correspond to in the analogy?
I think you might be saying that, just as superintelligent 1800s researchers could have determined what sorts of at-least-city-destroying bombs ordinary human research was likely to find, so if we were superintelligent we could determine whether ASI is likely to follow rapidly from AGI?
If so, I guess I agree with that but I’m not sure it actually gets us anywhere? From my reading, Rob’s point was about ordinary human 1800s scientists who didn’t know the laws of physics that govern at-least-city-destroying bombs, just as we don’t know what the universe’s rules are for AGI and ASI.
We simply don’t know whether the path from here to AGI is inherently (due to the way the universe works) jumpy or smooth, fast or slow. No amount of reasoning about aircraft, bombs, or the economy will tell us either.
We have exactly one example of AGI to point at, and that one was developed by utterly different processes. What we can say is that other primates with a fraction of our brain size don’t seem to have anywhere near that fraction of the ability to deal with complex concepts, but we don’t know why, nor whether this has any implications for AGI research.
I’ll spell out what I see as the point:
The hypothetical 1800s scientists were making mistakes of reasoning that we could now do better than. Not just because we know more about physics, but because we know better how to conduct arguments about physical law and novel phenomena.
I find this claim interesting enough to be worth discussing on its own.
What this says by analogy about Rob’s arguments depends on what you translate and what you don’t.
On one view, it says that Rob is failing to take advantage of reasoning about intelligence that we could do now, because we know better ways of taking advantage of information than they did in 1800.
On another view, it says that Rob is only failing to take advantage of reasoning that future people would be aware of. The future people would be better than us at thinking about intelligence by as much as we are better than people in 1800 at thinking about physics.
I think the first analogy is closer to right and the second closer to wrong. You can't just make an analogy to a time when people were ignorant and thereby refute anyone who claims that we can now be less ignorant.
Okay, so we both took completely different things as being “the point”. One of the hazards of resorting to analogies.