A hypothetical inconvenient universe in which any nuclear weapon ignites a chain reaction in the environment would be entirely consistent with 1800s observations
I think this reveals to me that we’ve been confusing the map with the territory. People in the 1800s had plenty of information. If they were superintelligent they would have looked at light, and the rules of alchemy, and the color metals make when put into a flame, and figured out relativistic quantum mechanics. Given that, simple density measurements of pure metals would imply the shell structure of the nucleus.
It’s not like the information wasn’t there. The information was all there, casting shadows over their everyday world. It was just hard to figure out.
Thus, any arguments about what could have been known in 1800 have to secretly be playing by extra rules. The strictest rules are “What do we think an average 1800 natural philosopher would actually say?” In which case, sure, they wouldn’t know bupkis about nukes. The arguments I gave could be made using observations someone in 1800 might have found salient, but import a much more modern view of the character of physical law.
What territory? This entire discussion has been about a counterfactual to guide intuition in an analogy. There is no territory here. The analogy is nukes → AGI, 1800s scientists → us, bomb that ignites the atmosphere → rapid ASI.
This makes me extraordinarily confused as to why you are even considering superintelligent 1800s scientists. What does that correspond to in the analogy?
I think you might be saying that, just as superintelligent 1800s researchers could determine what sorts of at-least-city-destroying bombs ordinary human research is likely to find, so we, if we were superintelligent, could determine whether ASI is likely to follow rapidly from AGI?
If so, I guess I agree with that but I’m not sure it actually gets us anywhere? From my reading, Rob’s point was about ordinary human 1800s scientists who didn’t know the laws of physics that govern at-least-city-destroying bombs, just as we don’t know what the universe’s rules are for AGI and ASI.
We simply don’t know whether the path from here to AGI is inherently (due to the way the universe works) jumpy or smooth, fast or slow. No amount of reasoning about aircraft, bombs, or the economy will tell us either.
We have exactly one example of AGI to point at, and that one was developed by utterly different processes. What we can say is that other primates with a fraction of our brain size don’t seem to have anywhere near that fraction of the ability to deal with complex concepts, but we don’t know why, nor whether this has any implications for AGI research.
The hypothetical 1800s scientists were making mistakes of reasoning that we could now do better than. Not even just because we know more about physics, but because we know better how to construct arguments about physical law and novel phenomena.
I find this interesting enough to be worth discussing on its own.
What this says by analogy about Rob’s arguments depends on what you translate and what you don’t.
On one view, it says that Rob is failing to take advantage of reasoning about intelligence that we could do now, because we know better ways of taking advantage of information than they did in 1800.
On another view, it says that Rob is only failing to take advantage of reasoning that future people would be aware of. The future people would be better than us at thinking about intelligence by as much as we are better than 1800 people at thinking about physics.
I think the first analogy is closer to right and the second is closer to wrong. You can’t just make an analogy to a time when people were ignorant and universally refute anyone who claims that we can be less ignorant.
Okay, so we both took completely different things as being “the point”. One of the hazards of resorting to analogies.