I think that in 1800 (or at least 1810) you could actually have predicted that nukes wouldn’t destroy the planet, and that it would be large organizations rather than mad scientists who built them.
The reasoning for not destroying the earth is similar to the argument that the LHC won’t destroy the earth. The LHC is probably fine because high-energy cosmic rays hit us all the time and we’re fine. Is this future bomb dangerous because it creates a chain reaction? Meteors hit us and volcanoes erupt without creating chain reactions. Is this bomb super-dangerous because it collects some material? The earth is full of different concentrations of stuff, so why haven’t we exploded by chance? (E.g., if x-rays from the first atom bomb were going to sterilize the planet, natural nuclear reactors would have sterilized the planet already.) This reasoning isn’t airtight, but it’s still really strong.
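As a rough sketch of why this kind of “nature has already run the experiment” reasoning is strong but not airtight, here is a toy bound. Everything numeric in it (especially the trial count) is a made-up placeholder, not an estimate of anything real:

```python
# Toy "rule of three" bound: if a doomsday mechanism would have fired in any of
# N comparable natural events (cosmic-ray impacts, meteor strikes, freak ore
# concentrations) and it never did, then at 95% confidence its per-event
# probability is at most roughly 3/N.

def per_event_upper_bound(n_trials: int, confidence: float = 0.95) -> float:
    """Upper bound on per-event probability given zero occurrences in n_trials."""
    return 1.0 - (1.0 - confidence) ** (1.0 / n_trials)  # ~= 3/n_trials at 95%

n_natural_trials = 10**9  # hypothetical count of comparable natural events
print(per_event_upper_bound(n_natural_trials))  # ~3e-9 per event

# The "not airtight" part: the bound only covers mechanisms the natural events
# actually exercised. A bomb that differs along an untested dimension (say,
# concentrating a material nature never concentrates) escapes it entirely.
```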
As for project size: a project needs to exert real effort to get around these restrictions. It’s like the question of whether you could blow up a city by putting sand in a microwave. We can be confident that nothing bad happens even without trying it, because things have happened that are similar along a lot of dimensions, and the character of physical law is such that we would have seen a shadow of this city-blowing-up mechanism even in things that were only somewhat similar. To blow up a city it’s (very likely) not sufficient to put stuff together in a somewhat new configuration along well-explored dimensions; you need to actually make changes along dimensions that we haven’t tried yet (like by making your bomb out of something expensive).
These arguments potentially work less well for AGI than they do for nukes, but I think the case of nukes, and Rob’s intuitions, are still pretty interesting.
I don’t think this is really strong even for nukes. We know that the LHC was always going to be extremely unlikely to destroy the planet because we know how it works in great detail, and because we know of natural particles with the same properties as those in the LHC. If there were no cosmic rays of similar energy to compare against, should our probability of world destruction be larger?
If the aliens had told us in 1800 that “the bomb will be based on triggering an extremely fast chain reaction” (which is absolutely true), how far upward should we have revised our probability?
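To make “how far upward” concrete, here is a minimal odds-form Bayes sketch. Every number in it is invented for illustration; the whole difficulty is that an 1800s observer has almost no way to estimate the likelihood ratio:

```python
# Toy odds-form Bayes update for the aliens' hint. Every number here is a
# hypothetical placeholder, not an estimate anyone actually defended in 1800.

prior_destroy = 0.01            # hypothetical prior that the bomb destroys the world

# Likelihoods of hearing "it works by an extremely fast chain reaction"
# under each hypothesis -- pure guesses for illustration.
p_hint_given_destroy = 0.9      # runaway chain reactions sound world-ending
p_hint_given_safe = 0.3         # but plenty of safe devices could work that way too

prior_odds = prior_destroy / (1 - prior_destroy)
likelihood_ratio = p_hint_given_destroy / p_hint_given_safe
posterior_odds = prior_odds * likelihood_ratio
posterior_destroy = posterior_odds / (1 + posterior_odds)

print(f"prior: {prior_destroy:.3f} -> posterior: {posterior_destroy:.3f}")
# With these made-up numbers the probability roughly triples; with a likelihood
# ratio near 1 it barely moves. The disagreement is really about that ratio.
```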
A hypothetical inconvenient universe in which any nuclear weapon ignites a chain reaction in the environment would be entirely consistent with 1800s observations, and neither cosmic rays nor natural nuclear reactors would rule it out. Besides which, neither of those was known to scientists in 1800, so they could not have served as evidence against the hypothesis anyway.
Of course, we’re also in the privileged position of looking back on an extreme event, unprecedented in natural conditions, that didn’t destroy the world. Anyone who has already survived one or more such events is going to be biased toward “unprecedented extreme events don’t destroy the world, just look at history”.
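A minimal simulation of that survivorship effect, with invented numbers: even if each unprecedented event carried a substantial extinction risk, every observer still around to consult history sees a spotless record, so the record alone understates the risk.

```python
import random

# Toy survivorship-bias simulation. The 10% per-event extinction risk and the
# event count are arbitrary placeholders, not claims about nukes or AGI.

random.seed(0)
P_EXTINCTION_PER_EVENT = 0.10   # hypothetical true risk of each unprecedented event
EVENTS_PER_WORLD = 5            # hypothetical number of such events in a world's history
N_WORLDS = 100_000

survivors = 0
for _ in range(N_WORLDS):
    if all(random.random() > P_EXTINCTION_PER_EVENT for _ in range(EVENTS_PER_WORLD)):
        survivors += 1

# Every surviving world's historians see "we ran 5 extreme events and nothing
# terrible happened", even though the true per-event risk was 10%.
print(f"fraction of worlds with observers left to write history: {survivors / N_WORLDS:.2f}")
```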
A hypothetical inconvenient universe in which any nuclear weapon ignites a chain reaction in the environment would be entirely consistent with 1800s observations
I think this reveals that we’ve been confusing the map with the territory. People in the 1800s had plenty of information. If they had been superintelligent they could have looked at light, and the rules of alchemy, and the color metals make when put into a flame, and figured out relativistic quantum mechanics. Given that, simple density measurements of pure metals would imply the shell structure of the nucleus.
It’s not like the information wasn’t there. The information was all there, casting shadows over their everyday world. It was just hard to figure out.
Thus, any arguments about what could have been known in 1800 have to secretly be playing by extra rules. The strictest rules are “What do we think an average 1800 natural philosopher would actually say?” In which case, sure, they wouldn’t know bupkis about nukes. The arguments I gave could be made using observations someone in 1800 might have found salient, but import a much more modern view of the character of physical law.
What territory? This entire discussion has been about a counterfactual to guide intuition in an analogy. There is no territory here. The analogy is nukes → AGI, 1800s scientists → us, bomb that ignites the atmosphere → rapid ASI.
This makes me extraordinarily confused as to why you are even considering superintelligent 1800s scientists. What does that correspond to in the analogy?
I think you might be saying that, just as superintelligent 1800s researchers could determine what sorts of at-least-city-destroying bombs are likely to be found by ordinary human research, if we were superintelligent then we could determine whether ASI is likely to follow rapidly from AGI?
If so, I guess I agree with that, but I’m not sure it actually gets us anywhere? From my reading, Rob’s point was about ordinary human 1800s scientists who didn’t know the laws of physics that govern at-least-city-destroying bombs, just as we don’t know what the universe’s rules are for AGI and ASI.
We simply don’t know whether the path from here to AGI is inherently (due to the way the universe works) jumpy or smooth, fast or slow. No amount of reasoning about aircraft, bombs, or the economy will tell us either.
We have exactly one example of AGI to point at, and that one was developed by utterly different processes. What we can say is that other primates with a fraction of our brain size don’t seem to have anywhere near that fraction of the ability to deal with complex concepts, but we don’t know why, nor whether this has any implications for AGI research.
I’ll spell out what I see as the point:
The hypothetical 1800s scientists were making mistakes of reasoning that we could now do better than. Not even just because we know more about physics, but because we know better how to deploy arguments about physical law and novel phenomena.
I find this claim interesting enough that its plausibility is worth discussing on its own.
What this says by analogy about Rob’s arguments depends on what you translate and what you don’t.
On one view, it says that Rob is failing to take advantage of reasoning about intelligence that we could do now, because we know better ways of taking advantage of information than they did in 1800.
On another view, it says that Rob is only failing to take advantage of reasoning that future people would be aware of. The future people would be better than us at thinking about intelligence by as much as we are better than 1800 people at thinking about physics.
I think the first analogy is closer to right and the second is closer to wrong. You can’t just make an analogy to a time when people were ignorant and use it to refute anyone who claims that we can now be less ignorant.
Okay, so we both took completely different things as being “the point”. One of the hazards of resorting to analogies.