I agree in the abstract, but I don’t think your reasons are mechanistic enough to be a reason to chill in action; they’re only a reason to chill in emotion. We’re going to solve it, but only because we are calmly “panicking” and putting in the work to actually understand and befriend our nonhuman children, and it is going to require the field to keep growing rapidly, which I think it will in fact continue to do. We have, like, a couple more years, but we don’t need to panic to make it happen, because safety is a fundamental component of the next stage of capability.
I do think you have a point that we can’t reliably predict the behavior of a system that is superintelligent on everything, but there’s reason to believe that any system that gains a will to live (a very common attribute in physical systems, whether an organization, a partially completed chemical reaction, an individual organism, a species, or a meme) will want to take action to survive. It might even want to grow rapidly. These “it might”s are the same kind of maybe as when talking about what a person might want: we can know ahead of time that these are things smart people sometimes want, due to experiences or innate features. Certainly we don’t know exactly what it will want, but the problem with raising kids from a new species nobody has met before is exactly that we don’t know how they’ll grow up. If we did have a solid idea of how a superintelligent system would behave, that should make us feel better, in fact.
So I don’t really disagree in any deep way. But I think your reasons to have hope are in fact reasons to be worried, and the real reason to have hope is that we’ll understand in time if we hurry.