Nice work. Is this meant to be persuasive (for those raising concerns of burdensome details), prescriptive (there could be safeguards ineffective against an ASI but effective against a “dumb” AI), or both?
Similarly, absent formal resolution of the alignment problem, do you think there are mitigatory avenues available against an MSA? That is, things to which we'd devote 10% of our safety resources, if we believed that DSA is 90% likely and MSA is 10% likely, conditional on the emergence of general intelligence.
Both.
Whether there are mitigatory avenues—I would assume so, but it feels hard to know what exactly they might be. So much depends on the general landscape of both society and technology, and on the exact details of how these technologies turn out to work. For instance, if it turns out that a lot of corporations start employing proto-AGI systems to run their daily business, then maybe you could tackle that with some kind of regulation. But that's assuming a specific scenario, and even within that scenario, there are probably a lot of nuances you'd need to get right for the regulation to be effective (most of which I wouldn't know about, since I'm not an expert on regulation).