It is my impression (perhaps wrong!) that the community uses this kind of example intentionally, if subconsciously, to prove a point. The implication seems to be that how an AGI attacks doesn’t matter, because we can’t possibly predict it—and by extension, that we should spend all of our brain cycles trying to figure out how to align it instead.
I think it’s more that this is actually our best guess at how a post-intelligence-explosion superintelligence might attack. That said, I agree that presenting this version to people who don’t have a lot of context is a problem; I’ve run into it myself, and I like your attempt to fix it.
My compressed version in these conversations usually goes something like this:
By the time this happens there may well be lots of autonomous weapons around to hack, and we will be even more dependent on our technological infrastructure. The AI would not act until it was highly confident we could not pose a threat to it; in the meantime it can simply hide and quietly sabotage other AGI projects. So even if it isn’t immediately doing hard recursive self-improvement, we’re not safe.
If they seem interested I also talk about intercepting and modifying communications on a global scale, blackmail, and other manipulations.
Thanks for your reply! I like your compressed version; it feels like it would land with a fair number of people. I like to think about trying to explain these concepts to my parents. My dad is a healthcare professional: very competent with machines, can do math, can fix a computer. If I told him superintelligent AI would make nanomachine weapons, he would glaze over. But I think he could imagine having our missile systems taken over by a “next-generation virus.”
My mom has no technical background or interests, so she represents my harder test. If I read her that paragraph she’d have no emotional reaction or lasting memory of the content. I worry that many of the people who are the most important to convince fall into this category.