To get Robin worried about AI doom, I’d need to convince him that there’s a different metric he needs to be tracking.
That, or explain what factors should lead Robin to update his timeline for AI/computer automation taking “most” of the jobs.
AI Doom Scenario
Robin’s take here strikes me as both the stance of an uncooperative thought-experiment participant and a decently considered position. It’s as if he hasn’t actually skimmed the top doom scenarios discussed in this space, and that’s coming from me, someone who has probably thought less about this space than Robin has. (See also his equating corporations with superintelligence: he’s not keyed into the doomer use of the term and isn’t paying attention to the range of values it could take.)
On the other hand, I find some affinity here with my own skepticism of AI doom; my sense is that the common ground lies in the notion that authorization lines will be important.
On the other other hand, once the authorization bailey is under siege by the superhuman intelligence aspect of the scenario, Robin retreats to the motte that there will be billions of AIs and (I guess unlike humans?) they can’t coordinate. Sure, corporations haven’t taken over the government and there isn’t one world government, but in many cases, tens of millions of people coordinate to form a polity, so why would we assume all AI agents will counteract each other?
It was definitely a fun section and I appreciate Robin making these points, but I’m finding myself about as unassuaged by Robin’s thoughts here as I am by my own.
Robin: We have this abstract conception of what it might eventually become, but we can’t use that abstract conception to do very much now about the problems that might arise. We’ll need to wait until they are realized more.
When talking about doom, I think a pretty natural comparison is nuclear weapon development. And I believe that analogy highlights how much more right Robin is here than doomers might give him credit for. Obviously a lot of abstract thinking and scenario consideration went into developing the atomic bomb, but a lot of safeguards were also developed as they built prototypes and encountered snags. If Robin is correct that no prototype or abstraction will allow us to address safety concerns, and that we instead need to be dealing with the real thing to understand it, then I think a biosafety analogy still helps his point. If you’re dealing with GPT-10 before public release, train it, give it no authorization lines, and train the people (plural) studying it not to follow its directions. In line with Robin’s competition views, use GPT-9 agents to help out on assessments if need be. But again, Robin’s perspective here falls flat and offers little assurance if it just devolves into “let it into the wild, then deal with it.”
A great debate and post, thanks!
Thanks for your comments. I don’t get how nuclear and biosafety represent models of success. Humanity rose to meet those challenges, but not quite adequately, and half the reason society hasn’t collapsed from, e.g., a first thermonuclear explosion going off, whether intentionally or accidentally, is pure luck. All it takes to topple humanity is something like nukes but a little harder to coordinate on (or much harder).