The strongest counterargument offered was that a scope-limited AI doesn’t stop rogue unfriendly AIs from arising and destroying the world.
Maybe I misinterpreted the argument. If it means that we need an unbounded friendly AI to deal with unbounded unfriendly AIs, it makes more sense. The question then comes down to a trade-off: how likely is it that, once someone discovers AGI, others will be able to discover it as well or make use of the discovery, versus the payoff from experimenting with bounded versions of that AGI design before running an unbounded friendly version? In other words, how much can we increase our confidence that we have solved friendliness by experimenting with bounded versions, versus the risk we take on by not moving to take over the world as soon as possible to impede unfriendly unbounded versions?