This article is mostly historical analogy. Historical analogy is blind to unprecedented change, and unprecedented change is routine today. In this case, the author fails to address an important and predictable impending change in the cost of surveillance, and consequently an unprecedented stabilization of the state’s monopoly on force and a raising of the limits on how sophisticated and thorough a policy can be while still being enforceable at scale.
In 10 to 20 years, when tensor processors are cheap and power-efficient, it will be common for networks of self-replenishing autonomous drones to surveil and police vast areas of land. It’s obvious that this will be deployed as soon as it’s cost-effective in Gaza (so, potentially even before analog TPUs arrive), and it’s probable that Israel will try to legitimize it by presenting it as a law enforcement tool placed (mostly) under the control of their chosen Palestinian authorities. Hamas will cease to exist, Palestine will appear peaceful, and it will rebuild. China will use that as a pretext to start using its own police swarms at home. I can’t see further ahead than that. But it’s entirely possible the practice just keeps spreading, given the obvious social benefit of simply not having violent crime any more.
And at that point it becomes possible for an increasingly correlated elite consensus to ban, utterly, more things. And with the threat of uprising totally banished, we may lose a moderating force that we didn’t know we had. We may see a lot more restrictions. I don’t know if that makes an indefinite ban on AGI a genuine risk, but it does invalidate an argument by historical analogy!
Matthew Barnett has argued that a regulatory ratchet within existing institutions might accomplish a 50-year pause in AI research.
I’m also not sure that a 50-year ban isn’t a dystopia, given that 50 years (plus 10) is long enough for most of the people I love to die of old age, and me along with them. I think I’m not alone in considering it… very conceivably arguable that another cycle of the churning of mortality would be almost as bad an outcome as extinction and misalignment. I’m still not sure; it depends on sub-questions about the evolutionary tendencies of misaligned AGI ecosystems (i.e., how kind they would be to each other and how beautiful a world they would build), and on questions about the preferences of humanity that a few of us are wise enough, or human enough, to answer. But it’s definitely not something I’d hope to see.
In 10 to 20 years, when tensor processors are cheap and power-efficient, it will be common for networks of self-replenishing autonomous drones to surveil and police vast areas of land.

Is there a betting market for this?

The thought of making one crossed my mind, but 10-year bets on things that seem obvious to me are unappealing. To bet on them is to stake my reputation not so much on the event as on my ability to convince the market, soon enough before the resolution date for me to exit, of something they’re currently denying for reasons I don’t understand (and if they aren’t in denial about it, I won’t make much by betting). It’s not a bet on reality, it’s a bet on the consensus reality.

I’m not used to that yet.
It feels like the game is: I make the market, and this is the first time they’ve ever heard this take. If I present it well, they bet the same way as me and I make no mana. If I present it poorly, they narcissize and bet badly, but there’s no guarantee they’ll reverse their bets long enough before the resolution date for it to be worth it to me.
This is an odd game.
So I guess masterful play would be to present the issue in a way that convinces people that I’m wrong about it, but in a way that’s unstable and will reverse within a year.
A very odd game.
But not a meritless one. There’s probably a lot of social good to be produced by learning to clown people like that.