Hard agree. I wrote my own post not long ago arguing much the same thing: at a fundamental economic level, a society in which a large class of people has no leverage and contributes no value, yet is kept around anyway, seems dramatically unstable, and I’d expect those people to quickly fall into various forms of serfdom or worse, eventually genocide. You make some compelling and more pragmatic arguments about how this would actually play out in the short term.
I think people hope for a change in culture, thinking that this mindset is the product of capitalism or what have you. I think it’s the product of scarcity, which is what we evolved under for hundreds of thousands of years (or at least got used to over thousands of years of agricultural civilization: hunter-gatherers may have had it better), and so there would inevitably be some disconnect if we transitioned to a truly post-scarcity society. But the speed of technological change might mean that culture would not be able to keep up: we would find ourselves scarcity-minded people in a post-scarcity world, and that spells trouble.

We’re already somewhat like that. Most worries about welfare cheats are nonsense at an object level, squabbling over tiny amounts of money that it’s actually more efficient to let go to waste rather than try to catch, yet they happen because people have this strong “punish the freeloader at all costs!” instinct because that’s what our culture has done, has had to do, with freeloaders for a long time. And even though now we could afford to let it slide more often than not with no big harm, people don’t grok that. Going into a post-AGI economy like this risks being a sequence of “oh, that group got disempowered by AI and now they’re unemployed? Well, if they can’t move on to do something else they must be lazy bums!” from everyone who has so far been lucky, until at the end only AI-owning capitalists are left and everyone else (including the developers who created the AIs) has been thrown under the bus.
most worries about welfare cheats are nonsense at an object level, squabbling over tiny amounts of money that it’s actually more efficient to let go to waste rather than try to catch
There’s a difference between “how many you actually catch” and “how many you actually catch, plus how many more are discouraged from cheating by the possibility of being caught”.
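To make the deterrence point concrete, here’s a toy sketch (Python, with entirely made-up numbers, not a claim about any real benefits system) of a would-be cheat weighing the expected gain from cheating against the expected penalty:

```python
# Toy deterrence model -- all numbers are made up for illustration.
# A would-be cheat compares the expected gain from cheating against
# the expected penalty if caught, and cheats only when it pays.

def expected_cheaters(population, gain, penalty, p_caught):
    """How many people cheat, if everyone cheats exactly when it has positive expected value."""
    ev_of_cheating = (1 - p_caught) * gain - p_caught * penalty
    return population if ev_of_cheating > 0 else 0

population = 1_000_000   # hypothetical pool of potential cheats
gain = 1_000             # benefit fraudulently claimed per person
penalty = 5_000          # fine (or clawback) if caught

# With no enforcement at all, cheating is free money:
print(expected_cheaters(population, gain, penalty, p_caught=0.0))  # 1000000

# Even a modest chance of being caught flips the expected value negative:
print(expected_cheaters(population, gain, penalty, p_caught=0.2))  # 0
```

In this toy model the fraud that enforcement “prevents” is everything between the two runs, which never shows up in a count of cases actually caught.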
yet they happen because people have this strong “punish the freeloader at all costs!” instinct because that’s what our culture has done, has had to do, with freeloaders for a long time.
Instincts to punish people are how actual humans precommit. And it’s necessary to precommit to punishment: if you don’t, the punishment will always be uneconomical when considered after the fact, so nobody would follow through on any threats to punish. Anyone who might be discouraged by punishment would then anticipate the lack of follow-through, so the threat of punishment would have no effect.
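A minimal game-theoretic sketch of that argument (Python, with made-up payoffs): a purely forward-looking punisher never follows through, because punishing costs them something after the fact, so a cheater who anticipates this cheats freely; a precommitted punisher makes the threat credible, and the cheating stops before it starts.

```python
# Minimal two-stage game, with made-up payoffs, illustrating why punishment
# only deters if it is precommitted.
#
# Stage 1: the cheater decides whether to cheat.
# Stage 2: the punisher decides whether to punish (punishing costs them something).

PUNISH_COST = 1   # punishing is always uneconomical for the punisher after the fact
CHEAT_GAIN = 3    # what the cheater gains by cheating
PUNISH_HARM = 5   # what punishment costs the cheater

def punisher_move(cheated, precommitted):
    """A purely forward-looking punisher never punishes; a precommitted one always does."""
    if not cheated:
        return False
    if precommitted:
        return True               # instinct / commitment forces follow-through
    return -PUNISH_COST > 0       # never true: ex post, punishment doesn't pay

def cheater_cheats(precommitted):
    """The cheater anticipates the punisher's response and best-responds to it."""
    punished = punisher_move(cheated=True, precommitted=precommitted)
    payoff_if_cheat = CHEAT_GAIN - (PUNISH_HARM if punished else 0)
    return payoff_if_cheat > 0    # the payoff of not cheating is 0

print(cheater_cheats(precommitted=False))  # True: the threat isn't credible, so cheat
print(cheater_cheats(precommitted=True))   # False: 3 - 5 < 0, so the threat deters
```

The instinct plays the role of the precommitted flag here: it removes the option of deciding, case by case, that punishment isn’t worth the cost.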
Instincts to punish people are how actual humans precommit.
i think you could equally frame this as “people precommit due to an expectation of reciprocity”. like, it’s not that i follow through on my commitments to plans with friends because i fear punishment for breaking them; it’s more that i expect whatever amount i invest into the friendship to be reciprocated (approximately).
you could frame the fallout of a commitment failure as “punishment”, but if the risk of punishment exceeded the benefit of cooperation, that would discourage me from precommitting, from interacting with the thing at all. if i thought my crush would beat me should i break things off with him, then i’d simply never ask him out to begin with, and we’d probably both be worse off for that.
I realise that much, but even with all that, I still think that realistically we’re vastly, vastly overcommitting resources and effort to preventing welfare fraud compared to the actual benefits. There’s a reason why some suggest that UBI might be not only a better but also a cheaper scheme, simply by virtue of removing all the bells and whistles of trying to double-check who is actually entitled and who isn’t. The kind of welfare fraud you’d need to worry about is “one guy pretends to be a million guys, and then a lot of other people imitate him”. Most benefits (I’m writing from the UK, but I’d guess this applies elsewhere too) are so thin that even if everyone were a scammer (a vast overestimate no matter what), you wouldn’t lose much compared to the total size of the national budget. In practice, the mildest and most superficial checks, enough to root out the obvious problems, would likely be all that’s needed. Instead we regularly err on the other side, with checks so expansive that they wrongly reject genuine claimants (so people who need the benefits go without) and also cost more than the benefit fraud they prevent.
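For what it’s worth, here’s the back-of-the-envelope shape of that claim as a sketch (Python, with entirely hypothetical numbers, and counting wrongly rejected genuine claimants as a loss of the scheme’s purpose rather than as savings):

```python
# Back-of-the-envelope comparison, with purely hypothetical numbers, of a strict
# means-testing regime vs. a light-touch one. Wrongly rejected genuine claimants
# are counted as a loss of the scheme's purpose, not as savings.

claimants = 5_000_000
benefit = 4_000        # average yearly payout per claimant
fraud_rate = 0.02      # fraction of claims that are fraudulent

def total_cost(admin_per_claim, fraud_caught_share, false_denial_rate):
    fraud_paid_out = claimants * fraud_rate * benefit * (1 - fraud_caught_share)
    admin = claimants * admin_per_claim
    wrongly_denied = claimants * (1 - fraud_rate) * false_denial_rate * benefit
    return fraud_paid_out + admin + wrongly_denied

strict = total_cost(admin_per_claim=300, fraud_caught_share=0.9, false_denial_rate=0.03)
light  = total_cost(admin_per_claim=30,  fraud_caught_share=0.3, false_denial_rate=0.0)

print(f"strict checks: {strict:,.0f}")  # 2,128,000,000 under these made-up figures
print(f"light checks:  {light:,.0f}")   #   430,000,000
```

Under these made-up figures the strict regime spends far more on administration and wrongful denials than it recovers in fraud; the real numbers will differ, but that’s the comparison being gestured at.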
It’s not a particularly rational system, not in a regime of fundamental abundance like the one we live in. We could afford a lot of slack before “lazy people who don’t want to work” became anything remotely close to a real economic problem. The reason is an ideological, cultural and sometimes religious commitment to the idea that work is sacred and that not working must be discouraged and punished, not economic sense or decision theory. That still sort of works today, but in a post-AGI world people would have to either abandon those ideals very quickly or build an incredibly cruel and self-destructive system that punishes people for not doing what no one needs them to do anyway.