I’m pretty sure (though not 100%) that “science doesn’t know for sure” that “benevolent government” is literally mathematically impossible. So I want to work on that! <3
Public choice theory probably comes closest to showing this. Please look into that if you haven’t already. And I’m interested to know what approach you want to work on.
Personally, I think the entire concept of government should be rederived from scratch, from first principles, and rebooted as a sort of “backup fallback government” for the entire planet, with AI and blockshit.
I think unfortunately this is very unlikely in the foreseeable future (absent superintelligent AI). Humans and their relationships are just too messy to fully model with our current theoretical tools, whereas existing institutions have often evolved to take more of human nature into account (e.g., academia leveraging people’s status striving to produce knowledge for the world, militaries leveraging solidarity with fellow soldiers to overcome selfishness/cowardice).
As an investor I’m keenly aware that we’re not even close to deriving the governance of a publicly held corporation from first principles. Once somebody solves that problem, I’d become much more excited about doing the same thing for government.
Public Choice Theory is a big field with lots and lots of nooks and crannies, and in my surveys so far I have not found a good clean proof that benevolent government is impossible.
If you know of a good clean argument that benevolent government is mathematically impossible, it would fill a giant hole in my current knowledge and help me close quite a few planning loops that are currently open. I would appreciate knowing the truth here for really real.
Broadly speaking, I’m pretty sure most governments over the last 10,000 years have been basically net-Evil slave empires, but the question here is sorta like: maybe this is because that’s how any “government-shaped economic arrangement” mathematically must be, or maybe it’s because of some contingent fact that merely happened to hold in general in the past…
...like most people over the last 10,000 years were illiterate savages who didn’t know any better, and that might explain the relatively “homogeneously evil” character of historical governments, and the way variation among governments seems restricted to the narrow range from “slightly more evil” to “slightly less evil”.
Or perhaps the problem is that all of human history has been human history, and there has never been an AI dictator, nor an AI general, nor an AI pope, nor an AI mega-celebrity, nor an AI CEO. Not once. Not ever. And so maybe, if that changed, we could “buck the trend line of generalized evil” in the future? A single inhumanly saintlike immortal leader might be all that it takes!
My hope is: despite the empirical truth that governments are evil in general, perhaps this evil has existed for contingent reasons (maybe many contingent reasons (like there might be 20 independent causes of a government being non-benevolent, and you have to fix every single one of them to get the benevolent result)).
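To make that conjunctive structure concrete (the numbers here are purely illustrative assumptions, not estimates): if there are $k$ independent causes of non-benevolence and each one gets successfully fixed with independent probability $p$, then

$$\Pr(\text{benevolent}) = p^{k},$$

so with $k = 20$ and $p = 0.9$ the chance that every single fix lands is $0.9^{20} \approx 0.12$, while $p = 0.5$ gives $0.5^{20} \approx 10^{-6}$. Conjunctive goals get hard fast, but “hard” is still categorically different from “impossible”.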
So long as it is logically possible to get a win condition, I think grit is the right virtue to emphasize in the pursuit of a win condition.
It would be nice to even have an upper bound on how much optimization pressure would be required to generate a fully benevolent government, and I currently don’t have even that :-(
I grant, from my current subjective position, that it could require infinite optimization pressure… that is to say, it could be that “a benevolent government” is like “a perpetual motion machine”?
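One standard way to quantify “optimization pressure” (offered as a hedged sketch; nothing downstream depends on this exact framing) is in bits, as a measure of how much a search process must narrow the space of possible designs:

$$\text{bits required} \approx -\log_2 \Pr(\text{a government design sampled at random is benevolent}).$$

On this framing, “requires infinite optimization pressure” is precisely the claim that benevolent designs form a measure-zero (or empty) subset of the design space, which is what would make the perpetual-motion analogy literal rather than rhetorical.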
Applying grit, as a meta-programming choice applied to my own character structures, I remain forcefully hopeful that “a win condition is possible at all”, despite the apparent empirical truth of some broadly Catharist summary of the evils of nearly all governments, and Darwinian evolution, and so on.
The only exceptions I’m quite certain about are sub-Dunbar social groupings among animals, which seem “net good”.
For example, a lion pride keeps a male lion around as a policy, despite the occasional mass killing of babies when a new male takes over. The cost in murdered babies is probably “worth it on net” compared to alternative policies where males are systematically driven out of a pride when they commit crimes, or females don’t even congregate into social groups.
Each pride is like a little country, and evolution would probably eliminate prides from the lion behavioral repertoire if the arrangement weren’t net useful, so this is a sort of existence proof of a limited and tiny government that is “clearly imperfect, but probably net good”.
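As a toy sketch of the “worth it on net” comparison that selection is implicitly running (every number below is an invented placeholder, not real ethological data):

```python
# Toy expected-value comparison of lion "pride policies".
# Every number here is an invented placeholder, NOT real ethological data.

POLICIES = {
    # name: (cubs_born_per_female_per_year, fraction_lost_to_infanticide, fraction_lost_to_outside_threats)
    "male_in_residence": (1.0, 0.05, 0.10),  # resident male kills ~5% of cubs at takeovers, but deters rivals
    "males_expelled":    (1.0, 0.00, 0.30),  # no infanticide, but far more losses to rival males and hyenas
    "no_social_group":   (0.8, 0.00, 0.45),  # solitary females: fewer cubs and much weaker defense
}

def surviving_cubs(policy: str) -> float:
    """Expected surviving cubs per female per year under a given policy."""
    born, infanticide, outside = POLICIES[policy]
    return born * (1 - infanticide) * (1 - outside)

for name in POLICIES:
    print(f"{name:17s} -> {surviving_cubs(name):.3f} surviving cubs/female/year")

# Under these made-up numbers "male_in_residence" wins (0.855 > 0.700 > 0.440)
# despite the infanticide cost, which is the shape of the claim that selection
# retains pride-keeping because it is net useful.
```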
((
In that case, of course, the utility function evolution has built these “emergent lion governments” to optimize for is simply “procreation”. Maybe that must be the utility function? Maybe you can’t add art or happiness or the-self-actualization-of-novel-persons-in-a-vibrant-community to that utility function and still have it work?? If someone proved it for real and got an “only one possible utility function”-result, it would fulfill some quite bleak lower level sorts of Wattsian predictions. And I can’t currently rigorously rule out this concern. So… yeah. Hopefully there can be benevolent governments AND these governments will have some budgetary discretion around preserving “politically useless but humanistically nice things”?
))
But in general, from beginnings like this small argument in favor of “lion government being net positive”, I think it might be possible to generate a sort of “inductive proof”:
1. Base case: “Simple governments can be worth even non-trivial costs (like ~5% of babies murdered on average, in waves of murderous purges (or whatever the net-tolerable taxation process of the government looks like))”, and also...
2. Inductive step (if N, then N+1): “When adding some social complexity to a ‘net worth it’ government (longer time rollout before deciding? more members in larger groups? deeper plies of tactical reasoning at each juncture by each agent?) the WORTH-KEEPING-IT property itself can be reliably preserved, arbitrarily, forever, using only scale-free organizing principles.”
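Spelled out as ordinary induction (a sketch of the proof shape only; defining the predicate rigorously is the entire unsolved problem): let $W(n)$ say that some government over a social arrangement of complexity $n$ is net worth keeping. The hoped-for theorem then has the shape

$$W(n_0) \;\wedge\; \big(\forall n \ge n_0 : W(n) \Rightarrow W(n+1)\big) \;\Longrightarrow\; \forall n \ge n_0 : W(n),$$

with the lion pride supplying the base case $W(n_0)$ and the “scale-free organizing principles” supplying the inductive step.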
So I would say that’s close to my current best argument for hope.
If we can start with something minimally net positive, and scale it up forever, getting better and better at including more and more concerns in fair ways, then… huzzah!
And that’s why grit seems like “not an insane thing to apply” to the pursuit of a win condition where a benevolent government could exist for all of Earth.
I just don’t have the details of that proof, nor the anthropological nor ethological nor historical data at hand :-(
The strong contrasting claim would be: maybe there is an upper bound. Maybe small packs of animals (or small groups of humans, or whatever) are the limit for some reason? Maybe there are strong constraints implying definite finitudes that limit the degree to which “things can be systematically Good”?
Maybe singletons can’t exist indefinitely. Maybe there will always be civil wars, always be predation, always be fraud, always be abortion, always be infanticide, always be murder, always be misleading advertising, always be cannibalism, always be agents coherently and successfully pursuing unfair allocations outside of safely limited finite games… Maybe there will always be evil, woven into the very structure of governments and social processes, as has been the case since the beginning of human history.
Maybe it is like that because it MUST be like that. Maybe it’s like that because of math. Maybe it is like that across the entire Tegmark IV multiverse: maybe “if persons in groups, then net evil prevails”?
I have two sketches for a proof that this might be true, because it is responsible and productive to keep sloshing back and forth between cognitive extremes (best- and worst-case plans, true and false hypotheses, etc.) that are justified by the data and by the ongoing attempt to reconcile the data.
Procedure: Try to prove X, then try to prove not-X, and then maybe spend some time considering Gödel and Turing with respect to X. Eventually some X-related conclusion will be produced! :-)
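Schematically (tongue in cheek: `try_to_prove` is a stand-in for actual research effort, not a real theorem prover):

```python
# Schematic sketch of the "prove X, then not-X, then consider undecidability" loop.
# try_to_prove() is a stand-in for actual research effort, not a real theorem prover.

from typing import Optional

def try_to_prove(claim: str, effort: int) -> Optional[str]:
    """Placeholder: return a proof if one is found within this effort budget."""
    return None  # real research goes here

def investigate(x: str, max_effort: int = 10) -> str:
    for effort in range(1, max_effort + 1):
        if try_to_prove(x, effort):
            return f"proved: {x}"
        if try_to_prove(f"not ({x})", effort):
            return f"refuted: {x}"
    # The Gödel/Turing step: after sustained failure in both directions,
    # take seriously that x may be independent of your axioms, or undecidable.
    return f"suspect independence or undecidability of: {x}"

print(investigate("a benevolent government is mathematically impossible"))
```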
I think I’d prefer not to talk too much about the proof sketches for the universal inevitability of evil among men.
I might be wrong about them, but they might also convince some of the audience, and that seems like it could be an infohazard? Maybe? And this response is already too large <3
But if anyone already has a proof of the inevitability of evil government, then I’d really appreciate them letting me know that they have one (possibly in private) because I’m non-trivially likely to find the proof eventually anyway, if such proofs exist to be found, and I promise to pay you at least $1000 for the proof, if proof you have. (Offer only good to the first such person. My budget is also finite.)