If you disagree that these policies are actually beneficial in the long run, I’m sure you can think of policies that you like that have long-run benefits and short-run costs.
And other people will disagree with those. No policy can be known to have long-run benefits until it has actually had a long run.
There is no way to make laws for 100 years hence, because we do not know what the world will be like then. Some even expect a singularity before 2120. Could anyone in 1920 make laws for the 2020s? Any attempt from back then would be regarded as laughable today. Who can say, at last we know the truth of how people should behave, and all we have to do is force them to do it? Only fanatics who would destroy the world and call it peace.
I think it’s rather unfair to classify me as a confidently underinformed fanatic. I’ve worked in the federal government and at the country’s largest bank, and I’m now an investment analyst at a large fund. High confidence usually indicates overconfidence, sure, but that correlation breaks down when someone really has thought deeply about a topic. Mathematicians, for instance, are nearly 100% confident in long-established, peer-reviewed results.
I’ve written quite extensively on optimal policy methodology. Have a read here: The Benevolent Ruler’s Handbook (Part 1): The Policy Problem. As I said in one of my other comments, “As for the 1923 question, I’d say we didn’t have a theoretical foundation for what makes a policy optimal. Given that, there is no policy I would have tried to advocate for in this way (even though the land value tax was invented before 1879). The article that I linked you to contains my attempt to lay those theoretical foundations (or the start of it, anyway; I haven’t finished it yet).”
Once you have these foundations, you can say things like “I know this policy is optimal, and will continue to be so”.
I think it’s rather unfair to classify me as a confidently underinformed fanatic.
I’m sure you’re informed. :)
We just have to ensure the policies we propose are actually good … and have a large barrier to reversal
How are you going to know that 100 (or even 50) years in advance of them ever being implemented? This looks to me like solving AGI alignment by saying “we just have to ensure the AGI actually does what we want and stop anyone turning it off.”
I’m taking the outside view here and observing that knowing how society should work and making everyone do that has always worked out badly before. What would be different this time?
Why is it that mathematicians are confident about their results? It’s evident that they are highly confident. And it’s evident that they’re justified in being confident. Their results literally hold for the rest of time. They’re not going to flip in 100 years. So why is this the case?
Basically, there are a few stages of belief. Sometimes a belief is stated on its own, without anything else. Sometimes a justification is given for that belief. And sometimes it’s explained why that justification is a reliable indicator of truth. Now, you may think this creates an infinite regress, but in practice it doesn’t: you eventually reach a point where your justification is so trivially obvious that it almost feels silly to even list it as an assumption in your argument.
Voting systems have been figured out to the level of standard mathematical detail (i.e. belief + justification). And the methodology post I linked you to explains why justifications of the form they use are unambiguously correct in the world of policy. (Again, that series is not finished yet, but the only roadblock is making it entertaining to read; I’ve already figured out the mathematical details.)
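To make concrete what “standard mathematical detail” means here, a minimal sketch in Python of one textbook voting-theory computation: finding the Condorcet winner of a profile of ranked ballots. (The function and ballot format are made up for this comment, not taken from my post.)

```python
from itertools import combinations

def condorcet_winner(ballots):
    """Return the candidate who beats every other candidate in
    pairwise head-to-head majorities, or None if no such candidate
    exists (Condorcet cycles are possible).

    Each ballot is a list of candidates ordered from most to least
    preferred; every ballot must rank all candidates.
    """
    candidates = set(ballots[0])
    # wins[c] counts the rivals that c beats head-to-head.
    wins = {c: 0 for c in candidates}
    for a, b in combinations(candidates, 2):
        # a beats b on a ballot iff a appears earlier than b.
        a_over_b = sum(1 for ranking in ballots
                       if ranking.index(a) < ranking.index(b))
        if a_over_b * 2 > len(ballots):
            wins[a] += 1
        elif a_over_b * 2 < len(ballots):
            wins[b] += 1
    for c in candidates:
        if wins[c] == len(candidates) - 1:
            return c
    return None  # no Condorcet winner (a cycle, or ties)

# Example: 5 voters, 3 candidates; B beats both A and C head-to-head.
ballots = [["A", "B", "C"], ["B", "C", "A"], ["B", "A", "C"],
           ["C", "B", "A"], ["A", "B", "C"]]
print(condorcet_winner(ballots))  # -> "B"
```

A claim like “B is the Condorcet winner of this profile” is checkable in the same way any arithmetic claim is; that is the kind of certainty I mean.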
So to me, arguing against a voting system change is like saying “Maybe there are a finite number of primes” or “Maybe this table I’m resting my arms on right now doesn’t actually exist”. I.e. these really are things that we can, for all intents and purposes, be certain of. And if you’re not certain of these basic things, we can’t really ever discuss anything productively.
It’s not a matter of the Dunning–Kruger effect; it’s that experts understand these problems well enough. You can find professors who specialise in voting theory and ask them. Ask them “Is there any chance that replacing the current presidential voting system with any of the most promising current alternatives will be a mistake in 100,000 years?” The amount of time is totally irrelevant when you understand a problem well enough. One plus one will always equal two.
Conversely, AI safety’s whole problem is that we don’t have anything like that. We have no confidence that we can control these systems. We have proposals, we have justifications for those proposals, but we have no reason to believe that those justifications reliably lead to truth.
To be clear, I’m not saying every policy problem is solved. But some policy problems are solved. (Or, in the case of voting theory, sufficiently solved to far outperform the current system, and we know from Arrow’s impossibility theorem (statement below) that no unknown system will blow our current proposals out of the water.) And establishing some of those policies is difficult because of short-term incentives. This delay tactic is a way to implement that specific subset of policies, and only that subset.
Denying this would require you to think that no such policies exist, which would commit you to saying, “Hey, maybe the Saudi Arabian policy of cutting off a child-thief’s hand shouldn’t be revoked in 50 years. Who can say whether that’ll be a good policy at that point?”
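For anyone who hasn’t seen it, here is the standard statement of the theorem (this is the textbook formulation; nothing in it is specific to my post):

```latex
\textbf{Arrow's impossibility theorem.} With $|A| \ge 3$ alternatives,
no social welfare function $F : L(A)^n \to L(A)$, mapping the $n$
voters' strict rankings of $A$ to a single social ranking, satisfies
all four of:
\begin{itemize}
  \item \emph{Unrestricted domain}: $F$ is defined on every profile
        of rankings;
  \item \emph{Pareto}: if every voter ranks $x$ above $y$, the social
        ranking does too;
  \item \emph{Independence of irrelevant alternatives}: the social
        ranking of $x$ versus $y$ depends only on how voters rank
        $x$ versus $y$;
  \item \emph{Non-dictatorship}: no single voter's ranking determines
        the social ranking regardless of everyone else's.
\end{itemize}
```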
We can be confident of mathematics because mathematics is precise and explicit, and exists independently of space, time, and people. Its truths are eternal and we can become arbitrarily certain of them.
This is not true of anything else.
The pure mathematics of voting systems, being mathematics, exists likewise, but its application to the physical world, like all applied mathematics, is contingent on the real world conforming to its ontology and its axioms.
“Is there any chance that replacing the current presidential voting system with any of the most promising current alternatives will be a mistake in 100,000 years?”
Even given a flourishing future for humanity, it seems vanishingly unlikely that the Presidency or the US will even exist in 100,000 years, or that anyone by then will care much what they were. I would not even bet on there being anything resembling a presidency or a political state after that passage of time, or on any positive conjecture about how our descendants would be living.
The pure mathematics of voting systems, being mathematics, exists likewise, but its application to the physical world, like all applied mathematics, is contingent on the real world conforming to its ontology and its axioms.
I never would have disputed this. But you’re being binary: basically “either we know it or we don’t”. It’s not that you’re wrong, it’s that your categories aren’t useful in practice. You’re implicitly bucketing things you’re 99.9% sure about with things that you’re 20% sure about.
In contrast, my view is that you should assign some credence to your set of assumptions being true. And given that, we can say that your credence in a valid logical argument’s conclusion must be at least as high as your credence in the conjunction of its assumptions. (It can be higher because, of course, there can be other sound arguments that also support the conclusion.)
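In symbols, this is just monotonicity of probability: if the premises jointly entail the conclusion, every world where all the premises hold is a world where the conclusion holds, so

```latex
(A_1 \wedge \dots \wedge A_n) \models C
\quad\Longrightarrow\quad
P(C) \;\ge\; P(A_1 \wedge A_2 \wedge \dots \wedge A_n).
```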
If you’re restricting your knowledge to known mathematical truths, you’re not going to make any government policies at all.
it seems vanishingly unlikely that the Presidency or the US will even exist in 100,000 years
Conceded, but that’s not a substantive objection to my argument. The electoral system of an office that doesn’t exist anymore hardly matters, does it? I only posed the 100,000-year question to illustrate my point. That underlying point still stands: We should be confident that changing the electoral system is good, no matter what the future holds. Or rather, that we should be as confident as we can be about any policy change.
On a scale of 100,000 years, it pretty much is binary. Mathematics will not change; nor will basic physical law (although some of it may come to be seen as limiting cases of more general ideas); little else can be relied on over that timescale. This feels somewhat analogous.
In the short term, of course things can be a lot more variable.
your credence in a valid logical argument’s conclusion must be at least as high as your credence in its assumptions. (It’s higher because, of course, there can be other sound logical arguments that support your conclusion.)
The longer the chain of reasoning built on uncertain assumptions, the further it may drift from reality.
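To put illustrative numbers on that (made up, and assuming the assumptions are independent): grant each of $n$ assumptions a credence of 0.95, and the conjunction a long argument rests on still decays geometrically:

```latex
P(A_1 \wedge \dots \wedge A_n) = 0.95^{\,n}, \qquad
0.95^{10} \approx 0.60, \qquad 0.95^{50} \approx 0.08.
```

Correlations between the assumptions change the numbers, not the direction.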
That underlying point still stands: We should be confident that changing the electoral system is good, no matter what the future holds. Or rather, that we should be as confident as we can be about any policy change.
Why are you ignoring my actual point?
“As confident as we can be about any policy change” amounts to not very confident, especially so for making policy for 50 years hence.
I’ve given you quite a lot of thorough explanation as to why that position is wrong. I don’t think there’s any point discussing further.
Agreed.