Why is it that mathematicians are confident about their results? It’s evident that they are highly confident. And it’s evident that they’re justified in being confident. Their results literally hold for the rest of time. They’re not going to flip in 100 years. So why is this the case?
Basically, there are a few stages of belief. Sometimes a belief is stated on its own without anything else. Sometimes a justification is given for that belief. And sometimes it’s explained why that justification is a reliable indicator of truth. Now, you may think you face an infinite regress here, but in practice you don’t: You eventually reach a point where your justifications are so trivially obvious that it almost feels silly to even list them as assumptions in your argument.
Voting systems have been figured out to the level of standard mathematical detail (i.e. belief + justification). And the methodology post I linked to you explains why justifications of the form they use are unambiguously correct in the world of policy. (Again, that series is not finished yet, but the only roadblock is making it entertaining to read; I’ve already figured out the mathematical details.)
So to me, arguing against a voting system change is like saying “Maybe there are a finite number of primes” or “Maybe this table I’m resting my arms on right now doesn’t actually exist”. I.e. these really are things that we can, for all intents and purposes, be certain of. And if you’re not certain of these basic things, we can’t really ever discuss anything productively.
It’s not a matter of the Dunning–Kruger effect; it’s that experts understand these problems well enough. You can find professors who specialise in voting theory and ask them. Ask them “Is there any chance that replacing the current presidential voting system with any of the most promising current alternatives will be a mistake in 100,000 years?” The amount of time is totally irrelevant when you understand a problem well enough. One plus one will always equal two.
Conversely, AI safety’s whole problem is that we don’t have anything like that. We have no confidence that we can control these systems. We have proposals, we have justifications for those proposals, but we have no reason to believe that those justifications reliably lead to truth.
To be clear, I’m not saying every policy problem is solved. But some policy problems are solved. (Or, in the case of voting theory, sufficiently solved as to far outperform the current system, and we know from Arrow’s impossibility theorem that no unknown system will blow our current proposals out of the water.) And establishing some of those policies is difficult because of short-term incentives. This delay tactic is a way to implement that specific subset of policies, and only that subset.
Denying this would require you to think that no such policies exist. Which would commit you to saying, “Hey, maybe the Saudi Arabian policy of cutting off a child-thief’s hand shouldn’t be revoked in 50 years. Who can say whether that’ll be a good policy at that point?”
We can be confident of mathematics because mathematics is precise and explicit, and exists independently of space, time, and people. Its truths are eternal and we can become arbitrarily certain of them.
This is not true of anything else.
The pure mathematics of voting systems, being mathematics, exists likewise, but its application to the physical world, like all applied mathematics, is contingent on the real world conforming to its ontology and its axioms.
“Is there any chance that replacing the current presidential voting system with any of the most promising current alternatives will be a mistake in 100,000 years?”
Even given a flourishing future for humanity, it seems vanishingly unlikely that the Presidency or the US will even exist in 100,000 years, or that anyone by then will care much what they were. I would not even bet on there being anything resembling a presidency or a political state after that passage of time, or on any positive conjecture about how our descendants would be living.
The pure mathematics of voting systems, being mathematics, exists likewise, but its application to the physical world, like all applied mathematics, is contingent on the real world conforming to its ontology and its axioms.
I never would have disputed this. But you’re being binary: basically “either we know it or we don’t”. It’s not that you’re wrong, it’s that your categories aren’t useful in practice. You’re implicitly bucketing things you’re 99.9% sure about with things that you’re 20% sure about.
In contrast, my view is that you should assign some credence to your set of assumptions being true. And given that, we can say that your credence in a valid logical argument’s conclusion must be at least as high as your credence in its assumptions taken together. (It can be strictly higher because, of course, there can be other sound arguments that support the same conclusion.)
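To make that concrete (with toy numbers of my own, not anything from this discussion): if the premises jointly entail the conclusion, then every world where all the premises hold is a world where the conclusion holds, so the conclusion’s credence can’t be lower. A quick sketch, assuming a made-up joint credence over two propositions:

```python
# Toy joint credence over worlds (p, q). The numbers are invented
# for illustration, not taken from the discussion.
credence = {
    (True,  True):  0.60,
    (True,  False): 0.05,
    (False, True):  0.15,
    (False, False): 0.20,
}

def prob(event):
    """Total credence of the worlds where `event` holds."""
    return sum(cr for world, cr in credence.items() if event(*world))

# Argument: from "p" and "p implies q", conclude "q" (modus ponens).
premises   = lambda p, q: p and ((not p) or q)  # both premises hold
conclusion = lambda p, q: q

# Premise-worlds are a subset of conclusion-worlds, so:
assert prob(conclusion) >= prob(premises)  # here: 0.75 >= 0.60
```

The inequality is just the subset relation between events; the extra conclusion-worlds (here, the 0.15 where q holds without p) are exactly what can push the conclusion’s credence strictly higher than the premises’.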
If you’re restricting your knowledge to known mathematical truths, you’re not going to make any government policies at all.
it seems vanishingly unlikely that the Presidency or the US will even exist in 100,000 years
Conceded, but that’s not a substantive issue for my argument. The electoral system of an office that doesn’t exist anymore hardly matters, does it? I only posed the 100,000-year question to illustrate my point. That underlying point still stands: We should be confident that changing the electoral system is good, no matter what the future holds. Or rather, that we should be as confident as we can be about any policy change.
On a scale of 100,000 years, it pretty much is binary. Mathematics will not change; neither will basic physical law (although some of it may come to be seen as limiting cases of more general ideas); little else can be counted on at that timescale. This feels somewhat analogous.
In the short term, of course things can be a lot more variable.
your credence in a valid logical argument’s conclusion must be at least as high as your credence in its assumptions taken together. (It can be strictly higher because, of course, there can be other sound arguments that support the same conclusion.)
The longer the chain of reasoning built on uncertain assumptions, the further it may drift from reality.
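To put rough numbers on that (made up by me, and assuming the assumptions are independent, which is itself optimistic): even if each assumption is held at 99% credence, the guaranteed floor on the conclusion, i.e. the chance that every assumption holds, decays geometrically with the length of the chain:

```python
def floor_credence(per_assumption: float, n_assumptions: int) -> float:
    """Lower-bound credence that all n independent assumptions hold."""
    return per_assumption ** n_assumptions

# With 99% credence per assumption, the floor erodes quickly:
for n in (1, 10, 50, 100):
    print(n, round(floor_credence(0.99, n), 3))
# e.g. floor_credence(0.99, 50) is roughly 0.605
```

The conclusion may still be true for other reasons, but the guarantee the argument itself provides shrinks with every additional uncertain assumption.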
That underlying point still stands: We should be confident that changing the electoral system is good, no matter what the future holds. Or rather, that we should be as confident as we can be about any policy change.
Why are you ignoring my actual point?
“As confident as we can be about any policy change” amounts to not very confident, especially so for making policy for 50 years hence.
I’ve given you quite a lot of thorough explanation as to why that position is wrong. I don’t think there’s any point discussing further.
Agreed.