We’re still teaching children arithmetic in base ten. The objectively correct base is six.
I’m serious. This is not just my opinion. The base we choose had better be a whole number, and the size of the multiplication table we have to memorize grows roughly quadratically with the size of the base (although there are regularities that can sometimes make this easier), so it has to be a fairly small whole number. There just aren’t that many choices, and we’ve checked them all. Base six is optimal.
The four smallest primes {2, 3, 5, 7} are all either divisors or nearest neighbors of six, which gives the system a lot of nice properties, like easy divisibility tests.
The single-digit addition and multiplication tables are much smaller. The 0s and 1s are trivial, so that leaves two 4×4 tables, but because these operations are commutative, there are only ten “facts” to memorize for each table, instead of the 36 facts per table required for decimal.
More small-denominator fractions have a short or short-repeating seximal representation than in any other small base (see the sketch after this list).
You can easily count up to five-five (35 in decimal) on your fingers, using one hand for the sixes digit and the other for the ones, instead of just to ten.
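Both the table-size claim and the fraction claim above are easy to check mechanically. Here’s a quick sketch (the scoring is my own crude formalization: facts are counted over the nontrivial digits 2 through base−1, and “short or short-repeating” is read as the total repeating-cycle length of 1/2 through 1/10, with 0 meaning the fraction terminates):

```python
from math import gcd

def facts(base: int) -> int:
    """Unique commutative facts among digits 2..base-1 (pairs plus squares)."""
    d = base - 2
    return d * (d + 1) // 2

def period(n: int, base: int) -> int:
    """Length of the repeating cycle of 1/n in `base` (0 if it terminates)."""
    while (g := gcd(n, base)) > 1:   # strip factors shared with the base
        n //= g
    if n == 1:
        return 0
    k, p = 1, base % n               # period = multiplicative order of base mod n
    while p != 1:
        p = p * base % n
        k += 1
    return k

for b in range(2, 13):
    score = sum(period(n, b) for n in range(2, 11))
    print(f"base {b:2}: {facts(b):2} facts per table, total cycle length {score}")
```

On this scoring, base six posts the smallest total cycle length of any base up through twelve; only the very small bases beat its ten-fact tables, and they pay for it in digit count.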
So why aren’t we all using seximal? The decimal system is, to put it mildly, extremely entrenched. We developed it before we knew better. Path dependency. Almost all of our books and devices show numbers in decimal. Our money is decimal. All of our metric prefixes are based on powers of ten. Everybody who’s learned arithmetic already did so in decimal. If you want to talk to others about numbers, you pretty much have to do it in decimal. We’ve calibrated our numeric intuitions in decimal. Learning decimal is easier than learning decimal and seximal, so almost nobody learns seximal. Having two different bases in wide use seems like it would make us worse off, but that’s what we’d have to go through, since there’s no emperor of the world to accomplish it by fiat.
If we wrote numbers in two different bases for a while, we’d have to distinguish them somehow, but if we only tag the seximal, then it feels second-class. I think we’d be better off writing seximal using completely different symbols to minimize interference, like ٤٢ instead of 42₆ or 0s42.
And while we’re at it, I think that when we teach arithmetic, we should be representing negative numbers using complements like computers do. It makes negation a little more difficult, but still pretty easy, and eliminates subtraction as a distinct concept from negation and addition, along with the entire subtraction table.
So, for example, −2 is like saying 0−2. We’re used to having a weird unnecessary special case when going below zero, but if we mechanically apply the normal subtraction rules, remembering that these numbers carry an infinite string of implicit leading zeros, we see it’s really a case of …000−2. Normally when subtracting digit-by-digit, you borrow from the next digit, e.g. 100−002 becomes 108−010, then 198−100, then 098−000, or just 98. For …000−2 the borrowing continues indefinitely, and the result is …998, with nines running on forever to the left. Add two to this and it flips all the nines, so we’re back to zero. You can represent any negative number this way, and then “subtraction” is just normal addition.
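Here’s a minimal sketch of that procedure (the fixed WIDTH is my stand-in for the infinite string of leading digits, and the helper names are mine): a negative number is stored as its complement, and subtraction becomes plain addition with the carry off the top discarded.

```python
# Ten's-complement arithmetic in a fixed width (WIDTH stands in for the
# infinite string of leading 9s; names are my own).

WIDTH = 3

def complement(n: int) -> int:
    """Ten's complement of n within WIDTH digits, e.g. 2 -> 998."""
    return (10**WIDTH - n) % 10**WIDTH

def add(a: int, b: int) -> int:
    """Ordinary addition, dropping any carry past the top digit."""
    return (a + b) % 10**WIDTH

print(add(0, complement(2)))    # 998, the ...998 from the text
print(add(998, 2))              # 0: adding two flips all the nines
print(add(100, complement(2)))  # 98: i.e. 100 - 2 done as pure addition
```

Dropping the carry off the top digit is exactly what the endless borrow and flipped nines do in the infinite-digit picture.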
This seems super annoying when you start dealing with more abstract math: while it’s plausibly more intuitive as a transition into finite fields (thinking specifically of quadratic residues, for example), it would really, really suck for graphing, functions, calculus, or any sort of coefficient-based work. It also sounds tremendously annoying for conceptualizing bases/field-adjoins/sigma notation.
Maybe it’s not better. I could be wrong. My opinion is weakly held. But I’m talking about eliminating the arithmetic of subtraction, not eliminating the algebra of negation. You’d still have a minus sign you can do algebra with, but it would be strictly unary. I don’t see high-school algebra changing very much with that. We’d have some more compact notation to represent the leading …999; maybe I’ll use ^ for now. So you can still write −2 for algebra; it just simplifies to ^8 when you need to do arithmetic on it. And instead of writing x−y, you write x+−y. Algebra seems about the same to me. Maybe a little easier, since we lost a superfluous non-commutative operator.
In base six, in complement form, the ^ now represents a leading …555, so a number line would look like
… ^43 ^44 ^45 ^0 ^1 ^2 ^3 ^4 ^5 0 1 2 3 4 5 10 …
i.e. all the numbers increment forwards instead of flipping at zero. You can plug these x-values into a y= formula for graphing, and it would seem to work the same. Multiplication still works on complements; computers already do integer multiplies on two’s-complement binary numbers.
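To see that concretely, here’s a small check (demo mine, done in decimal ten’s complement rather than binary two’s complement, but the mechanism is the same): multiply the complement forms, keep only the low digits, and the product comes out already in complement form.

```python
# Multiplication straight through on complement forms (ten's complement,
# three digits; WIDTH and helper names are my own).

WIDTH = 3

def to_complement(n: int) -> int:
    """Encode a possibly-negative integer as a WIDTH-digit complement."""
    return n % 10**WIDTH

def mul(a: int, b: int) -> int:
    """Multiply complement-form numbers, discarding overflow digits."""
    return (a * b) % 10**WIDTH

print(mul(to_complement(-2), to_complement(3)))   # 994 = complement of 6, i.e. -6
print(mul(to_complement(-2), to_complement(-3)))  # 6, since the negatives cancel
```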
Maybe a concrete example of why you think graphing would be more difficult would help me understand where you’re coming from.
Sorry for the lack of clarity: I’m not talking about high school algebra, I’m talking about abstract algebra. I guess if we’re writing −2 as a simplification, that’s fine, but it seems to introduce a kind of meaningless extra step. I don’t quite understand the “special cases” you’re talking about, because it seems to me that you can eliminate subtraction without doing this? In fact, for anything more abstract than calculus, that’s standard: groups, for example, don’t have subtraction defined (usually) other than as the addition of the inverse.
I guess if we’re writing −2 as a simplification, that’s fine, but it seems to introduce a kind of meaningless extra step
…997 is the simplified form of −3, not the other way around, in the same sense that 0.333… is the simplified form of 3⁻¹.
Why do we think it’s consistent that we can express a multiplicative inverse without an operator, but we can’t do the same for an additive inverse? A number system with complements can express a negative number on its own, rather than requiring you to express it in terms of a positive number and an inversion operator, but you still need the operator for other reasons. ^8 seems no more superfluous as an additive inverse of 2 than 0.5 is as its multiplicative inverse. Either both are superfluous, or neither is.
it seems to me that you can eliminate subtraction without doing this? In fact, for anything more abstract than calculus, that’s standard: groups, for example, don’t have subtraction defined (usually) other than as the addition of the inverse.
That was kind of my point, as far as the algebra is concerned—subtraction, fundamentally, is a negate and add, not a primitive. But I was talking about children doing arithmetic, and they can do it the same way. Teach them how to do negation (using complements, not tacking on a sign) instead of subtraction, and you’re done. You never have to memorize the subtraction table.
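As a sketch of what that procedure looks like (framing and helper names mine): to negate, a child pairs each digit with the digit that makes nine, then adds one. That’s the whole trick.

```python
# Digit-wise negation the way a child could do it: nines' complement each
# digit, then add one. No subtraction table needed beyond
# "which digit makes nine with this one?"

def negate(digits: str) -> str:
    width = len(digits)
    nines = "".join(str(9 - int(d)) for d in digits)        # 002 -> 997
    return str((int(nines) + 1) % 10**width).zfill(width)   # 997 + 1 -> 998

print(negate("002"))  # 998, the complement form of -2
print(negate("098"))  # 902, the complement form of -98
print(negate("998"))  # 002, negating twice gets you back
```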
Ok, I think I understand our crux here. In the fields of math I’m talking about, 3^(-1) is a far better way to express the multiplicative inverse of 3, simply because it’s not dependent on any specific representation scheme and immediately carries the relevant meaning. I don’t know enough about the pedagogy of elementary school math to opine on that.
This video makes a compelling case for base two as optimal (not just for computers, but for humans), which I had dismissed out of hand as unworkable due to the number of digits required. The more compact notation with digit groupings it demonstrates gives binary all the advantages of quaternary, octal, or hexadecimal, while binary’s extreme simplicity makes many manual calculation algorithms much easier than even seximal. I’m not convinced the spoken variant is adequate, but perhaps it could be improved upon.
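For a concrete feel for the grouping trick (illustration mine; the video’s own notation may differ), the same bit string can be read off directly in quaternary, octal, or hexadecimal just by regrouping its digits:

```python
# One binary string, read in groups of 1, 2, 3, or 4 bits.

n = 0b101101110

def regroup(n: int, bits: int) -> str:
    """Read n's binary digits in groups of `bits`, one digit per group."""
    digits = []
    while n:
        digits.append("0123456789abcdef"[n & ((1 << bits) - 1)])
        n >>= bits
    return "".join(reversed(digits)) or "0"

print(regroup(n, 1))  # 101101110 (binary)
print(regroup(n, 2))  # 11232     (quaternary)
print(regroup(n, 3))  # 556       (octal)
print(regroup(n, 4))  # 16e       (hexadecimal)
```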