And while we’re at it, I think that when we teach arithmetic, we should be representing negative numbers using complements, like computers do. It makes negation a little more difficult, but still pretty easy, and eliminates subtraction as a distinct concept from negation and addition, along with the entire subtraction table.
So, for example, a −2 is like saying 0 − 2. We’re used to having a weird unnecessary special case when going below zero, but if we mechanically use the normal subtraction rules, realizing that there are an infinite number of implicit leading zeros for these numbers, we see it’s really a case of …000 − 2. Normally when subtracting digit-by-digit, you borrow one from the next digit, e.g. 100 − 002 becomes 108 − 010, then 198 − 100, then 098 − 000, or just 98. So for …000 − 2 the borrowing continues indefinitely and it becomes …9998. Add two to this and it flips all the nines, so we’re back to zero. You can represent any negative number this way and then “subtraction” is just normal addition.
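To make the borrow-forever picture concrete, here’s a minimal sketch (mine, not from the comment above) that uses Python’s floor division, which rounds toward minus infinity, as a stand-in for the infinite string of leading nines, viewed through a fixed-width window:

```python
def complement_digits(n, base=10, width=8):
    # Digits of n, most significant first, in "infinite complement" form.
    # Python's % and // floor toward minus infinity, so a negative n
    # bottoms out at -1, which plays the role of the endless ...999 prefix.
    digits = []
    for _ in range(width):
        digits.append(n % base)
        n //= base
    return ''.join(str(d) for d in reversed(digits))

window = int(complement_digits(-2))   # 99999998, a finite view of ...9998
print((window + 2) % 10**8)           # 0: adding two flips every nine and the carry falls off the end
```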
This seems super annoying when you start dealing with more abstract math: while it’s plausibly more intuitive as a transition into finite fields (thinking specifically of quadratic residues, for example), it would really, really suck for graphing, functions, calculus, or any sort of coefficient-based work. It also sounds tremendously annoying for conceptualizing bases/field-adjoins/sigma notation.
Maybe it’s not better. I could be wrong. My opinion is weakly held. But I’m talking about eliminating the arithmetic of subtraction, not eliminating the algebra of negation. You’d still have a minus sign you can do algebra with, but it would be strictly unary. I don’t see high-school algebra changing very much with that. We’d have some more compact notation to represent the leading …999, maybe I’ll use ^ for now. So you can still write −2 for algebra, it just simplifies to ^8 for when you need to do arithmetic on it. And instead of writing x − y, you write x + −y. Algebra seems about the same to me. Maybe a little easier since we lost a superfluous non-commutative operator.
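For illustration, here’s one way that ^ shorthand could be produced mechanically; the function name and the keep-at-least-one-digit convention are my own assumptions, since the notation above is only fixed by example (−2 simplifying to ^8):

```python
def caret_form(n, base=10):
    # Hypothetical "^" shorthand: "^" stands for the infinite run of
    # (base - 1) digits, so in base ten -2 -> "^8", -45 -> "^55", -1 -> "^9".
    # Digits are assumed to be single characters (base <= 10).
    if n >= 0:
        raise ValueError("only negative numbers need the ^ prefix")
    digits = []
    while n != -1:                 # floor division bottoms out at the ...999 prefix
        digits.append(n % base)
        n //= base
    if not digits:
        digits.append(base - 1)    # n was -1 itself: ...999, written here as ^9
    return '^' + ''.join(str(d) for d in reversed(digits))

print(caret_form(-2))    # ^8
print(caret_form(-45))   # ^55   (...9955 + 45 carries away to ...0000)
```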
In base six, in complement form, the ^ now represents the leading …555, so a number line would look like
… ^43 ^44 ^45 ^0 ^1 ^2 ^3 ^4 ^5 0 1 2 3 4 5 10 …
i.e. all the numbers increment forwards instead of flipping at zero. You can plug these x-values into a y= formula for graphing, and it would seem to work the same. Multiplication still works on complements. Computers do integer multiplies using two’s complement binary numbers.
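That last claim is easy to check directly; here’s a small sketch (mine, with an arbitrary 8-bit width) showing that multiplying the two’s-complement bit patterns and masking gives the same answer as multiplying the signed values:

```python
BITS = 8
MASK = (1 << BITS) - 1

def encode(n):        # two's-complement bit pattern of n, as a plain int
    return n & MASK

def decode(p):        # interpret a bit pattern back as a signed value
    return p - (1 << BITS) if p & (1 << (BITS - 1)) else p

a, b = -3, 5
product = (encode(a) * encode(b)) & MASK    # 253 * 5 = 1265 -> 241 after masking
print(decode(product), a * b)               # -15 -15
```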
Maybe a concrete example of why you think graphing would be more difficult would help me understand where you’re coming from.
Sorry for the lack of clarity: I’m not talking about high school algebra, I’m talking about abstract algebra. I guess if we’re writing −2 as a simplification, that’s fine, but seems to introduce a kind of meaningless extra step. I don’t quite understand the “special cases” you’re talking about, because it seems to me that you can eliminate subtraction without doing this? In fact, for anything more abstract than calculus, that’s standard: groups, for example, don’t have subtraction defined (usually) other than as the addition of the inverse.
I guess if we’re writing −2 as a simplification, that’s fine, but seems to introduce a kind of meaningless extra step
…9997 is the simplified form of −3, not the other way around, in the same sense that 0.333… is the simplified form of 3^(-1).
Why do we think it’s consistent that we can express a multiplicative inverse without an operator, but we can’t do the same for an additive inverse? A number system with complements can express a negative number on its own, rather than requiring you to express it in terms of a positive number and an inversion operator, but you still need the operator for other reasons. ^8 seems no more superfluous as an additive inverse of 2 than 0.5 is as its multiplicative inverse. Either both are superfluous, or neither is.
it seems to me that you can eliminate subtraction without doing this? In fact, for anything more abstract than calculus, that’s standard: groups, for example, don’t have subtraction defined (usually) other than as the addition of the inverse.
That was kind of my point, as far as the algebra is concerned—subtraction, fundamentally, is a negate and add, not a primitive. But I was talking about children doing arithmetic, and they can do it the same way. Teach them how to do negation (using complements, not tacking on a sign) instead of subtraction, and you’re done. You never have to memorize the subtraction table.
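A sketch of what that looks like as a procedure (my framing, with “width” just a window wide enough for the numbers at hand): negation is a digit flip plus one, and after that everything is the addition table:

```python
def negate(n, width=8):
    # Ten's complement: flip every digit to (9 - digit), then add one.
    # No borrowing anywhere; just single-digit "distance to nine" facts.
    digits = str(n).zfill(width)
    flipped = ''.join(str(9 - int(d)) for d in digits)
    return (int(flipped) + 1) % 10**width

def subtract(a, b, width=8):
    # a - b as plain addition, dropping the carry that falls off the window.
    return (a + negate(b, width)) % 10**width

print(subtract(52, 7))    # 45
print(subtract(7, 52))    # 99999955, i.e. the complement form of -45
```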
Ok, I think I understand our crux here. In the fields of math I’m talking about, 3^(-1) is a far better way to express the multiplicative inverse of 3, simply because it’s not dependent on any specific representation scheme and immediately carries the relevant meaning. I don’t know enough about the pedagogy of elementary school math to opine on that.