The other big-endian algorithm is the one I observe myself usually using. For “321”, it is:
Count the digits (three), and convert that into an order of magnitude (one hundred). (Running total: ??? hundred ???.)
Read the first digit, multiply it by its order of magnitude (one hundred), and add it to the total. (Running total: three hundred ???.)
Read the second digit, multiply it by its order of magnitude (ten), and add it to the total. (Running total: three hundred and twenty ???.)
Read the third digit, multiply it by its order of magnitude (one), and add it to the total. (Arriving at three hundred and twenty-one.)
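Here is a minimal Python sketch of those steps (the name parse_big_endian and the digit-string input are just illustrative choices):

```python
def parse_big_endian(digits: str) -> int:
    """Read a decimal string the way the steps above describe:
    count the digits to find the leading place value, then walk
    left to right, adding digit * place_value to a running total."""
    place_value = 10 ** (len(digits) - 1)  # for "321" this is one hundred

    total = 0
    for ch in digits:                      # read left to right
        total += int(ch) * place_value     # 3 * 100, then 2 * 10, then 1 * 1
        place_value //= 10                 # move one place to the right
    return total

assert parse_big_endian("321") == 321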
I generally agree, except I find words like “multiply” and “add” a bit misleading to use in this context. If I read a number like 3,749,328, then it’s not like I take 3 million, and then take 7, multiply by 100,000, and get 700,000, and then perform a general-purpose addition operation and compute the subtotal of 3,700,000. First of all, “multiply by 100,000” is generally more like “Shift left by 5 (in our base-10 representation)”; but moreover, the whole operation is more like a “Set the nth digit of the number to be this”. If this were a computer working in base 2, “set nth digit” would be implemented as “mask out the nth bit of the current number [though in this case we know it’s already 0 and can skip this step], then take the input bit, shift left by n, and OR it with the current number”.
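For concreteness, here is a small Python sketch of that base-2 “set the nth digit” operation; set_nth_bit is just an illustrative name for the mask/shift/OR sequence described above:

```python
def set_nth_bit(current: int, n: int, bit: int) -> int:
    """Clear the nth bit (skippable when it is known to be 0 already),
    then shift the input bit into place and OR it in."""
    current &= ~(1 << n)         # mask out the nth bit of the current number
    return current | (bit << n)  # shift the input bit left by n and OR it in

# Building 0b101 one "digit" at a time, most significant first:
x = 0
x = set_nth_bit(x, 2, 1)  # 0b100
x = set_nth_bit(x, 1, 0)  # 0b100
x = set_nth_bit(x, 0, 1)  # 0b101
assert x == 0b101
```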
(In this context I find it a bit misleading to say that “One hundred plus twenty yields one hundred and twenty” counts as performing an addition operation, any more than “x plus y yields x+y” does. Because 100, by place-value notation, means 1 * 100, and 20 means 2 * 10, and 120 means 1 * 100 + 2 * 10, you really are just restating the input.)
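As a quick Python illustration of that point: when the addends occupy disjoint decimal places, no carries ever occur, so the “sum” is just each digit written into its slot:

```python
# The place-value reading of "120": each term has a different nonzero place,
# so adding them never produces a carry.
terms = [1 * 100, 2 * 10, 0 * 1]
assert sum(terms) == 120

# The same number built by digit placement rather than arithmetic:
assert int("".join(str(d) for d in (1, 2, 0))) == 120
```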
Also, I might switch the order of the first two steps in practice. “Three … [pauses to count digits] million, seven hundred forty-nine thousand, …”.