Math notation is optimized for doing math, not learning math. Once you’ve internalized what P(A|B) is, you know what it means at a glance, and when you look at a large equation, you’re more interested in the structure of the whole thing than in the identities of its constituents (because abstracting the details away and getting results based only on structure is what algebra is).
Perhaps math really needs multiple “programming languages”. One for teaching, one for higher level stuff...
People who do novel math often invent their own notation—it’s sort of like domain-specific languages written on top of LISP.
Teaching-oriented programming languages haven’t proven to be a popular idea; e.g., MIT and Berkeley moved their famous introductory CS classes from Scheme to Python (because Python is actually used, even though it is a much less elegant language).
I think historically math started with longer variable names, but it wasn’t so convenient. Compare:
3x^2 + 4x + 1 = 0
with
three times the square of a value, and four times the value, and one, equals nothing
The latter may be easier to read, but the former is easier to divide by x+1.
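To make the divisibility claim concrete, here is the division worked out in symbols (something that would be painful to carry out in the verbal form):

```latex
% The quadratic factors, so dividing by (x + 1) is immediate:
3x^2 + 4x + 1 = (3x + 1)(x + 1)
% hence (3x^2 + 4x + 1)/(x + 1) = 3x + 1,
% and the equation's roots are x = -1/3 and x = -1.
```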
https://en.wikipedia.org/wiki/Variable_(mathematics)#Genesis_and_evolution_of_the_concept
Also, being very frugal with token length seems to have been a thing into the 1960s; see Unix, e.g. “ls -l” instead of the far more eye-friendly “list -long”. I don’t exactly understand why, but apparently readability wasn’t really a priority until about, say, 1995, when more and more programmers said fsck Perl with its unreadably frugal letter soup and used stuff like Python, where things are expressed in actual words.
I guess there are good reasons behind it. I still don’t have to like it.
To get the link
https://en.wikipedia.org/wiki/Variable_(mathematics)#Genesis_and_evolution_of_the_concept
use the following code in your comment:
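The code itself appears to have been lost here; reconstructed from the escaping rules described in the next paragraph (backslash-escape underscores in the link text, and the closing round bracket in the URL), it would presumably look like this:

```markdown
[https://en.wikipedia.org/wiki/Variable\_(mathematics)#Genesis\_and\_evolution\_of\_the\_concept](https://en.wikipedia.org/wiki/Variable_(mathematics\)#Genesis_and_evolution_of_the_concept)
```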
See Comment formatting/Escaping special symbols on the wiki for more details (I’ve backslash-escaped the underscores _ in the text part of the link to keep them from turning the surrounding text into italics, and the closing round bracket in the URL part of the link to avoid its being interpreted as the end of the URL).