fit entire algorithms in one's head at once that would otherwise only be understandable in smaller chunks. Perhaps learning and expanding upon such notations could be valuable.
My first reaction was to wonder how this is any different from what already happens in pure math, theoretical physics, TCS, etc. Reflecting on this led to my second reaction: jargon brevity correlates with (utility × frequency), which is domain-specific (cf. Terry Tao's remarks on useful notation). Cross-domain work, by contrast, requires a lot of overhead (managing things like namespace collisions, and the more general version of that problem), and this overhead plausibly grows superlinearly with the number of domains. That overhead would show up in the language as the sort of thing the late Fields medalist Bill Thurston described when discussing the formalization of math:
Mathematics as we practice it is much more formally complete and precise than other sciences, but it is much less formally complete and precise for its content than computer programs. The difference has to do not just with the amount of effort: the kind of effort is qualitatively different. In large computer programs, a tremendous proportion of effort must be spent on myriad compatibility issues: making sure that all definitions are consistent, developing “good” data structures that have useful but not cumbersome generality, deciding on the “right” generality for functions, etc. The proportion of energy spent on the working part of a large program, as distinguished from the bookkeeping part, is surprisingly small. Because of compatibility issues that almost inevitably escalate out of hand because the “right” definitions change as generality and functionality are added, computer programs usually need to be rewritten frequently, often from scratch.
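(Relatedly, a crude way to gesture at why I'd expect the cross-domain overhead to be superlinear; this is purely my own back-of-the-envelope, not something Tao or Thurston claim: if every pair of domains you combine needs its definitions and conventions reconciled separately, then working across d domains involves on the order of

\binom{d}{2} = \frac{d(d-1)}{2} = \Theta(d^2)

pairwise reconciliations, i.e. quadratic rather than linear growth in d, and that is before counting any higher-order interactions.)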
In practice, the folks I'd trust most to have good opinions on how useful such notations-for-thought would be are breadth + detail folks (e.g. Gwern), people who've thought a lot about adjacent topics (e.g. Michael Nielsen and Bret Victor), and generalists who frequently correspond with experts (e.g. Drexler). I'd be curious to know what they think.