[Thoughts ↦ speech-code ↦ text-code] just seems like a convoluted/indirect learning-path. Speech has been optimised directly (although very gradually over thousands of years) to encode thoughts, whereas most orthographies are optimised to encode sounds. The symbols are optimised only via piggy-backing on the thoughts↦speech code—like training a language-model indirectly via NTP (next-token prediction) on [the output of an architecturally-different language-model trained via NTP on human text].
(In the conlang-orthography I aspire to make with AI-assistance, graphemes don’t try to represent sounds at all. So sort of like a logogram but much more modular & compact.)
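To make the analogy concrete, here's a rough toy sketch (a hypothetical PyTorch setup of my own, with made-up toy models, not a real experiment): `model_a` plays the role of speech, trained via NTP directly on the source text, while `model_b` plays the role of the orthography, trained via NTP only on samples from `model_a`.

```python
# Toy sketch of the indirect-training analogy (hypothetical setup, not a real
# experiment): model_a ("speech") learns via next-token prediction (NTP) on the
# source text; model_b ("orthography") learns via NTP only on samples from
# model_a, so its grip on the source is mediated entirely by model_a's code.
import torch
import torch.nn as nn
import torch.nn.functional as F

torch.manual_seed(0)
vocab, ctx = 32, 16

def ntp_loss(model, tokens):
    # Standard next-token prediction: predict tokens[:, 1:] from tokens[:, :-1].
    logits = model(tokens[:, :-1])
    return F.cross_entropy(logits.reshape(-1, vocab), tokens[:, 1:].reshape(-1))

class RNNLM(nn.Module):
    # Stage-1 model ("speech"): a GRU language model.
    def __init__(self):
        super().__init__()
        self.emb = nn.Embedding(vocab, 64)
        self.rnn = nn.GRU(64, 64, batch_first=True)
        self.head = nn.Linear(64, vocab)
    def forward(self, x):
        h, _ = self.rnn(self.emb(x))
        return self.head(h)

class BigramLM(nn.Module):
    # Stage-2 model ("orthography"): architecturally different (a bigram table).
    def __init__(self):
        super().__init__()
        self.table = nn.Embedding(vocab, vocab)  # next-token logits per token
    def forward(self, x):
        return self.table(x)

@torch.no_grad()
def sample(model, n, length):
    # Autoregressively sample sequences from a trained model.
    toks = torch.randint(0, vocab, (n, 1))
    for _ in range(length - 1):
        probs = F.softmax(model(toks)[:, -1], dim=-1)
        toks = torch.cat([toks, torch.multinomial(probs, 1)], dim=1)
    return toks

# Stage 1: model_a trained via NTP on the "human text" (random placeholder data).
human_text = torch.randint(0, vocab, (256, ctx))
model_a = RNNLM()
opt_a = torch.optim.Adam(model_a.parameters(), lr=1e-3)
for _ in range(100):
    opt_a.zero_grad(); ntp_loss(model_a, human_text).backward(); opt_a.step()

# Stage 2: model_b never sees human_text; it only trains on model_a's samples.
model_b = BigramLM()
opt_b = torch.optim.Adam(model_b.parameters(), lr=1e-3)
for _ in range(100):
    opt_b.zero_grad(); ntp_loss(model_b, sample(model_a, 256, ctx)).backward(); opt_b.step()
```

The only point of the sketch is the data flow: `model_b`'s sole access to the original text is through `model_a`'s outputs, so whatever the first code fails to capture, the second can never recover.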
When I write with pen and paper, my writing improves in quality. And it seems this is because I am slower.
Interesting.
Anecdote: When I think to myself without writing at all (eg shower, walking, waiting, lying in bed), I tend to make deeper progress on isolated idea-clusters. Whereas when I use my knowledge-network (RemNote), I often find more spontaneous+insightfwl connections between remote idea-clusters (eg evo bio, AI, economics). This is because when I write a quick note into RemNote, I heavily prioritise finding the right tags & portalling it into related concepts. Often I simply spam related concepts at the top like this:
The links are to concepts I’ve already spotted metaphors / use-cases for, or I have a hunch that one might be there. It prompts me to either review or flesh out the connections next time I visit the note.
I think these styles complement each other very well.