I think I was intentionally vague about the things you are emphasizing because I don’t have a higher-resolution picture of what’s going on. I mentioned that “random” means something like “random, biased by the weak, local filter,” but your picture of pattern-matching seems like a better description of the kind of bias that’s actually going on.
Similarly, it’s probably true that there are different levels of Babble going on: at some points you are pattern-matching with literal words, at other points with phrases, concepts, or entire cached arguments. I roughly defined the Babble graph to contain all of these things.
I’m inclined to think that the babble you’ve been describing is actually just thoughts, and not linguistic at all. You create thoughts by babble-and-prune, and then a separate process converts the thoughts into words. I haven’t thought much about how that process works (on first inspection, I suspect it’s also structured as babble-and-prune), but I think it makes sense to treat it as separate.
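Here’s a minimal sketch of how I’m picturing that two-stage pipeline. To be clear, everything in it (the graph shape, the weights, the function names) is my own hypothetical construction, just to make the claim concrete:

```python
import random

# Purely illustrative sketch of the two-level picture: babble-and-prune
# over thoughts, with a separate thought-to-words process bolted on after.

def babble(concept_graph, seed, k=5):
    """Generate candidate thoughts: short random walks over a concept graph,
    biased toward strongly associated neighbors (the pattern-matching bias)."""
    candidates = []
    for _ in range(k):
        node, thought = seed, [seed]
        for _ in range(random.randint(2, 5)):
            neighbors = concept_graph.get(node, [])
            if not neighbors:
                break
            # Weighted choice: association strength, not uniform randomness.
            node = random.choices(
                [n for n, _ in neighbors],
                weights=[w for _, w in neighbors],
            )[0]
            thought.append(node)
        candidates.append(thought)
    return candidates

def prune(candidates, keep):
    """The weak, local filter: a cheap check applied to each candidate."""
    return [t for t in candidates if keep(t)]

def verbalize(thought):
    """The separate thought-to-words process. A black box here
    (plausibly it is babble-and-prune again, one level down)."""
    return " ".join(thought)

# Example: babble from "rain", prune thoughts that go nowhere, then verbalize.
graph = {
    "rain": [("umbrella", 0.8), ("melancholy", 0.4)],
    "umbrella": [("shelter", 0.9)],
    "melancholy": [("music", 0.6)],
}
thoughts = prune(babble(graph, "rain"), keep=lambda t: len(t) > 1)
print([verbalize(t) for t in thoughts])
```

The design choice that matters here is that `verbalize` never sees the rejected candidates: pruning happens entirely at the thought level, before words are involved.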
If the processes of forming thoughts and phrasing them linguistically were happening at the same level, I’d expect it to be more intuitive to make syntax reflect semantics, like you see in Shakespeare, where the phonetic qualities of a character’s speech reflect their personality. Instead, writing like that seems to require System 2 intervention.
But I must admit I’m biased. If I were designing a mind, I’d want to have thought generation uncoupled from sentence generation, but it doesn’t have to actually work that way.
Edit: If generating linguistic-babble happens on a separate level from generating thought-babble, then that has consequences for how to train thought-babble. Your suggestions of playing Scrabble and writing haikus would train the wrong babble (nothing wrong with training linguistic-babble, that’s how you become a good writer, but I’m more interested in thought-babble). If you wanted to train thought-babble, I think you’d want something like freewriting or brainstorming: rapidly producing a set of related ideas without judgment.
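In terms of the hypothetical sketch above, freewriting or brainstorming amounts to running the generator with the filter switched off:

```python
# Freewriting as babble with pruning disabled (same illustrative sketch):
raw = babble(graph, "rain", k=20)       # produce ideas rapidly...
kept = prune(raw, keep=lambda t: True)  # ...and judge none of them
```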
Haha, you seem to be on track:
- yes, the process that converts thoughts to words is separate
- however, a caveat: the words are ALSO used to initialize your concept network/tree, so the two might continue matching closely by default if you don’t do any individual work on improving them (there’s a sketch of this after the list)
- I can’t give you an RCT as proof, but I’ve had this idea for at least 7 months now (blog post), so I’ve had lots of time to verify it
- yes, training the concept network/tree directly looks completely different from training the verbal network/tree (though on some meta level the process of doing it is the same)
- see this as an example of explicit non-verbal training (notes from improving my rationality-related abstract concept network). The notes are of course in English, but it should be clear enough that this is not the point: e.g. I’m making up many of the words and phrases as I go, because it doesn’t matter for the concept network/tree whether my verbal language is standard or not
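To make the initialization caveat concrete, here’s one hypothetical way the "starts as a copy of the verbal network, then diverges with individual work" idea could look, reusing the toy `graph` from the sketch in the parent comment (none of these names or structures come from my actual notes):

```python
def init_concept_graph(word_graph):
    # Default: the concept network starts as a copy of the verbal network,
    # one concept per word, with the same association weights.
    return {word: list(edges) for word, edges in word_graph.items()}

def train_concept_graph(concept_graph, new_links):
    # Individual work (e.g. note-taking with made-up terms) adds concept-level
    # links that have no verbal counterpart, so the two networks stop matching.
    for a, b, weight in new_links:
        concept_graph.setdefault(a, []).append((b, weight))
    return concept_graph

# Initialized from the word graph, then diverges as concept-only links appear.
concepts = init_concept_graph(graph)
concepts = train_concept_graph(concepts, [("rain", "petrichor-feeling", 0.7)])
```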