I’ve been thinking about this same idea, and I thought your post captured the heart of the algorithm (and damn you for beating me to it 😉). But I think you got the algorithm slightly wrong, or simplified the idea a bit. The “babble” isn’t random: there are too many possible thoughts for random generation to ever arrive at something the prune filter would accept. Instead, the babble is the output of a pattern-matching process. That’s why computers have become good at producing babble: neural networks have become competent pattern matchers.
This means that the algorithm is essentially the hypothetico-deductive model from philosophy of science, which is most obvious when the thoughts you’re trying to come up with are explanations of phenomena: you produce an explanation by pattern matching, then prune the ones that make no goddamn sense (and if you’re doing science, you take the explanations you can’t reject for making no sense and prune them again by experiment). That’s why I’ve been calling your babble-prune algorithm “psychological abductivism.”
Your babble’s pattern matching gets trained on what the prune filter accepts; that’s why it gets better over time. But if your prune filter is so strict that it seldom accepts any of your babble’s output, your babble never improves. That’s why you must constrain the tyranny of your prune filter if you find yourself with nothing to say: if you never accept any of your babble, you will never learn to babble better. You can also learn to babble better by pattern matching off of what others say, but with a prune filter that strict, you’re going to have a tough time finding other people who say things that pass it. You’ll think “that’s a fine thing to say, but I would never say it, certainly not that way.” Moreover, listening to other people is how your prune filter is trained, so your prune filter will be getting better (that is to say, stricter) at the same time as your straggling babble generator is.
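The feedback loop here can be made concrete with a toy simulation. Everything in it is invented for illustration (the numbers, the “skill” variable, the learning rule); the only point is the structural one: a generator that learns solely from accepted output learns nothing under a filter that accepts nothing.

```python
import random

def simulate(threshold, rounds=2000, seed=0):
    """Toy babble-and-prune loop.

    The generator babbles noisy 'thoughts' around its current skill level;
    the prune filter accepts a thought only if it clears the threshold;
    the generator's skill updates ONLY on accepted thoughts.
    """
    rng = random.Random(seed)
    skill = 0.1               # generator's current quality level (made up)
    accepted = 0
    for _ in range(rounds):
        thought = rng.gauss(skill, 0.3)        # babble: noise around skill
        if thought > threshold:                # prune: keep only good-enough thoughts
            accepted += 1
            skill += 0.01 * (thought - skill)  # train on what survived pruning
    return skill, accepted

# A lenient filter accepts often, so the generator improves;
# a very strict one accepts (almost) nothing, so skill stays where it started.
lenient_skill, lenient_accepted = simulate(threshold=0.0)
strict_skill, strict_accepted = simulate(threshold=1.5)
```

With the lenient threshold the generator accumulates thousands of training signals and its skill climbs; with the strict one, acceptances are vanishingly rare and the skill never moves off its starting value, which is the “nothing to say” trap described above.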
I’ve had success over the past year with making my prune filter less strict in conversational speech, and I think my babble has improved enough that I can put my prune filter back up to its original level. But I need to do the same with my writing, and I find that harder. With conversational speech, you have a time constraint, so if your prune filter is too strict you simply end up saying nothing — the other person will say something or leave before you come up with a sufficiently witty response. In writing, you can just take your time. If it takes you an hour to come up with the next sentence, then you sit down and wait that god-forsaken hour out. You can get fine writing out with that process, but it’s slow. My writing is good enough that I never proofread (I could certainly still get something out of proofreading, but it isn’t compulsory, even for longer pieces of writing), but getting that degree of quality takes me forever, and I can’t produce lower-quality writing faster (which would be very useful for finishing my exams on time).
I think I mostly agree and tried to elaborate a lot more in the followup. Could you provide more detail about your hypothetico-deductive model and in what ways that’s different?
I’ve made a reply to your followup.