We know that “AI is whatever doesn’t work yet”. We also know that people often contrast AI (or DL, or LLMs specifically) derogatorily with classic forms of software, such as regexps: why use an LLM and waste gigaflops of compute to do what a few good regexps could...?
So I was amused to discover recently, by sheer accident while looking up ‘what does the “regular” in “regular expression” mean, anyway?’, that it turns out that regexps are AI. In fact, they are not even GOFAI symbolic AI, as you might immediately assume on hearing that, but were originally connectionist AI research! Huh?
Well, it turns out that ‘regular events’ were introduced by Kleene himself with the justification of modeling McCulloch-Pitts neural nets! (Which are then modeled by ‘regular languages’ and conveniently written down as ‘regular expressions’, abbreviated to ‘regexps’ or ‘regexes’, and then extended/bastardized in countless ways since.)
The ‘regular’ here is not well-defined, as Kleene concedes, and is a gesture towards modeling ‘regularly occurring events’ (that the neural net automaton must process and respond to). He admits “regular” is a terrible term, but no one came up with anything better, and so we’re stuck with it.
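To make Kleene’s equivalence concrete, here is a minimal Python sketch (the parity automaton and the regexp are my own illustrative choices, not Kleene’s): a two-state finite automaton of the sort a small McCulloch-Pitts nerve net can implement, and the regular expression which Kleene’s theorem guarantees describes the same ‘regular events’:

```python
import re
from itertools import product

# A 2-state finite automaton over {0,1} accepting strings with an even
# number of 1s -- the sort of 'regular event' a small McCulloch-Pitts
# nerve net (threshold units with feedback loops) can recognize.
def dfa_accepts(s):
    state = 0                  # 0 = even number of 1s seen so far (accepting)
    for ch in s:
        if ch == '1':
            state ^= 1         # a '1' flips the parity; a '0' leaves it alone
    return state == 0

# Kleene's theorem: the same language, written as a regular expression.
EVEN_ONES = re.compile(r'0*(10*10*)*')

# Exhaustively check that the automaton and the regexp agree on every
# binary string of length up to 10.
for n in range(11):
    for bits in product('01', repeat=n):
        s = ''.join(bits)
        assert dfa_accepts(s) == bool(EVEN_ONES.fullmatch(s))
```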
Aren’t regular languages really well defined as the weakest level in the Chomsky Hierarchy?
Not that it matters to any point I made there, but where did the Chomsky Hierarchy come from?
My comment was just based on a misunderstanding of this sentence:
The ‘regular’ here is not well-defined, as Kleene concedes, and is a gesture towards modeling ‘regularly occurring events’ (that the neural net automaton must process and respond to).
I think you just meant that there’s really no satisfying analogy explaining why it’s called ‘regular’. What I thought you were implying is that this class wasn’t crisply characterized, then or now, in mathematical terms (it is). Thanks to your comment, though, I noticed a large gap in the CS-theory understanding I thought I had. I thought that the 4 levels usually mentioned in the Chomsky hierarchy were the only strictly nested classes of languages well characterized by a grammar, an automaton, and a whole lot of closure properties. Apparently the emphasis on these languages in my two stacked classes on the subject 2 years ago was a historical accident? (Looking at Wikipedia, visibly pushdown languages are closed under intersection, so from my quick skim they seem more natural than context-free languages.) They were only discovered in 2004, so perhaps I can forgive my two classes on the subject for not having included developments from 15 years in the past. Does anyone have post recommendations for propagating this update?
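For anyone else who hadn’t met them, a minimal Python sketch of the visibly-pushdown idea (the alphabet partition and matcher are mine, purely illustrative): the input alphabet is split up front into ‘call’ symbols that always push and ‘return’ symbols that always pop, so the stack behavior is dictated by the input itself, which is the restriction that buys the class its closure under intersection and complement:

```python
# Visibly pushdown automaton sketch for balanced parentheses: '(' is a
# 'call' symbol (always pushes) and ')' is a 'return' symbol (always
# pops), so the stack behavior is visible in the input itself.
CALLS, RETURNS = {'('}, {')'}

def vpa_accepts(s):
    depth = 0                  # a bare counter suffices here; a general VPA
    for ch in s:               # pushes/pops stack symbols, not just a height
        if ch in CALLS:
            depth += 1
        elif ch in RETURNS:
            if depth == 0:     # unmatched return: reject immediately
                return False
            depth -= 1
        # any other character would be an 'internal' symbol: no stack action
    return depth == 0          # accept iff every call was matched by a return

assert vpa_accepts('(()())')
assert not vpa_accepts('())(')
```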
My point is more that ‘regular’ languages form the core of the edifice because the edifice was built on them, and tailored to them. So it is circular to point to the edifice as their justification—doubtless a lot of those definitions and closure properties were carefully tailored to ‘bar monsters’, as Lakatos put it, and allow only those regular languages (in part because automata theory was such a huge industry in early CS, approaching a cult with the Chomskyites). And it should be no surprise if there’s a lot of related work building on it: if regexps weren’t useful and hadn’t been built on extensively, I probably wouldn’t be looking into the etymology 73 years later.
My point is more that ‘regular’ languages form the core of the edifice because the edifice was built on them, and tailored to them
If that was the point of the edifice, it failed successfully: those closure properties made me notice that visibly pushdown languages are nicer than context-free languages, yet still allow matching parentheses, and are arguably what regexps should have been built upon.
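To make that last claim concrete, a quick illustrative sketch (the depth parameter is mine): a classic regular expression can only track parenthesis nesting to a depth fixed in advance, while even the crude stack matcher from the sketch above handles any depth:

```python
import re

def balanced(s):
    # The stack matcher again, inlined so this snippet runs on its own;
    # assumes the input contains only parentheses.
    depth = 0
    for ch in s:
        depth += 1 if ch == '(' else -1
        if depth < 0:
            return False
    return depth == 0

def paren_regexp(k):
    # Build the depth-k approximation by hand: each wrapping adds one
    # permitted level of nesting, and no finite k covers them all.
    pattern = ''
    for _ in range(k):
        pattern = r'(?:\(%s\))*' % pattern
    return re.compile(pattern)

k = 5
rx = paren_regexp(k)
shallow = '(' * k + ')' * k              # nesting depth exactly k
deep = '(' * (k + 1) + ')' * (k + 1)     # one level deeper
assert rx.fullmatch(shallow)
assert not rx.fullmatch(deep)            # the depth-5 regexp gives out here...
assert balanced(deep)                    # ...while the stack matcher does not
```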