Stanislas Dehaene and Laurent Cohen’s (2007) Cultural Recycling of Cortical Maps makes an interesting argument about how the ability to read might have developed by taking over visual circuits that were specialized for biologically more relevant tasks, and how this may constrain the form of different writing systems:
According to the neuronal recycling hypothesis, cortical biases not only constrain visual word recognition to a specific anatomical site, but they may even have exerted a powerful constraint, during the evolution of writing systems, on the very form that these systems take, thus reducing the span of cross-cultural variations. Consistent with this view, Changizi and collaborators have recently demonstrated two remarkable cross-cultural universals in the visual properties of writing systems (Changizi and Shimojo, 2005; Changizi et al., 2006). First, in all alphabets, letters are consistently composed of an average of about three strokes per character (Changizi and Shimojo, 2005). This number may be tentatively related to the visual system’s hierarchical organization, where increases in the complexity of the neurons’ preferred features are accompanied by a 2- to 3-fold increase in receptive field size (Rolls, 2000). Inferotemporal neurons are thought to gain their sensitivity to complex shapes by pooling over neurons coding for simpler shapes at the immediately earlier level (Brincat and Connor, 2004; Serre et al., 2007). Assuming that this pooling occurs within a radius of about three receptive fields, elementary letter shapes would only be recognized as combinations of about three simpler strokes, thus accounting for Changizi’s “magic number” (Changizi and Shimojo, 2005). This account might be extended to other levels of the word recognition system (Dehaene, 2007a; Dehaene et al., 2005). Upstream of the single-letter level, the elementary strokes used in the world’s writing systems may themselves be composed of approximately three line segments. Downstream of it, it may be suggested that writing makes frequent use of combinations of two to four letters as morphemes (prefixes, suffixes, or word roots). Chinese characters also typically combine two to four functional subelements (Ding et al., 2004). These predictions, however, still await quantitative confirmation.
A second cross-cultural universal is that, in all writing systems, topological intersections of contours (e.g., T, Y, L, D) recur with a universal frequency distribution (Changizi et al., 2006). Remarkably, these intersections are not typically observed in random images, but occur with the same frequency in natural images (Changizi et al., 2006). Many of these intersections signal “nonaccidental properties” that denote important and invariant connection and occlusion relations (Biederman, 1987) and are already encoded in monkey inferotemporal cortex (Kayaert et al., 2005). Thus, the suggestion is that, while the occipitotemporal cortex could not evolve for reading, the shapes used by our writing systems were submitted to a cultural evolution for faster learnability by matching the elementary intersections already used in any primate visual system for object and scene recognition.
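To make the pooling arithmetic in the quoted passage a bit more concrete, here is a rough toy sketch (my own illustration, not from the paper) of how the number of elementary parts per unit and the receptive field size would compound across a visual hierarchy, assuming each level pools over roughly three units of the level below and receptive fields grow by the quoted 2- to 3-fold factor per level:

```python
# Toy model (my own sketch, not from Dehaene & Cohen): each level of the
# visual hierarchy is assumed to pool over ~3 units of the level below,
# with receptive fields growing ~2.5-fold per level.

POOLING_FACTOR = 3    # assumed units pooled per level ("radius of about three receptive fields")
RF_GROWTH = 2.5       # assumed receptive-field growth per level (the 2- to 3-fold figure)

levels = ["line segment", "stroke", "letter", "morpheme"]

rf_size = 1.0  # receptive field size at the lowest level, in arbitrary units
for depth, name in enumerate(levels):
    segments_covered = POOLING_FACTOR ** depth
    built_from = f"~{POOLING_FACTOR} units of the previous level" if depth else "elementary unit"
    print(f"{name:12s}: {built_from}, "
          f"spans ~{segments_covered} line segment(s), "
          f"receptive field ~{rf_size:.1f}x the base size")
    rf_size *= RF_GROWTH
```

Under these (admittedly crude) assumptions, each level is built from about three units of the level below, which is the pattern the paper describes: strokes of roughly three line segments, letters of roughly three strokes, and morphemes of two to four letters.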
Dehaene and Cohen’s argument is relevant to discussions about superintelligent AI in that it reinforces the case that our brains carry cognitive constraints which are hard (if not impossible) to overcome, and that a mind which could custom-tailor new cognitive modules for specific skills, unburdened by the need to recycle previously evolved neural circuitry, could become qualitatively better at those skills than humans are.