@Épiphanie Gédéon this is great, very complementary/related to what we’ve been developing for the Gaia Network. I’m particularly thrilled to see the focus on simplicity and incrementalism, as well as the willingness to roll up one’s sleeves and write code (often sorely lacking in LW). And I’m glad that you are taking the map/territory problem seriously; I wholeheartedly agree with the following: “Most safe-by-design approaches seem to rely heavily on formal proofs. While formal proofs offer hard guarantees, they are often unreliable because their model of reality needs to be extremely close to reality itself and very detailed to provide assurance.”
A few additional thoughts:
To scale this approach, one will want "structural regularizers" towards modularity, interoperability, and parsimony. Two that we have strong opinions about are:
A preference for reusing shared building blocks and building bottom-up. Since the Gaia Network is a decentralized architecture, we implement this preference through credit assignment, specifically free energy flow accounting.
Constraints on the types of admissible model code. We have strongly advocated for probabilistic causal models expressed as probabilistic programs (a minimal sketch follows this list). This gives both a shared statistical notion of model grounding (effectively backing the free energy flow accounting as approximate Bayesian inference over higher-order model structure) and a shared basis for defining and evaluating policy spaces (instantly turning any descriptive model into a usable substrate for model-based RL / active inference).
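To make the probabilistic-program point concrete, here is a minimal sketch in plain Python (no PPL dependency; the crop/irrigation variables are purely illustrative, not anything from the Gaia Network). Because the causal mechanisms are written out explicitly, the same program that describes the system also evaluates policies via do()-style interventions:

```python
# Minimal sketch: a causal model as a probabilistic program, which immediately
# doubles as a substrate for policy evaluation. All variables are illustrative.
import random
import statistics

def crop_model(irrigation=None):
    """Generative program: sample exogenous noise, then apply structural equations.
    Passing `irrigation` overrides its mechanism -- a do()-style intervention."""
    rainfall = random.gauss(50.0, 15.0)                  # exogenous weather
    if irrigation is None:
        irrigation = max(0.0, 80.0 - rainfall) + random.gauss(0.0, 5.0)
    soil_moisture = 0.6 * rainfall + 0.4 * irrigation + random.gauss(0.0, 4.0)
    yield_ = 2.0 * soil_moisture - 0.01 * soil_moisture ** 2 + random.gauss(0.0, 10.0)
    return yield_

def evaluate_policy(irrigation_level, n=5000):
    """Expected yield under do(irrigation = level): model-based policy evaluation."""
    return statistics.mean(crop_model(irrigation=irrigation_level) for _ in range(n))

if __name__ == "__main__":
    for level in (0.0, 20.0, 40.0):
        print(f"do(irrigation={level}): E[yield] ~ {evaluate_policy(level):.1f}")
```

A real component would of course use a proper PPL and inference backend; the point is only that the descriptive model and the policy-evaluation substrate are the same object.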
Learning models from data is super powerful as far as it goes, but it’s sometimes necessary—and often orders of magnitude more efficient—to leverage prior knowledge. Two simple and powerful ways to do it, which we have successfully experimented with, are:
LLM-driven model extraction from scientific literature and other sources of causal knowledge. This is crucial for bootstrapping the component library; a sketch of such a pipeline follows this list. (See also our friends at system.com.)
Collaborative modeling by LLM-assisted human expert groups. This fits and enhances the “pull request” framework perfectly.
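Following up on the extraction point above, here is a hedged sketch of what such a pipeline could look like. `llm_complete` is a placeholder that returns a canned response (swap in any real completion API), and the JSON edge schema is an illustrative assumption, not something we've standardized:

```python
# Sketch of LLM-driven extraction of causal structure from text.
import json

EXTRACTION_PROMPT = """Read the passage below and list the causal claims it makes.
Respond with JSON only: {{"edges": [{{"cause": ..., "effect": ..., "sign": "+|-"}}]}}

Passage:
{passage}
"""

def llm_complete(prompt: str) -> str:
    """Placeholder for a real chat-completion call; returns a canned example
    so the sketch runs end-to-end."""
    return '{"edges": [{"cause": "ocean warming", "effect": "coral bleaching", "sign": "+"}]}'

def extract_causal_edges(passage: str) -> list[dict]:
    raw = llm_complete(EXTRACTION_PROMPT.format(passage=passage))
    edges = json.loads(raw)["edges"]
    # Keep only well-formed edges; malformed LLM output is the common failure mode.
    return [e for e in edges if {"cause", "effect", "sign"} <= e.keys()]

if __name__ == "__main__":
    print(extract_causal_edges("Warming oceans have been shown to drive coral bleaching."))
```

Extracted edges can then be compiled into component models (as in the probabilistic-program sketch above) and proposed to the shared library via the "pull request" flow.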
Scaling this to multiple (human or LLM) contributors will require a higher-order model economy of some sort. While one can get away with an implicit, top-down resource economy in the context of a closed contributor group, opening up will require something like a market economy. The free energy flow accounting described above is a suitable primitive for this.
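To illustrate what I mean by using free energy flow accounting as the economic primitive, here is a toy leave-one-out version of the credit assignment; the squared-error surrogate and all names are illustrative assumptions, not our implementation:

```python
# Toy sketch of free energy flow accounting as credit assignment: each
# component model is credited with the reduction in free energy that its
# presence buys the composite model. A squared-error surrogate stands in
# for a proper variational free energy.
import statistics

def free_energy(component_models, dataset):
    """Toy surrogate: mean squared error of the pooled prediction
    (stands in for negative ELBO / variational free energy)."""
    if not component_models:
        return statistics.mean(y * y for _, y in dataset)  # empty model predicts 0
    preds = [statistics.mean(m(x) for m in component_models.values()) for x, _ in dataset]
    return statistics.mean((p - y) ** 2 for p, (_, y) in zip(preds, dataset))

def credit_contributions(component_models, dataset):
    """Leave-one-out marginal credit: F(without component) - F(with everything)."""
    baseline = free_energy(component_models, dataset)
    return {
        name: free_energy(
            {k: v for k, v in component_models.items() if k != name}, dataset
        ) - baseline  # positive credit = the component reduces free energy
        for name in component_models
    }

if __name__ == "__main__":
    data = [(x, 3.0 * x + 1.0) for x in range(10)]
    models = {
        "good_component": lambda x: 3.0 * x + 1.0,  # captures the true mechanism
        "poor_component": lambda x: 42.0,           # an uninformative constant
    }
    print(credit_contributions(models, data))       # good gets positive credit, poor negative
```

In an open contributor market, credits of this kind can back the prices at which component reuse is rewarded, replacing the implicit top-down budget of a closed group.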
I’d be keen to find ways to collaborate.
Also @Roman Leventov FYI
Latecomer, but as this relates to some of my prior work on self- and other-modeling, I thought I'd comment. The consistently high task accuracy shown in Figure D suggests that even your smallest neural network is significantly over-parameterized for the test dataset. Excess capacity seems to be the only way the model can take on the expensive self-modeling task (*) without losing accuracy on the main task. Indeed, this suggests that the regularization benefit of self-modeling here comes precisely from soaking up that excess capacity, thereby avoiding overfitting.

But you can obviously have too much of a good thing: as the experiments with fewer hidden layers show, the attention weight can take over the model's focus and destroy accuracy. So if you turn up the problem-complexity/network-size knob, the "maximum allowable attention weight" that doesn't compromise accuracy will tend to zero.

On the other hand, one can think of simpler targets than fully predicting all of a layer's activations: for example, predicting the activation signs, the min-max range, the mean activation, and so on. These seem more meaningful anyway, and a way to avoid Borges's "Map of the Empire whose size was that of the Empire", no?
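To make the "simpler self-modeling targets" suggestion concrete, here is a sketch (PyTorch, illustrative layer sizes) where the auxiliary head predicts only summary statistics of its own hidden activations (mean, min, max) rather than the full activation vector, with `aux_weight` standing in for the paper's self-modeling/attention weight:

```python
# Sketch of a reduced self-modeling auxiliary task: predict summary statistics
# of one's own hidden layer instead of reconstructing it in full.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SelfSummaryNet(nn.Module):
    def __init__(self, in_dim=784, hidden=128, n_classes=10):
        super().__init__()
        self.body = nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU())
        self.head = nn.Linear(hidden, n_classes)
        self.self_head = nn.Linear(hidden, 3)  # predicts [mean, min, max] of its own layer

    def forward(self, x):
        h = self.body(x)
        stats = torch.stack(
            [h.mean(dim=1), h.min(dim=1).values, h.max(dim=1).values], dim=1
        )
        return self.head(h), self.self_head(h), stats.detach()

def loss_fn(logits, stats_pred, stats_true, targets, aux_weight=0.1):
    main = F.cross_entropy(logits, targets)      # primary task
    aux = F.mse_loss(stats_pred, stats_true)     # reduced self-modeling task
    return main + aux_weight * aux

# Usage: logits, stats_pred, stats_true = model(x_batch)
#        loss = loss_fn(logits, stats_pred, stats_true, y_batch)
```

Detaching the target statistics keeps the network from simply dragging the target toward its predictions; whether that matches the paper's original setup is an assumption on my part.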
* BTW: Unless I missed it, the paper reports the accuracy only of the primary task, not of the self-modeling task itself, right? I imagine the latter was far from perfect, since perfect self-modeling is only possible in trivial edge cases.