The cognitive theory is beyond me, but the math looks interesting. I need to give this more thought, but I would submit an open Question for the community: might there be a way to calculate error bounds on outputs conditioned on "world models", based on the models' predictive accuracy and/or complexity? If that were possible, it would be strong support for mathematical insight into the "meta model".
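To make the question concrete, here is a minimal toy sketch of one standard way such a bound could be set up: weight candidate world models by a complexity prior (2^-K, in the Solomonoff/MDL spirit) times their likelihood on observed data, then read off a posterior-weighted probability that the mixture's next prediction is wrong. Everything here is hypothetical illustration — the model names, the complexity values, and the `posterior_weights`/`expected_error` helpers are all my own invention, not anything from the post.

```python
import math

# Toy setup (all names hypothetical): candidate "world models" predict the
# next bit of a binary sequence. Each model has a complexity K (a stand-in
# for description length) and a predictor P(next_bit = b | history).

def posterior_weights(models, data):
    """Bayesian weights: prior proportional to 2^-K, updated by likelihood."""
    log_w = {}
    for name, (K, predict) in models.items():
        log_prior = -K * math.log(2)
        log_lik = sum(math.log(predict(data[:i], data[i]))
                      for i in range(len(data)))
        log_w[name] = log_prior + log_lik
    m = max(log_w.values())                      # subtract max for stability
    w = {name: math.exp(v - m) for name, v in log_w.items()}
    z = sum(w.values())
    return {name: v / z for name, v in w.items()}

def expected_error(models, weights, history):
    """Posterior-weighted probability that the mixture's next-bit call is wrong."""
    p1 = sum(weights[name] * models[name][1](history, 1) for name in models)
    # predict the more probable bit; expected error is the mass on the other bit
    return min(p1, 1 - p1)

EPS = 1e-3  # small escape probability so no likelihood is exactly zero
models = {
    "always0":   (1, lambda h, b: 1 - EPS if b == 0 else EPS),
    "always1":   (1, lambda h, b: 1 - EPS if b == 1 else EPS),
    # more complex model (K=3) that predicts an alternating sequence
    "alternate": (3, lambda h, b: 1 - EPS if b == (len(h) % 2) else EPS),
}

data = [0, 1, 0, 1, 0, 1, 0, 1]  # alternating observations
w = posterior_weights(models, data)
err = expected_error(models, w, data)
```

On this data the posterior concentrates on the alternating model despite its complexity penalty, and `err` is a crude error bound of exactly the conditioned-on-world-models kind the question asks about. Whether anything like this scales beyond toy hypothesis classes is, of course, the open part of the question.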
This sort of seems like the topic of my recent post—let me know if it sparks your imagination, and/or if there are any easy ways I could improve it :)
Thank you—I have this, and some dense Hutter yet to read.