If you can prove anything interesting about a system, that system is too simple to be interesting. Logic can’t handle uncertainty, and doesn’t scale at all to describing/modelling systems as complex as societies, brains, AIs, etc.
AIXI is simple, and if our universe happened to allow Turing machines to compute endlessly behind Cartesian barriers, it could be interesting in the sense of actually working.
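For concreteness, here is a sketch of the standard AIXI action rule in Hutter's formulation, where $U$ is a universal monotone Turing machine, $q$ ranges over programs, $\ell(q)$ is the length of $q$, and $m$ is the horizon:

$$ a_t \;=\; \arg\max_{a_t} \sum_{o_t r_t} \cdots \max_{a_m} \sum_{o_m r_m} \big[\, r_t + \cdots + r_m \,\big] \sum_{q \,:\, U(q,\, a_1 \ldots a_m) \,=\, o_1 r_1 \ldots o_m r_m} 2^{-\ell(q)} $$

The rightmost sum ranges over every program consistent with the interaction history, which is why AIXI is incomputable, and the construction presupposes exactly the Cartesian barrier mentioned above: the environment is modelled as a separate machine on the far side of the action/observation channel, never as something that contains the agent itself.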
We have wildly different definitions of "interesting", at least in the context of my original statement. :)