The ontology problem has nothing to do with computing power, except that limited computing power means you use fewer ontologies. The number might still be large, and for a smart AI not fixable in advance; we didn’t know about quantum fields until just recently, and new approximations and models are being invented all the time. If your last paragraph isn’t talking about evolution, I don’t know what it’s talking about.
Downvoting the whole thing as probable nonsense, though my judgment here is influenced by the numerous downvoted troll comments that this poster has made previously.
Limited computing power means that the ontologies have to be processed approximately (you can’t simulate everything at the level of quarks all the way from the big bang), likely in some sort of multi-level model which can go down to the level of quarks but also has to be able to go up to the level of paperclips, i.e. it would have to be able to establish relations between ontologies at different levels of detail. It is not inconceivable that e.g. Newtonian mechanics would be part of any multi-level ontology, no matter what it has at the microscopic level. Note that while I am very skeptical about AI risk, this is an argument slightly in favour of the risk.
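To make the "relations between levels of detail" idea concrete, here is a minimal toy sketch. It assumes each level is an ontology plus a coarse-graining map up to the next level; all names (`Level`, `lift`, the cluster states) are hypothetical illustrations, not anything specified in the comment above:

```python
# Illustrative sketch: each Level is an ontology, and coarse_grain maps
# a state described in this level's terms up to the next, coarser
# level's terms (e.g. particles -> rigid bodies -> inventory counts).
from dataclasses import dataclass
from typing import Any, Callable

@dataclass
class Level:
    name: str
    coarse_grain: Callable[[Any], Any]  # fine state -> coarser state

def lift(state: Any, levels: list[Level]) -> Any:
    """Push a fine-grained state up through successive ontologies."""
    for level in levels:
        state = level.coarse_grain(state)
    return state

# Toy micro state: labelled clusters standing in for whatever the
# microscopic ontology actually is (quarks, fields, ...).
micro_state = [("cluster", "paperclip"), ("cluster", "dust"),
               ("cluster", "paperclip")]

levels = [
    Level("micro->objects", lambda s: [kind for _, kind in s]),
    Level("objects->inventory",
          lambda objs: {k: objs.count(k) for k in set(objs)}),
]

# "Paperclip" is only meaningful at the top level; the micro level
# never mentions it, which is the point of the multi-level model.
print(lift(micro_state, levels))  # e.g. {'paperclip': 2, 'dust': 1}
```

The design point this is meant to show: the top-level notion "paperclip" is stable under swapping out the bottom level, which is why something like Newtonian mechanics could plausibly survive as a middle layer no matter what the microscopic ontology turns out to be.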