This is an interesting way of looking at the maximal potential of AIs. It could be that Oracle Machines are possible in this universe, but that an AI built by humans cannot self-improve to that point because of the bound you describe.
I feel that the phrasings “we have reached the upper bound on complexity” and, later, “can rise many orders of magnitude” give a potentially misleading intuition about how limiting this bound is. Do you agree that this bound does not prevent us from building “paperclipping” AIs?