I think it not unlikely that if we have a successful intelligence explosion and subsequently discover a way to build something 4^^^^4-sized, then we will figure out a way to grow into it, one step at a time. This 4^^^^4-sized supertranshuman mind should then be able to discriminate “interesting” from “boring” 3^^^3-sized things. If you could convince the 4^^^^4-sized thing to write down a list of all nonboring 3^^^3-sized things in its spare time, then you would have a formal way to say what an “interesting 3^^^3-sized thing” is, with description length (the description length of humanity, which equals the description length of our actual universe) + (the additional description length needed to give humanity access to a 4^^^^4-sized computer, which isn’t much, since access to a universal Turing machine would do the job and more).
Thus, I don’t think that it needs a 3^^^3-sized description length to pick out interesting 3^^^3-sized minds.
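As an aside on the notation: `^^^` and `^^^^` here are Knuth's up-arrow notation for iterated exponentiation. A minimal Python sketch (the function name `up` is my own) makes the recursion explicit and shows why even 3^^^3 dwarfs anything physically realizable:

```python
def up(a, n, b):
    # Knuth's up-arrow notation: up(a, 1, b) = a**b, and for n > 1,
    # up(a, n, b) = up(a, n-1, up(a, n, b-1)), with up(a, n, 0) = 1.
    if n == 1:
        return a ** b
    if b == 0:
        return 1
    return up(a, n - 1, up(a, n, b - 1))

# Small instances are still computable:
print(up(3, 2, 3))  # 3^^3 = 3**3**3 = 3**27 = 7625597484987
print(up(2, 3, 3))  # 2^^^3 = 2^^4 = 2**2**2**2 = 65536
# up(3, 3, 3) would be 3^^^3: a power tower of 7,625,597,484,987 threes,
# far beyond any physical computation, and 4^^^^4 is vastly larger still.
```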