Would it be useful to use something like the sentience quotient, the number of bits per second per kilogram of matter a system can process, to assess a system's efficiency?
If two systems have the same preferences and the same sentience quotient, but their optimization power differs, then mustn't one of them have a more efficient, smarter way of optimizing than the other?
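As a rough illustration of that comparison, here is a minimal Python sketch. All the numbers (processing rates, masses, optimization power scores) are made up for illustration, and "optimization power" is treated as an abstract scalar rather than anything rigorously defined:

```python
import math

def sentience_quotient(bits_per_second: float, mass_kg: float) -> float:
    """Freitas-style SQ: log10 of bits processed per second per kg of matter."""
    return math.log10(bits_per_second / mass_kg)

# Two hypothetical systems: same preferences, same SQ, different optimization power.
system_a = {"bits_per_s": 1e15, "mass_kg": 1.4, "optimization_power": 100.0}
system_b = {"bits_per_s": 1e15, "mass_kg": 1.4, "optimization_power": 250.0}

for name, s in [("A", system_a), ("B", system_b)]:
    sq = sentience_quotient(s["bits_per_s"], s["mass_kg"])
    # "Efficiency" in the sense above: how much optimization each processed bit buys.
    eff = s["optimization_power"] / s["bits_per_s"]
    print(f"{name}: SQ ~ {sq:.1f}, optimization per bit ~ {eff:.2e}")
```

At equal SQ, system B achieves more optimization per bit processed, which is the sense in which it would have to be optimizing "more smartly" rather than merely processing faster.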
As for cross-domain optimization, I don't see offhand how to mathematically characterize different domains, and I think it is possible to define arbitrary domains anyway.
But if you live in a nonrandom universe, or environment, and are adapted to it, and if in following your preferences you want to use all the information available locally in your environment (your past light cone), then you can only predict the course of your actions faster than the universe itself implements physical law upon matter if you can non-destructively compress the information describing that environment. I guess this works in any universe that has not reached maximal entropy, and the less entropy in that universe, the faster your prediction of future events will be compared to the universe's own implementation of those events.
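A minimal sketch of that entropy/compression link, using Python's standard zlib: a low-entropy description compresses losslessly by a large factor, while a near-maximal-entropy one barely compresses at all. The data here is synthetic and just stands in for "information describing the environment":

```python
import os
import zlib

n = 100_000
low_entropy = b"AB" * (n // 2)   # highly ordered "environment" description
high_entropy = os.urandom(n)     # near-maximal-entropy description

for label, data in [("low entropy", low_entropy), ("high entropy", high_entropy)]:
    compressed = zlib.compress(data, 9)
    # Non-destructive (lossless): decompression recovers the exact original.
    assert zlib.decompress(compressed) == data
    print(f"{label}: {len(data)} -> {len(compressed)} bytes "
          f"(ratio {len(data) / len(compressed):.1f}x)")
```

The lower the entropy, the smaller the compressed model you have to run, and so the further ahead of the territory your simulation of it can get.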
If you can't do that, then you have to use destructive (lossy) compression to simplify your information about the environment into something you can manageably use to compute the future state of the universe following your actions, faster than the universe itself would implement it. There is a tradeoff between speed, simplicity, and precision, i.e. the error rate, in this case.
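A toy sketch of the simplicity/precision leg of that tradeoff, with rounding standing in for lossy compression: the coarser the kept representation of a signal, the simpler it is to store and work with, but the larger the error in a prediction (here, the signal's sum) computed from it. The signal and the "prediction" are purely illustrative:

```python
import random

random.seed(0)
signal = [random.gauss(0.0, 1.0) for _ in range(10_000)]
exact = sum(signal)  # the "true" future the full information yields

# Lossy compression: keep each value at reduced precision.
for decimals in (3, 1, 0):
    coarse = [round(x, decimals) for x in signal]
    error = abs(sum(coarse) - exact)
    print(f"{decimals} decimals kept: prediction error {error:.4f}")
```

Coarser models are cheaper to run, so you buy speed and simplicity at the price of a higher error rate, which is the tradeoff described above.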
Just my immediate thoughts.