You could make f take a tuple of the expectation and the variance if you wanted to, and provide ranges for both of them.
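For concreteness, here is a minimal sketch of that idea (the function name, ranges, and scoring rule are all hypothetical illustrations, not anything from the discussion): f takes an (expectation, variance) tuple and only accepts values whose moments fall inside the given ranges.

```python
# Hypothetical sketch: f scores an (expectation, variance) tuple of a
# utility distribution, rejecting anything outside the given ranges.

def f(moments, exp_range=(0.0, 1.0), var_range=(0.0, 0.05)):
    """Score an (expectation, variance) tuple; reject values outside the ranges."""
    expectation, variance = moments
    exp_lo, exp_hi = exp_range
    var_lo, var_hi = var_range
    if not (exp_lo <= expectation <= exp_hi and var_lo <= variance <= var_hi):
        return float("-inf")  # outside the acceptable envelope
    # Inside the envelope: prefer high expectation, penalise variance.
    return expectation - variance

print(f((0.8, 0.01)))  # inside both ranges -> 0.79
print(f((0.8, 0.20)))  # variance out of range -> -inf
```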
FWIW, I don’t find these discussions likely to be useful. I feel they sweep what U is (how it is built, and thus how it changes; whether it is maximised) under the carpet, and miss some things because of it.
To explain what I mean: if U refers in some way to a human’s existence/happiness, you have to be able to use primary sense data (cameras etc.) to distinguish:
humans from non-human dolls/animatronics
dead humans from ones that are in a coma (what does it mean to be alive?)
a free, happy human from one that is in prison trying to put on a brave face (what is eudaimonia?)
Encoding solutions to a bunch of philosophical questions just doesn’t seem like a realistic way to do things!
We do not start with a superintelligence; we start with nothing and have to build an AGI. We cannot rely on a pre-human-level system being smart enough to self-improve without mucking up U (I’m not sure we can rely on a superhuman one either), so we have to do it ourselves. The more content we have to put into an unchanging U, the harder it will be to make it (and to make it bug-free).
It also seems to preclude the kind of learning that humans do in this area. We do things differently, so something is obviously being missed.
This is intended to be a counterexample to some naive assumptions about self-modifying systems.