Some quick comments:
The monotonicity principle is only a consequence of IB if you assume (as IBP does) Knightian uncertainty broad enough that anything could be simulated somewhere. IBP makes this assumption essentially because it yields natural mathematical properties for how hypotheses behave under ontology refinement, which I conjecture to be important for learning, but we still don't know whether those properties are truly necessary.
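For concreteness, here is a rough schematic of what the principle demands; the notation is mine for illustration and does not track the paper's exact formalism. If we think of the loss function $L$ as depending on the set $\alpha \subseteq \Gamma$ of computations that are physically realized, monotonicity says that realizing more computations can never increase loss:

$$\alpha \subseteq \beta \subseteq \Gamma \quad\Longrightarrow\quad L(\beta) \le L(\alpha).$$

In particular, under this constraint the agent can never strictly prefer that some computation not be instantiated, which is the source of the counterintuitive implications discussed above.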
I reiterate that I am not calling for immediately building a sovereign AI based on the first set of philosophical assumptions that came to my mind. I am only pointing at directions for investigation, which might ultimately lead us to such confidence in our philosophical assumptions that even a sovereign AI becomes reasonable (and if they don't, we will probably still learn a lot). Right now, we are not close to that level of confidence.
There are additional approaches to either explaining or avoiding the monotonicity principle, for example, the idea of transcartesian agents I mentioned here.