Is there any particular reason an AI wouldn’t be able to self-modify with regard to its prior/algorithm for deciding prior probabilities? A basic Solomonoff prior should include a non-negligible chance that it itself isn’t perfect for finding priors, if I’m not mistaken. That doesn’t answer the question as such, but it isn’t obvious to me that this question must be answered in order to develop a Friendly AI.
A basic Solomonoff prior should include a non-negligible chance that it itself isn’t perfect for finding priors, if I’m not mistaken.
You are mistaken. A prior isn’t something that can be mistaken per se. The closest it can come is assigning a low probability to something that turns out to be true. But any prior, judged by its own probabilities, is well-calibrated: the probability it assigns to a claim being true just is what it takes to be the probability of that claim being true. It will occasionally give low probabilities to things that are true, but only to the extent that unlikely things sometimes happen.
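As a toy illustration of that calibration point, here is a minimal sketch (hypothetical numbers, plain Python) assuming the world really is drawn from the agent’s own prior. In that case, outcomes the prior assigns probability p occur with frequency roughly p, which is all that “well-calibrated by its own lights” amounts to here.

```python
import random

# Hypothetical discrete prior over the bias of a coin: P(bias) values are
# illustrative, not taken from anything above.
prior = {0.1: 0.25, 0.5: 0.5, 0.9: 0.25}

def sample_world():
    """Draw a coin bias from the prior, then flip the coin once."""
    r, acc = random.random(), 0.0
    for bias, p in prior.items():
        acc += p
        if r < acc:
            return random.random() < bias  # True if the coin lands heads
    return random.random() < bias  # guard against float rounding

# The prior's own probability that the next flip is heads.
p_heads = sum(bias * p for bias, p in prior.items())

trials = 100_000
heads = sum(sample_world() for _ in range(trials))

print(f"prior says P(heads) = {p_heads:.3f}")
print(f"observed frequency  = {heads / trials:.3f}")
# The two numbers agree up to sampling noise: judged from inside, the prior
# is calibrated. It still assigns low probability to some outcomes that
# happen, but only about as often as it says they should.
```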