The kinds of reasoning that could lead us to remove the minimal-utility situations from the AI's utility function are the same ones that would lead the AI to change its own utility function: resistance to blackmail, and cosmic-ray errors. And it suffers from the same problem. If the universe hands our AI a choice between an existential catastrophe and a hyper-existential catastrophe, it won't care which it gets. This works on the individual level too: if someone is severely ill and begging for death, this AI won't grant it (there is a non-zero chance of the mind starting to enjoy itself again). Of course, how much of a problem any of this is depends on how likely reality is to hand you such a bad position.
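A toy sketch of the indifference problem (the outcome names and the numbers are my own, not from the discussion): clipping the bottom of a utility function maps every sufficiently bad outcome to the same floor value, so the agent can no longer distinguish a merely existential catastrophe from a hyper-existential one, and it prefers any gamble with a chance of recovery over anything it scores at the floor.

```python
# Hypothetical illustration; raw_utility, FLOOR and the scores are assumptions.
FLOOR = -1000.0  # assumed minimum utility assigned to "minimal utility situations"

def raw_utility(outcome: str) -> float:
    # Illustrative raw scores; the hyper-existential outcome is far worse.
    return {
        "flourishing": 100.0,
        "existential_catastrophe": -1000.0,
        "hyper_existential_catastrophe": -1_000_000.0,
        "suffering_patient_kept_alive": -50.0,
        "suffering_patient_granted_death": -1000.0,  # assumed to sit at the floor
    }[outcome]

def clipped_utility(outcome: str) -> float:
    # Every outcome at or below the floor gets the same value.
    return max(raw_utility(outcome), FLOOR)

# The clipped agent sees no difference between the two catastrophes...
assert clipped_utility("existential_catastrophe") == clipped_utility(
    "hyper_existential_catastrophe"
)

# ...and it keeps the suffering patient alive, because even a tiny chance of
# recovery beats an outcome it scores at the floor.
p_recovery = 1e-9
keep_alive = (p_recovery * clipped_utility("flourishing")
              + (1 - p_recovery) * clipped_utility("suffering_patient_kept_alive"))
grant_death = clipped_utility("suffering_patient_granted_death")
assert keep_alive > grant_death
```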
Suppose you have a program P that outputs a probability distribution over the next possible bit. With a fairly small constant amount of extra code, you can make an algorithm P’ that takes in an arbitrary bitstring and decompresses it according to P. In other words, there are data compression algorithms that can use the probabilities to compress the environmental sequence. Now consider the set of programs of the form P’(b) for some bitstring b. The number of extra bits needed to encode b is the same as the number of bits P would lose by not being able to predict every step perfectly. In other words, the set of all deterministic programs P’(b) combines to give basically the same probability distribution as P (up to constant fiddle factors that depend on the choice of TM).
In short, you can compress the random noise in the environment and put it in your predictive program. This gives a deterministic program, and it costs only a constant number of extra bits compared to admitting your uncertainty.
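A rough numerical illustration (the Laplace-rule predictor and the example sequence are assumptions for demonstration, not from the comment): the ideal code length of a sequence under a predictor P is the sum of -log2 of the probabilities P assigned to the bits that actually occurred, and that is roughly the length of the bitstring b that the deterministic program P’(b) has to carry.

```python
import math

def laplace_predictor(history: str) -> float:
    """Probability that the next bit is '1' under Laplace's rule of succession."""
    ones = history.count("1")
    return (ones + 1) / (len(history) + 2)

def ideal_code_length(sequence: str, predictor) -> float:
    """Bits needed to encode `sequence` when each bit costs -log2 of the
    probability the predictor assigned to it (i.e. P's log-loss)."""
    total = 0.0
    for i, bit in enumerate(sequence):
        p_one = predictor(sequence[:i])
        p_bit = p_one if bit == "1" else 1.0 - p_one
        total += -math.log2(p_bit)
    return total

seq = "1101110111111011"  # a mostly-ones sequence the predictor can partly compress
print(f"raw length: {len(seq)} bits")
print(f"code length under P: {ideal_code_length(seq, laplace_predictor):.2f} bits")
# The code length under P is roughly the number of bits needed for b in P'(b);
# an arithmetic coder built from P gets within a couple of bits of this ideal,
# and the savings relative to the raw 16 bits is what P's predictions buy.
```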