Yeah, because it’s just a narrow real-world AI without philosophical tendencies… I’m actually not sure. A more precise argument would help, something like “all sufficiently powerful AIs will try to become or create consistent maximizers of expected utility, for such-and-such reasons”.
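For reference, the standard ingredient such a precise argument would lean on is the von Neumann-Morgenstern representation theorem (the "for such-and-such reasons" part is exactly what's still open, so this is only the coherence half): if an agent's preferences over lotteries satisfy completeness, transitivity, continuity and independence, then there is a utility function

$$U : \text{Outcomes} \to \mathbb{R} \quad\text{with}\quad A \succsim B \iff \mathbb{E}_A[U] \ge \mathbb{E}_B[U],$$

unique up to positive affine transformation. Coherence/money-pump arguments then claim that an agent violating these axioms either leaves value on the table or can be exploited, which is the usual candidate for the pressure toward becoming (or building) a consistent expected-utility maximizer.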
Does a pair of consistent optimizers with different goals have a tendency to become a consistent optimizer?
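One hedged way to make that concrete, assuming the two agents share beliefs and can jointly randomize (so their feasible set of expected-utility pairs is convex): by a Harsanyi-style aggregation argument, any Pareto-optimal joint policy for expected-utility maximizers with utilities $U_1$ and $U_2$ maximizes some fixed weighted mixture,

$$\pi^* \in \arg\max_{\pi}\; \mathbb{E}_{\pi}\!\left[\lambda_1 U_1 + \lambda_2 U_2\right], \qquad \lambda_1, \lambda_2 \ge 0,\ \lambda_1 + \lambda_2 = 1,$$

so from the outside, a pair that bargains its way to the Pareto frontier behaves like a single consistent optimizer for that mixture. Whether the pair actually has a tendency to reach and stick to such a bargain is the part the question leaves open.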
The problem with powerful non-optimizers seems to be that the "powerful" property already presupposes optimization power, so at least one optimizer-like thing is present in the system. If it's powerful enough and isn't contained, it will eat all the other tendencies in its environment, until optimization for its goal is all that remains. Unless there is another optimizer able to defend its own divergent tendencies against the first one, in which case the two of them together might constitute what counts as not-a-consistent-optimizer, maybe?