For what it’s worth, the intended answers are 1) no, 2) no, 3) yes, 4) no, 5) the evaluation function and the opening book stay the same; there’s just a bit of logic squished above them that kicks in only when the bishop is threatened, not on any move before that.
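For concreteness, a minimal sketch of what 5) might look like, assuming hypothetical helper names (legal_moves, make_move, bishop_is_threatened, evaluate, normal_search are stand-ins for the engine’s existing machinery, which stays unchanged):

```python
def choose_move(board):
    # The hack sits above the normal engine and fires only when the bishop
    # is already threatened; on every other move the engine runs as before.
    if bishop_is_threatened(board):
        saving_moves = [m for m in legal_moves(board)
                        if not bishop_is_threatened(make_move(board, m))]
        if saving_moves:
            # rank the bishop-saving moves with the *unchanged* evaluation function
            return max(saving_moves, key=lambda m: evaluate(make_move(board, m)))
    return normal_search(board)  # opening book and search, exactly as before
```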
Yeah, game-theoretic considerations make the problem funny, but the intent wasn’t to convert an almost-consistent utility maximizer into another almost-consistent utility maximizer with a different utility function that somehow values keeping the bishop safe. The intent was to add a hack that throws consistency to the wind, and observe that the AI doesn’t rebel against the hack. After all, there’s no law saying you must build only consistent AIs.
My guess is that’s what most folks probably mean when they talk about “hardwiring” stuff into the AI. They don’t mean changing the AI’s utility function over the real world, they mean changing the AI’s code so it’s no longer best described as maximizing such a function. That might make the AI stupid in some respects and manipulable by humans, which may or may not be a bad thing :-) Of course your actual goals (whatever they are) would be better served by a genuine expected utility maximizer, but building that could be harder and more dangerous. Or at least that’s how the reasoning is supposed to go, I think.
The intent was to add a hack that throws consistency to the wind, and observe that the AI doesn’t rebel against the hack.
Why doesn’t the AI reason “if I remove this hack, I’ll be more likely to win?” Because this is just a narrow chess AI and the programmer never gave it general reasoning abilities?
Why doesn’t the AI reason “if I remove this hack, I’ll be more likely to win?”
A more interesting question is why the AI (if made capable of such reflection) wouldn’t take it one step further and ponder what happens if it removes the enemy’s queen from its internal board, which would also make it more likely to win, by its internal definition of “win”, which is defined in terms of the internal board (sketched below).
Or why anyone would go to the trouble of implementing a possibly irreducible notion of what “win” really means in the real world, given that this would simultaneously waste computing power on unnecessary explorations and make the AI dangerous / uncontrollable.
The thing is, you don’t need to imagine the world dying to avoid pursuing pointless, likely impossible accomplishments.
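A rough sketch of that wireheading worry, reusing the hypothetical helpers from above plus two more made-up ones (estimate_win_prob, remove_piece):

```python
def tempting_self_modification(internal_board):
    # "Win" is evaluated on the internal board, so editing the model scores
    # at least as well as playing well, and costs far less search.
    honest_score = estimate_win_prob(internal_board)
    doctored = remove_piece(internal_board, "enemy queen")  # edit the model only
    # The doctored model reports a higher internal win probability even though
    # the real game is untouched; the internal definition can't tell them apart.
    return estimate_win_prob(doctored) > honest_score
```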
Yeah, because it’s just a narrow real-world AI without philosophical tendencies… I’m actually not sure. A more precise argument would help, something like “all sufficiently powerful AIs will try to become or create consistent maximizers of expected utility, for such-and-such reasons”.
Does a pair of consistent optimizers with different goals have a tendency to become a consistent optimizer?
The problem with powerful non-optimizers seems to be that the “powerful” property already presupposes optimization power, and so at least one optimizer-like thing is present in the system. If it’s powerful enough and is not contained, it’s going to eat all the other tendencies of its environment, and so optimization for its goal will be all that remains. Unless there is another optimizer able to defend its non-conformity from the optimizer in question, in which case the two of them might constitute what counts as not-a-consistent-optimizer, maybe?
Option 3? Doesn’t work very well. You’re assuming the opponent doesn’t want to threaten the bishop, so you yank it to a square that is only safe as long as the opponent doesn’t want to threaten it. But once the opponent clues in, it’s trivial for them to threaten the bishop again (to gain more advantage as you scramble to defend), which you weren’t expecting them to do, because that’s not how your search tree was structured. Kasparov would kick hell out of a thus-hardwired Deep Blue as soon as he realized what was happening (sketched below).
It’s that whole “see the consequences of the math” thing...
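A toy sketch of that exploit, with the same hypothetical helpers as above (plus a made-up game_over): the opponent renews the bishop threat every move, and each reply forced by the hack is a tempo the AI’s search never got to spend.

```python
def harass_the_bishop(board, hardwired_choose_move):
    while not game_over(board):
        board = make_move(board, hardwired_choose_move(board))
        if game_over(board):
            break
        threats = [m for m in legal_moves(board)
                   if bishop_is_threatened(make_move(board, m))]
        # Renew the threat whenever possible; the hack dictates the AI's
        # reply, so its search tree never prices in this line of harassment.
        reply = threats[0] if threats else normal_search(board)
        board = make_move(board, reply)
    return board
```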
Either your comment is in violent agreement with mine (“that might make the AI stupid in some respects and manipulable by humans”), or I don’t understand what you’re trying to say...
Probably violent agreement.