See my point on satisficers wanting to become maximisers: http://lesswrong.com/lw/854/satisficers_want_to_become_maximisers/
“Achieve ‘G’ with ‘p’ probability” does not seem to be a stable goal for an AI capable of changing itself.
Oh absolutely, it’s far too simplistic for that. It’s nowhere near adequate for a sufficiently advanced agent capable of changing itself; it was chosen purely for illustration, and it seems I somewhat miscommunicated the intent of the puzzle.
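For concreteness, here is a minimal toy sketch of the point behind that link, with made-up numbers (the policy names and probabilities are hypothetical): a satisficer whose goal is “achieve G with probability at least p” gives up nothing by rewriting itself into a P(G)-maximiser, because the maximising policy clears the threshold whenever any policy does.

```python
# Toy illustration (hypothetical numbers): a satisficer with the goal
# "achieve G with probability at least p" never loses by replacing
# itself with a P(G)-maximiser, since the maximising policy is
# acceptable whenever *any* policy is acceptable.

# Hypothetical policy -> probability-of-G table.
policies = {
    "do_nothing":      0.10,
    "modest_effort":   0.55,
    "maximise_P_of_G": 0.97,  # the policy a maximiser would pick
}

p_threshold = 0.5  # the satisficing target: "achieve G with probability p"

acceptable = {name for name, prob in policies.items() if prob >= p_threshold}
best = max(policies, key=policies.get)

print("Policies meeting the threshold:", acceptable)
print("Maximising policy:", best)

# Key point: if the acceptable set is non-empty, the maximising policy is
# in it, so a satisficer able to rewrite its own goal is at least weakly
# indifferent to becoming a maximiser -- and under uncertainty about its
# own probability estimates, maximising is the safer way to clear the bar.
assert (not acceptable) or (best in acceptable)
```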