A satisficer is not motivated to continue to satisfice. It is motivated to take an action that is a satisficing action, and 1) and 2) are equally satisficing.
I know what you’re trying to do, I think. I tried to produce a “continuously satisficing agent” or “future satisficing agent”, but couldn’t get it to work out.
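A minimal sketch of the decision rule being described: a satisficer picks any action whose *expected* utility clears its threshold, with no built-in preference among qualifying actions. The threshold and the expected utilities assigned to options 1) and 2) below are made-up illustrative numbers, not values from the original discussion.

```python
THRESHOLD = 0.8  # hypothetical satisficing threshold

def satisficing_actions(expected_utility_by_action, threshold=THRESHOLD):
    """Return every action whose expected utility meets the threshold."""
    return [a for a, eu in expected_utility_by_action.items() if eu >= threshold]

options = {
    "option_1": 0.9,   # assumed expected utility; satisfices
    "option_2": 0.85,  # assumed expected utility; also satisfices
}

# Both options qualify, so a satisficer has no reason to prefer one over the other:
print(satisficing_actions(options))  # -> ['option_1', 'option_2']
```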
Surely option 1 has a 10% chance of failing to satisfy.
Option 1) already satisfies. Taking option 1) brings the expected utility up above the threshold, so the satisficer is done.
If you add the extra requirement that the AI must never let the expected utility fall below the threshold in future, then the AI will simply blind itself or turn itself off once the satisficing level is reached; after that, its expected utility can never fall, as no extra information ever arrives.
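A quick worked check of the point about expected utility, under assumed numbers: even if the action fails 10% of the time, what the satisficer evaluates at the moment of choice is the expectation, not the realised outcome. The outcome utilities and the threshold here are assumptions for illustration only.

```python
p_success, p_failure = 0.9, 0.1
u_success, u_failure = 1.0, 0.0   # assumed utilities of the two outcomes
threshold = 0.8                   # assumed satisficing threshold

expected_utility = p_success * u_success + p_failure * u_failure
print(expected_utility >= threshold)  # True: 0.9 >= 0.8, so the action satisfices
```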
Sorry—a failure to reread the question on my part :-(