Then state that. It’s an inverse-of-time-until-satisfaction-is-complete maximiser.
The way you defined satisfaction doesn’t really work with that. The satisficer might simply decide that it has a 90% chance of producing 10 paperclips, and conclude that its goal is complete. There is some chance of it failing later on, but that is likely to be offset by the fact that it will probably exceed its target by some margin. Especially if it can self-modify.
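A minimal sketch of that failure mode, with toy numbers and hypothetical helper names (none of this is from the original exchange): the agent estimates the probability that its current plan already satisfies the goal, and once that estimate crosses a threshold it treats the goal as complete and stops optimising.

```python
import random

def estimate_success_probability(per_attempt_success, trials=10_000):
    """Monte Carlo estimate of P(clips >= target) for a toy plan.

    Toy model: the plan makes 12 clip attempts, each succeeding
    independently with probability per_attempt_success; the goal
    is at least 10 paperclips.
    """
    target = 10
    hits = 0
    for _ in range(trials):
        clips = sum(random.random() < per_attempt_success for _ in range(12))
        if clips >= target:
            hits += 1
    return hits / trials

def satisficer_halts(per_attempt_success, threshold=0.9):
    """The failure mode described above: once the estimated probability
    of satisfying the goal crosses the threshold, the agent declares the
    goal complete and does no further optimisation."""
    return estimate_success_probability(per_attempt_success) >= threshold
```

The point of the sketch is that nothing in the halting condition penalises the residual chance of failure: a 90% estimate and a 99.99% estimate both read as "done".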
Except that this is much more likely to come up; a maximiser facing many exactly balanced strategies in the real world is a rare occurrence.
Well, usually you want satisfaction rapidly—and then things are very similar again.