So you’re defining a satisficing agent as an agent with a utility function f that it wants to maximize, but that acts as if it’s trying to maximize min(f, c) for some constant c? In that case, sure, turning itself into an agent that actually tries to maximize f will make it better at maximizing f. That’s a fairly trivial instance of the general fact that making yourself better at maximizing your utility tends to increase your utility.
However, if the satisficing agent with utility function f acts exactly like a maximizing agent with utility function min(f, c), then it will not attempt to turn itself into a maximizing agent with utility function f, because that would tend to decrease the expected value of min(f, c), which, by stipulation, it will not do.
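To make that second point concrete, here’s a minimal numeric sketch (the payoffs, probabilities, and threshold are made up for illustration). An f-maximizer will happily take a high-variance gamble that raises E[f], but because the satisficer’s effective utility is capped at c, the gamble’s upside above the cap counts for nothing while its downside counts in full, so E[min(f, c)] goes down:

    # Hypothetical numbers illustrating why an agent that maximizes
    # E[min(f, c)] won't self-modify into an f-maximizer.

    C = 10  # the satisficing threshold c

    def expected(outcomes):
        """Expected value of a list of (probability, value) pairs."""
        return sum(p * v for p, v in outcomes)

    # The satisficer's current policy: a safe action with f = c for sure.
    safe = [(1.0, 10)]

    # The policy an f-maximizer would adopt: a gamble with higher E[f]
    # but a real chance of landing below the threshold.
    risky = [(0.5, 100), (0.5, 0)]

    for name, policy in [("safe (satisficer)", safe), ("risky (f-maximizer)", risky)]:
        e_f = expected(policy)
        e_capped = expected([(p, min(v, C)) for p, v in policy])
        print(f"{name}: E[f] = {e_f}, E[min(f, C)] = {e_capped}")

    # Output:
    #   safe (satisficer): E[f] = 10, E[min(f, C)] = 10
    #   risky (f-maximizer): E[f] = 50.0, E[min(f, C)] = 5.0

Switching to the f-maximizing policy raises E[f] from 10 to 50 but lowers E[min(f, c)] from 10 to 5, so an agent that acts to maximize E[min(f, c)] will refuse the switch.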