Something which might not buy ample time can still be a pivotal act. From the Arbital page that you link to:
> Example 3: Suppose a behaviorist genie is restricted from modeling human minds in any great detail, but is still able to build and deploy molecular nanotechnology. Moreover, the AI is able to understand the instruction, “Build a device for scanning human brains and running them at high speed with minimum simulation error”, and is able to work out a way to do this without simulating whole human brains as test cases. The genie is then used to upload a set of, say, fifty human researchers, and run them at 10,000-to-1 speeds.
>
> This accomplishment would not of itself save the world or destroy it—the researchers inside the simulation would still need to solve the alignment problem, and might not succeed in doing so.
>
> But it would (positively) upset the gameboard and change the major determinants of winning, compared to the default scenario where the fifty researchers are in an equal-speed arms race with the rest of the world, and don’t have practically-unlimited time to check their work. The event where the genie was used to upload the researchers and run them at high speeds would be a critical event, a hinge where the optimum strategy was drastically different before versus after that pivotal act.
The Limited AI (LAI) scenario in this post is equivalent to this example and therefore qualifies as a Pivotal Act under the Arbital Guarded Definition. Additionally, looking at your specific quote, the LAI would “drastically increase the probability of a win”.