We don’t actually know the machine works more than once, do we? It creates “a” duplicate of you “when” you pull the lever. That doesn’t necessarily imply that it outputs additional duplicates if you keep pulling the lever. Maybe it has a limited store of raw materials to make the duplicates from, who knows.
Besides, I was just munchkinning myself out of a situation where a sentient individual has to die (i.e. a version of myself). Creating an army up there may have its uses, but it doesn't bear on solving the initial problem. Unless we are proposing the army make a human ladder? Seems unpleasant.
Most fictional characters are optimized to make for entertaining stories, which is why "generalizing from fictional evidence" is usually a failure mode. The HPMOR Harry and the Comet King were optimized by two rationalists as examples of rationalist heroes, and they act in allegorical situations engineered to say something that rationalists would find "of worth" about real-world problems.
They are appealing precisely because they encode assumptions about what a real-world, rationalist "hero" ought to be like. Or at least, that's the hope. So Yudkowsky and Alexander can point to them as "theses" about the real world, no different from blog posts that happen to be written as allegorical stories. And if people found the ideas encoded in those characters more convincing than the ideas encoded in the present April Fools' Day post, that's fair enough.
Not necessarily correct at the object level, but if it's wrong, it's a different kind of error from garden-variety "generalizing from fictional evidence".