(It follows that an artificial intelligence just a tiny bit smarter than Einstein and von Neumann would be as much more productive than them as they are in relation to other mathematician/physicists).
I don’t think this necessarily follows. I think it only follows that such an AI would be much more productive than the average member of a population of Einsteins, which is a different (and weaker) claim.
This seems to be almost equivalent to irreversibly forming a majority voting bloc. The only difference is how it interacts with the (fake) randomization: by creating a subagent, the bloc effectively (and perfectly) correlates all of the future random outputs. (In general, I think this will change the outcomes unless the agents’ (cardinal) preferences about different decisions are independent.)
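To make that concrete, here is a minimal sketch of what I mean, assuming the randomization works like a per-decision “random dictator” lottery between two equally weighted agents; that modeling assumption is mine, and the actual mechanism may differ in details:

```python
import random

# Toy model (my assumption, not necessarily the original mechanism): each of two
# binary decisions is settled by a "random dictator" draw between agents A and B,
# each holding half the voting weight, and the winner gets her preferred option.

def expected_utility_for_A(utility, correlated, trials=200_000, seed=0):
    """Monte Carlo estimate of A's expected utility over the two decisions.

    utility(a1, a2): A's utility when A wins decision 1 iff a1 and decision 2 iff a2.
    correlated=True  -> one draw decides both decisions (the bloc / single subagent).
    correlated=False -> an independent draw per decision (the original randomization).
    """
    rng = random.Random(seed)
    total = 0.0
    for _ in range(trials):
        if correlated:
            a1 = a2 = rng.random() < 0.5
        else:
            a1 = rng.random() < 0.5
            a2 = rng.random() < 0.5
        total += utility(a1, a2)
    return total / trials

# Independent (additive) preferences: each win is worth 1 to A regardless of the other.
additive = lambda a1, a2: a1 + a2

# Non-independent preferences: the wins are complements, worth an extra 1 only together.
complements = lambda a1, a2: a1 + a2 + (a1 and a2)

for name, u in [("additive", additive), ("complements", complements)]:
    print(name,
          "independent draws:", round(expected_utility_for_A(u, correlated=False), 3),
          "correlated draws:", round(expected_utility_for_A(u, correlated=True), 3))
```

With additive preferences the per-decision chances are unchanged, so the correlation is invisible in expectation (both print ~1.0); once there are complementarities across decisions, correlating the draws shifts the expected outcome (~1.25 vs. ~1.5), which is the sense in which I expect the subagent trick to change things.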
The randomization trick still potentially helps here: it would be in each representative’s interest to agree not to vote for such proposals before knowing which of them will come up and in what order they will be voted on. However, depending on what fraction of its potential value an agent expects to be able to secure through negotiation, I think some agents would refuse to sign such an agreement if they knew they would get a chance to try to lock their opponents out before they could be locked out themselves.
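As a rough illustration of that trade-off (the all-or-nothing payoff structure and the specific numbers are my assumptions, purely for illustration): an agent should only sign if its negotiated share beats the expected value of racing to lock the other side out first.

```python
def should_sign(negotiated_share, p_win_race, locked_out_value=0.0, full_value=1.0):
    """Sign the 'no lock-out proposals' agreement iff the share the agent expects
    from ongoing negotiation is at least the expected value of the lock-out race.

    negotiated_share: fraction of full_value the agent expects to get by negotiating.
    p_win_race: the agent's probability of locking the others out before being locked out.
    locked_out_value: what the agent gets if it loses the race (assumed ~0 here).
    """
    race_value = p_win_race * full_value + (1 - p_win_race) * locked_out_value
    return negotiated_share * full_value >= race_value

# An agent expecting only 40% of its potential value from negotiation, but a 60%
# chance of winning the race, would refuse to sign:
print(should_sign(negotiated_share=0.4, p_win_race=0.6))  # False
print(should_sign(negotiated_share=0.7, p_win_race=0.6))  # True
```

In this toy framing, “some agents would not sign” is just the region where the chance of winning the race exceeds the share they expect from negotiation.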
Actually, there seems to be a more general issue with ordering and incompatible combinations of choices; I’m splitting that into a separate comment.