Given a non-trivial population to start with, it will be possible to find people who will consent to copying given minimal assurances (quite possibly none at all) about what happens to their copy. The obvious cases would be egoists whose personal value systems make them not identify with such copies; you could probably already find many of those today.
In the resulting low-wage environment, it will likewise be possible to find people who will consent to extensive modification of and experimentation on their minds given minimal assurances about what happens afterwards (something on the order of “we guarantee you will not be left in abject pain” will likely suffice) if the alternative is starvation. Given this, why do you believe the idea of selection for donation-eagerness to be fanciful?
This scenario is rather different from the one suggested by TedHowardNZ, and has a better chance of working. However:
One of the issues is that less efficient CUs have to defend their resources against more efficient CUs (those that spend more of their resources on work and competition). Depending on the precise structure of your society, those attacks may be military, algorithmic (information security), memetic, or political, among others. You’d need a setup that allows the less efficient CUs to maintain their resource share indefinitely, and I question whether we know how to set that up.
The word “general” is tricky here. Note that CUs that spend most of their resources on instantiating busy EMs will probably end up with a much larger human-like population per CU, and so (counting human-like entities) may come to dominate the population of their society unless they are rare compared to low-population, high-subjective-wealth CUs. In its wealth distribution, this society may end up not unlike the current one: a very few human-scale entities are extremely wealthy, but the vast majority are not.
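To make the head-count claim concrete, here is a toy calculation with purely illustrative numbers (the fractions and population sizes are assumptions, not anything from the scenario above): suppose a fraction $f = 0.01$ of CUs each run $n = 10^4$ busy EMs, while the remaining CUs each run a single wealthy EM. The busy EMs’ share of all human-like entities is then

$$\frac{f n}{f n + (1 - f)} = \frac{100}{100.99} \approx 99\%,$$

so the high-population CUs dominate the head count even while being rare as CUs.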