It will probably be easier to make self-reproducing robots in a lab than on the moon. After all, in a laboratory you can control variables such as the composition of minerals, energy sources, and hazards far better than you can by simply sending the robots to the moon. But by the time we are able to build self-reproducing robots, we will probably already have built reversible computers.
But if your and Eliezer’s predictions come true, you will need not only to get superhuman AGI running before we have energy-efficient reversible computation that is profitable for many purposes, but also for this superhuman AGI to be able to reproduce itself and take over the world without anyone noticing before it is too late.
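As a rough illustration of why reversible computation is the more energy-efficient regime, here is a back-of-the-envelope sketch of Landauer's bound, the minimum energy an irreversible computer must dissipate per erased bit. The temperature and erasure rate below are hypothetical, chosen only to show the scale of the bound:

```python
# Back-of-the-envelope sketch: Landauer's principle says erasing one bit
# dissipates at least k*T*ln(2) of energy. Reversible computing avoids
# bit erasure and so is not subject to this floor. The temperature and
# erasure rate here are illustrative assumptions, not measured values.
import math

k_B = 1.380649e-23   # Boltzmann constant, J/K
T = 300.0            # assumed operating temperature (room temp), K

landauer_per_bit = k_B * T * math.log(2)   # ~2.87e-21 J per erased bit
print(f"Landauer limit at {T:.0f} K: {landauer_per_bit:.3e} J/bit")

# A hypothetical processor erasing 1e20 bits per second would dissipate
# at least this much power even at the theoretical floor:
erasures_per_sec = 1e20
print(f"Floor power at 1e20 erasures/s: "
      f"{landauer_per_bit * erasures_per_sec:.3f} W")
```

Reversible logic sidesteps this floor in principle precisely because it never erases bits, which is the efficiency gain at stake in the prediction above.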
Are you sure that your superhuman AGI will be able to figure out how to take over the world without even having efficient reversible hardware in the first place? It is one thing for a superhuman AGI to take over the world without anyone being able to stop it; it is another thing entirely for it to do so starting from limited and inefficient hardware resources.
P.S. You are also assuming that superhuman AGI will have no use for a currency. For this assumption to be reasonable, there would have to be only a few (say, 3 or 4) instances of superhuman AGI, all of which know each other. That also seems unlikely. Obtaining currency is one of those instrumentally convergent goals that any goal-directed superhuman AGI would pursue.