I cannot really see the purpose of putting a computer network on the moon [to create superintelligence].
Probably the scenario involved von Neumann machines too—a whole lunar industrial ecology of self-reproducing robots. This was someone from Russia in the first half of the 1990s, who grew up without the Internet and with Earth as a geopolitical battlefield. Given that context, it makes visionary sense to imagine pursuing one’s posthuman technolibertarian dreams in space. But he adjusted to the Internet era soon enough.
If we have AGI before energy-efficient reversible computation, do you think the AGI will not use cryptocurrency mining to accelerate the development of reversible computing hardware?
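For concreteness, "energy-efficient reversible computation" here refers to escaping the Landauer bound. As a standard back-of-the-envelope (assuming room temperature, T ≈ 300 K), any logically irreversible operation that erases one bit must dissipate at least

$$E_{\min} = k_B T \ln 2 \approx (1.38 \times 10^{-23}\ \mathrm{J/K}) \times (300\ \mathrm{K}) \times \ln 2 \approx 2.9 \times 10^{-21}\ \mathrm{J}$$

of heat, while a logically reversible computer erases no bits and so is not subject to this floor at all.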
You may be aware that Robin Hanson and Eliezer Yudkowsky have debated a few times over differing scenarios for the AI future. One of the differences is that Robin envisages a kind of pluralism and gradualism, a society and an economy where humans and human uploads and autonomous AIs are interacting as peers for quite some time. On the other hand, Eliezer predicts that the AGI era yields a superintelligent agent quite rapidly, one which, in the words of Bill Joy, “doesn’t need us”.
I think an AGI using a crypto bootstrap to develop reversible hardware really only makes sense in a future like Robin’s. In Eliezer’s scenario, the AI just directly appropriates whatever resources it needs for its plans.
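To make the "crypto bootstrap" mechanics concrete: in proof-of-work mining, the network fixes the revenue per hash, so a miner's profit is dominated by its joules per hash, and hardware closer to the reversible regime out-earns conventional chips at the same hash rate. Here is a toy sketch of that arithmetic; every number and name in it (mining_profit_per_second, the reward, the electricity price, the efficiency figures) is an illustrative assumption, not a real hardware spec or market figure.

```python
# Toy model of proof-of-work mining economics. All numbers are
# illustrative assumptions, not real hardware or market figures.

def mining_profit_per_second(hashes_per_second: float,
                             joules_per_hash: float,
                             usd_per_joule: float,
                             usd_reward_per_hash: float) -> float:
    """Revenue minus electricity cost, per second of mining."""
    revenue = hashes_per_second * usd_reward_per_hash
    energy_cost = hashes_per_second * joules_per_hash * usd_per_joule
    return revenue - energy_cost

ELECTRICITY = 2.8e-8   # roughly $0.10 per kWh, expressed per joule
REWARD = 1.0e-18       # hypothetical network payout per hash

# Two miners with identical hash rates; only energy per hash differs.
conventional = mining_profit_per_second(1e12, 3e-11, ELECTRICITY, REWARD)
reversible = mining_profit_per_second(1e12, 3e-13, ELECTRICITY, REWARD)

print(f"conventional: ${conventional:.1e}/s")  # ~ $1.6e-07/s
print(f"reversible:   ${reversible:.1e}/s")    # ~ $9.9e-07/s
```

On these made-up numbers, the hundredfold more efficient miner earns roughly six times the profit at the same hash rate; that margin is the incentive gradient the bootstrap argument relies on.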
It will probably be easier to make self-reproducing robots in a lab than on the moon. After all, in a laboratory you can control variables such as mineral composition, energy sources, and hazards far better than you can by simply sending robots to the moon. But by the time we are able to build self-reproducing robots, we will probably already have reversible computers.
But if your and Eliezer’s predictions come true, you will not only need superhuman AGI running before we have energy-efficient reversible computation that is profitable for many purposes; you will also need this superhuman AGI to be able to reproduce itself and take over the world without anyone noticing until it is too late.
Are you sure that your superhuman AGI will be able to figure out how to take over the world without even having efficient reversible hardware in the first place? It is one thing for a superhuman AGI to take over the world without anyone being able to do anything about it; it is another thing entirely for it to do so starting from limited and inefficient hardware.
P.S. You are also assuming that superhuman AGI will have no use for a currency. For that assumption to be reasonable, there would have to be only a handful of superhuman AGI instances (three or four, say), all of which know each other. That also seems unlikely. Obtaining currency is one of those instrumentally convergent goals that any goal-directed superhuman AGI would pursue.