I agree that having a computer network on the moon is impractical. I cannot see the purpose of putting a computer network on the moon unless it is part of a lunar station, and I certainly do not see how that would result in a superintelligence. But I can easily imagine how reversible computation could lead to a superintelligence, since a superintelligence would eventually incorporate reversible computation; the only real question is whether AGI will arrive before reversible computation does.
AI is often hyped, and much of this hype happens for good reason. We are currently in an AI spring or summer, when people are both excited and worried about AI, and in such a period people tend to overestimate AI's abilities and its future progress. Some of this overestimation is even healthy: AI is a potentially very dangerous technology, so caution about its trajectory is warranted. But the previous AI springs were followed by AI winters. This time I do not believe we will have another AI winter, but we may have an AI fall, in which AI receives a more appropriate amount of hype.
The difference between the previous AI winters and my predicted AI fall is that during those winters there was still a lot of room for progress with irreversible hardware, whereas we are now at a point where there is not much room left to improve irreversible hardware. Rest in peace, Gordon Moore (1929-2023). This means that to reach AGI without reversible computation, there had better be very great algorithmic improvements. In hardware terms, therefore, my predicted AI fall will see improvements that are not nearly as great as they used to be. Are you sure you want to base your AGI mainly on software improvements, minor hardware improvements, and hardware improvements that are orthogonal to the energy efficiency per logic-gate operation? If not, then we will also need reversible computation.
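To make the "not much room left" claim concrete, here is a back-of-envelope sketch (my own illustration, not from the original comment) of Landauer's principle, which bounds the energy dissipated by each irreversible bit erasure at kT·ln 2; reversible computation is, in principle, not subject to this floor. The CMOS figure below is an assumed ballpark, not a measured value.

```python
import math

k_B = 1.380649e-23   # Boltzmann constant, J/K (exact, SI 2019)
T = 300.0            # room temperature, K

# Landauer limit: minimum energy to erase one bit irreversibly.
landauer_j = k_B * T * math.log(2)   # about 2.87e-21 J per erased bit

# Rough figure for current CMOS switching energy per bit: on the
# order of 1e-17 J (an assumed ballpark; it varies widely by process
# node and circuit design).
cmos_j = 1e-17

headroom = cmos_j / landauer_j
print(f"Landauer limit at 300 K: {landauer_j:.3e} J/bit")
print(f"Assumed CMOS energy:     {cmos_j:.1e} J/bit")
print(f"Remaining headroom:      ~{headroom:.0f}x")
```

Under these assumptions, irreversible hardware has only a few orders of magnitude of thermodynamic headroom left, which is the sense in which further gains must come from algorithms or from reversible computing.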
So if we have AGI before energy-efficient reversible computation, do you think the AGI will not use cryptocurrency mining to accelerate the development of reversible computing hardware?
P.S. One of the main reasons to use cryptocurrency mining to accelerate the development of reversible computation is that there is no downside, except that reversibility-friendly mining currently has no market capitalization (the cryptocurrency community has an education problem). People will be mining cryptocurrency regardless, so one might as well direct that mining toward making as much progress on reversible computation as possible. Bitcoin would have been better off if its mining algorithm had been specifically designed for reversible computation from the beginning.
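As a toy illustration of what a reversibility-friendly mining primitive could look like (my own hypothetical sketch, not an actual mining algorithm), note that ARX operations (modular add, rotate, xor) are bijective on fixed-width words, so a round built from them can be run backward without erasing any bits:

```python
# Toy reversible ARX round on 32-bit words. Because every step is a
# bijection, the round is invertible and could in principle be computed
# on reversible hardware with no mandatory bit erasures.
MASK = (1 << 32) - 1

def rotl(x, r):
    """Rotate a 32-bit word left by r bits."""
    return ((x << r) | (x >> (32 - r))) & MASK

def round_forward(a, b, key):
    a = (a + b) & MASK    # modular add: invertible given b
    b = rotl(b, 7) ^ a    # rotate then xor: invertible given a
    a ^= key              # xor with a constant: self-inverse
    return a, b

def round_inverse(a, b, key):
    a ^= key                    # undo the key xor
    b = rotl(b ^ a, 32 - 7)     # undo xor, then rotate right by 7
    a = (a - b) & MASK          # undo the modular add
    return a, b

# Round-trip check: inverting the round recovers the original state.
orig = (0x12345678, 0x9ABCDEF0)
a, b = round_forward(*orig, 0xDEADBEEF)
assert round_inverse(a, b, 0xDEADBEEF) == orig
```

A real proof-of-work function would chain many such rounds and add a nonce, but the point stands: a hash built entirely from bijective steps gives reversible hardware a first-class advantage.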
I cannot really see the purpose of putting a computer network on the moon [to create superintelligence]
Probably the scenario involved von Neumann machines too—a whole lunar industrial ecology of self-reproducing robots. This was someone from Russia in the first half of the 1990s, who grew up without Internet and with Earth as a geopolitical battlefield. Given that context, it makes visionary sense to imagine pursuing one’s posthuman technolibertarian dreams in space. But he adjusted to the Internet era soon enough.
if we have AGI before energy efficient reversible computation, do you think that the AGI will not use cryptocurrency mining to accelerate the development of reversible computing hardware?
You may be aware that Robin Hanson and Eliezer Yudkowsky have debated a few times over differing scenarios for the AI future. One of the differences is that Robin envisages a kind of pluralism and gradualism, a society and an economy where humans and human uploads and autonomous AIs are interacting as peers for quite some time. On the other hand, Eliezer predicts that the AGI era yields a superintelligent agent quite rapidly, one which, in the words of Bill Joy, “doesn’t need us”.
I think an AGI using a crypto bootstrap to develop reversible hardware really only makes sense in a future like Robin’s. In Eliezer’s scenario, the AI just directly appropriates whatever resources it needs for its plans.
It will probably be easier to make self-reproducing robots in a lab than on the moon. After all, in a laboratory you can control variables such as the composition of minerals, energy sources, and hazards far better than you can by simply sending the robots to the moon. But by the time we are able to build self-reproducing robots, we will probably have made reversible computers already.
But if your and Eliezer’s predictions come true, you will need not only to get superhuman AGI running before we have energy-efficient reversible computation that is profitable for many purposes, but also for this superhuman AGI to be able to reproduce itself and take over the world without anyone noticing until it is too late.
Are you sure that your superhuman AGI will be able to figure out how to take over the world without even having efficient reversible hardware? It is one thing for a superhuman AGI to take over the world without anyone being able to stop it; it is another thing entirely for it to do so starting from limited and inefficient hardware resources.
P.S. You are also assuming that superhuman AGI will have no use for a currency. For this assumption to be reasonable, there would have to be only a few (say, 3 or 4) instances of superhuman AGI, all of which know about one another. That also seems unlikely. Obtaining currency is one of the instrumentally convergent goals that any goal-directed superhuman AGI would have.