I think the Conjecture employees may be a little biased. Are these people well informed about the limits of our current hardware and the difficulty of improving its performance as the energy efficiency per logic gate operation approaches Landauer's limit? Do they expect AGI to arrive even though Moore's law is clearly slowing down and will stop without reversible computation? Do they expect AGI to run on irreversible hardware or on reversible hardware? I want to know their predictions about the rise of reversible computing hardware. Also, does "AGI" here mean human-level performance on essentially all tasks at the energy efficiency of the human brain, or is it asking about AGI running at 10 gigawatts of power?
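To make the energy question concrete, here is a back-of-the-envelope calculation in Python (a minimal sketch; the 300 K operating temperature is my assumption, and the 10 GW figure is just the number from the question above) of what Landauer's limit implies for a purely irreversible machine:

```python
import math

# Landauer's limit: erasing one bit of information at temperature T
# dissipates at least k_B * T * ln(2) of energy.
k_B = 1.380649e-23  # Boltzmann constant, J/K (exact in the 2019 SI)
T = 300.0           # assumed room temperature, K

landauer_j_per_bit = k_B * T * math.log(2)
print(f"Landauer limit at {T:.0f} K: {landauer_j_per_bit:.3e} J per bit erased")
# ~2.87e-21 J per bit

# Upper bound on irreversible bit erasures per second for a 10 GW system,
# if every erasure were performed exactly at the Landauer limit.
power_w = 10e9  # 10 gigawatts
print(f"Max erasures/s at {power_w:.0e} W: {power_w / landauer_j_per_bit:.3e}")
# ~3.5e30 erasures per second. Reversible computation is the only known
# way past this bound, since it avoids erasing bits in the first place.
```

Real irreversible gates today dissipate orders of magnitude more than this per operation, which is exactly why the gap between current hardware and the Landauer bound matters for AGI timelines.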
This survey is only informative if the people being surveyed are well aware of the limitations that Landauer's limit imposes on computation and of the possible ways around it (reversible computation).
While I am critical of this survey, I do like how it highlights the real danger of human extinction from advanced AI and the need for people to start taking AI safety more seriously. We should fund more AI safety research right now.
P.S. One of the greatest threats (and one that AI may exacerbate) is that of pathogens such as viruses escaping from bio-safety level 4 laboratories. To mitigate this threat, I have proposed that all bio-safety level 4 labs publicly post cryptographic timestamps of all of their records. To do this, they first need to keep very meticulous records. Once those timestamps are posted, we can make sure they make their way onto public blockchains. Yet I am still the only person talking about this strategy for mitigating a very serious threat. Yes, we should work on AI safety, but there are other threats we could mitigate with a bare minimum of effort. But the bare minimum is probably too much to ask for.
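To illustrate the mechanics of the proposal, here is a minimal Python sketch of one way a lab could do this. The record contents are hypothetical, and the choice of SHA-256 with a Merkle-tree commitment (plus anchoring via a service such as OpenTimestamps) is my illustrative assumption, not a prescribed part of the proposal:

```python
import hashlib
import json

def sha256_hex(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

def merkle_root(leaf_hashes: list[str]) -> str:
    """Combine a list of leaf hashes into a single Merkle root."""
    level = leaf_hashes
    while len(level) > 1:
        if len(level) % 2 == 1:
            level.append(level[-1])  # duplicate the last hash if the count is odd
        level = [sha256_hex((level[i] + level[i + 1]).encode())
                 for i in range(0, len(level), 2)]
    return level[0]

# Hypothetical records from one day at a BSL-4 lab.
records = [b"2024-05-01 sample handling log ...",
           b"2024-05-01 freezer inventory ..."]
leaves = [sha256_hex(r) for r in records]

commitment = {"date": "2024-05-01", "merkle_root": merkle_root(leaves)}
print(json.dumps(commitment))
# The lab publishes only this root hash, e.g., by anchoring it in a public
# blockchain. The records themselves stay private, but any later tampering
# with them (or backdated "records") becomes cryptographically detectable.
```

The point of the Merkle tree is that the lab can later prove any individual record was included in the timestamped commitment without revealing the other records.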