As you know, I don’t spend much time worrying about the Large Hadron Collider when I’ve got much larger existential-risk-fish to fry.
----
After observing empirically that the LHC had failed 100 times in a row, would you endorse a policy of keeping the LHC powered up, but trying to fire it again only in the event of, say, nuclear terrorism or a global economic crash?
----
The real question, Eliezer, is how many times the LHC would have to fail before you decided to fundamentally change the direction of your research. At some point the most profitable avenue of research in the pursuit of friendly AI would become the logistics of combining a mechanism for quantum suicide with a random number generator. Would you shut up, multiply, and then invest the entirety of your research in the nuances of creating a secure, hostile-AI-preventing universe-suicide bunker? Let an RNG write the AI and save-scum your way to friendly AI paradise!