Good post, good explanation. I agree. I saw the recent comment on OB that probably sparked this topic; I fleetingly considered posting about it myself before akrasia kicked in. So, thanks.
A throwaway parenthesized remark from RH that should nevertheless be of major importance, because it lowers the credence we should assign to the argument that “UFAI is a good great filter candidate, and a great filter is a good explanation for the Fermi paradox, ergo we should raise our belief in the likelihood of UFAI occurring.”
“because it lowers the credence we should assign to the argument that ‘UFAI is a good great filter candidate, and a great filter is a good explanation for the Fermi paradox, ergo we should raise our belief in the likelihood of UFAI occurring.’”
Can you identify some people who ever held or promoted this view? I don’t know of any writers who have actually made this argument. It’s pretty absurd on its face, basically saying that instead of there being super-convergence among biological civilizations not to colonize the galaxy, there is super-convergence among autonomous robotic civilizations not to colonize.
You are correct; I cannot. I did, however, find plenty of refutations of precisely that argument, from the SL4 mailing list to various blogs. Relatedly, Robin Hanson wrote this two years ago:
Let us call an AI unambitious if its values have no use for the rest of the universe. Then if the great filter is the main reason to think existential risks are likely, we should worry much more about unambitious unfriendly AI than just an unfriendly AI. Since designing an ambitious AI seems lots easier than designing a friendly one, maybe ambition should be the AI designer's first priority.
I suppose that, having seen some of those refutations, I overestimated the importance of the argument being refuted:
I assumed that to merit public refutation, an argument must have a certain number of people believing it. If there are such people, I couldn't identify any.
Maybe the association arises because “UFAI” is so closely related to “x-risk”, and “x-risk” is so closely related to “the Great Filter”. No transitivity this time.
I think this may cause confusion for some casual observers, so it’s worth reiterating the refutation, but it’s also worth noting that no one has seriously pressed the refuted argument.
There are certainly some who think machine intelligence may account for the Fermi paradox. For instance, here’s George Dvorsky on the topic. Also, the Wikipedia article on the Fermi paradox lists “a badly programmed super-intelligence” as a possible cause.
Thanks for the links, Tim. Yes, it certainly gets included in exhaustive laundry lists of Fermi paradox explanations (Dvorsky has covered many proposed Fermi paradox solutions, including very dubious ones). The Wikipedia article on the Fermi paradox also includes the following weird explanation:
technological singularity… Theoretical civilizations of this sort may have advanced drastically enough to render communication impossible. The intelligences of a post-singularity civilization might require more information exchange than is possible through interstellar communication, for example.
Hang on, we’ve known this for years, right? This is not new information.