“Isn’t using a laptop as a metaphor exactly an example…”
The sentence could have stopped there. If someone makes a claim like “∀ x, p(x)”, it is entirely valid to disprove it via “¬p(y)”, and it is not valid to complain that the first proposition is general but the second is specific.
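To make that refutation pattern concrete, here is a minimal sketch in Lean (the names α, p, and y are placeholders, not anything from this thread): a single counterexample is all it takes to refute a universal claim.

```lean
-- A single counterexample ¬p(y) refutes the universal claim ∀x, p(x).
-- `p` is an arbitrary predicate over an arbitrary type `α`,
-- and `y` is the witness at which `p` fails.
example {α : Type} (p : α → Prop) (y : α) (h : ¬ p y) : ¬ (∀ x, p x) :=
  fun hAll => h (hAll y)
```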
Moving from the general to the specific myself, that laptop example is perfect. It is utterly baffling to me that people can insist we will be able to reason reliably about the safety of AGI when we have yet to so much as produce a consumer operating system that is safe from remote exploits or crashes. Are Microsoft employees uniquely incapable of “fully general intelligent behavior”? Are the OpenSSL developers especially imperfectly “capable of understanding the logical implications of models”?
If you argue that it is “nonsense” to believe that humans won’t naturally understand the complex things they devise, then that argument fails to predict the present, much less the future. If you argue that it is “nonsense” to believe that humans can’t eventually understand the complex things they devise after sufficient time and effort, then that’s more defensible, but that argument is pro-FAI-research, not anti-.
Bugs in computer operating systems do not make the system do arbitrary things in the absence of someone consciously exploiting them to that end. If Windows were a metaphor for unfriendly AI, then it would be possible for AIs to halt in situations where they were intended to work, but they would only turn hostile if someone intentionally programmed them to become hostile. Unfriendly AI as discussed here is not a matter of someone intentionally programming the AI to become hostile.