Isn’t using a laptop as a metaphor exactly an example of “Most often reasoning by analogy”?
I think one of the points trying to be made was that because we have this uncertainty about how a superintelligence would work, we can’t accurately predict anything without more data.
So maybe the next step in AI should be to create an “Aquarium,” a self-contained network with no actuators and no way to access the internet, but enough processing power to support a superintelligence. We then observe what that superintelligence does in the aquarium before deciding how to resolve further uncertainties.
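To make the shape of that proposal a bit more concrete, here is a minimal sketch of the “no actuators, no way to access the internet” part at toy scale. It assumes a Linux host with util-linux’s `unshare` and unprivileged user namespaces enabled; everything else (the workload, the review-by-log step) is a placeholder, not a claim about how an actual superintelligence would be contained.

```python
# Minimal sketch of an "aquarium" run at toy scale: the workload executes in a
# fresh, empty network namespace (no route to the internet) and its only
# "actuator" is a captured stdout log that humans inspect afterwards.
# Assumes Linux with util-linux's `unshare` and unprivileged user namespaces.
import subprocess

def run_in_aquarium(command: list[str], timeout_s: int = 60) -> str:
    """Run `command` with no network access and return its captured output."""
    result = subprocess.run(
        # --user/--map-root-user lets an unprivileged caller create the
        # namespaces; --net gives the child an empty network namespace.
        ["unshare", "--user", "--map-root-user", "--net", "--"] + command,
        capture_output=True,
        text=True,
        timeout=timeout_s,  # hard wall-clock limit on the observation run
    )
    return result.stdout

if __name__ == "__main__":
    # Observe what the sandboxed program "does" (here: just what it prints).
    print(run_in_aquarium(["python3", "-c", "print('hello from the tank')"]))
```

This only illustrates the isolation plumbing, not the harder question of what to observe or how to interpret it.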
There is a difference between argument by analogy and using an example. The relevant difference here is that examples illustrate arguments that are made separately, like how calef spent paragraphs 4 and 5 restating the arguments sans laptop.
If anything, the argument from analogy here is in the comparison between human working memory and computer RAM and a nebulous “size in mindspace,” because it is used as an important part of the argument but is not supported separately. But don’t fall for the fallacy fallacy—just because something isn’t modus ponens doesn’t mean it can’t be Bayesian evidence.
Isn’t using a laptop as a metaphor exactly an example
The sentence could have stopped there. If someone makes a claim like “∀ x, p(x)”, it is entirely valid to disprove it via “~p(y)”, and it is not valid to complain that the first proposition is general but the second is specific.
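For completeness, that inference is easy to make formal. Here is one way to state it in Lean, as a sketch, with `α`, `p`, and `y` standing in for whatever domain, predicate, and counterexample are at issue:

```lean
-- A single counterexample refutes a universal claim:
-- given ¬ p y, conclude ¬ ∀ x, p x.
example {α : Type} (p : α → Prop) (y : α) (h : ¬ p y) : ¬ ∀ x, p x :=
  fun hall => h (hall y)
```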
Moving from the general to the specific myself, that laptop example is perfect. It is utterly baffling to me that people can insist we will be able to reason reliably about the safety of AGI when we have yet to do so much as produce a consumer operating system that is safe from remote exploits or crashes. Are Microsoft employees uniquely incapable of “fully general intelligent behavior”? Are the OpenSSL developers especially imperfectly “capable of understanding the logical implications of models”?
If you argue that it is “nonsense” to believe that humans won’t naturally understand the complex things they devise, then that argument fails to predict the present, much less the future. If you argue that it is “nonsense” to believe that humans can’t eventually understand the complex things they devise after sufficient time and effort, then that’s more defensible, but that argument is pro-FAI-research, not anti-.
Bugs in computer operating systems do not make them do arbitrary things in the absence of someone consciously using an exploit to make them do arbitrary things. If Windows were a metaphor for unfriendly AI, then it would be possible for AIs to halt in situations where they were intended to work, but they would only turn hostile if someone intentionally programmed them to become hostile. Unfriendly AI as discussed here is not someone intentionally programming the AI to become hostile.
Isn’t using a laptop as a metaphor exactly an example of “Most often reasoning by analogy”?
Precisely correct, thank you for catching that.
I think one of the points trying to be made was that because we have this uncertainty about how a superintelligence would work, we can’t accurately predict anything without more data.
Also a correct reading of my intent. The “aquarium” idea is basically what I have advocated for and would continue to advocate for: continue developing AGI technology within the confines of a safe experimental setup. By learning more about the types of programs which can perform limited general intelligence tasks in sandbox environments, we learn more about their various strengths and limitations in context, and from that experience we can construct suitable safeguards for larger deployments.
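As a toy illustration of that “observe first, act on nothing” loop (every name here is a hypothetical placeholder, not any real system’s API):

```python
# Toy sketch of the sandbox-observation loop: the agent proposes actions,
# the harness records them, and nothing is ever forwarded to an actuator.
# DummyAgent and propose_action are hypothetical placeholders.

class DummyAgent:
    def propose_action(self, observation: str) -> str:
        # Stand-in policy: propose a harmless inspection of the input.
        return f"inspect({observation})"

def observe_in_sandbox(agent, observations):
    """Collect what the agent would do without executing any of it."""
    return [(obs, agent.propose_action(obs)) for obs in observations]

if __name__ == "__main__":
    for obs, action in observe_in_sandbox(DummyAgent(), ["sensor_a", "sensor_b"]):
        print(f"{obs}: proposed {action!r} (not executed)")
```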