Isn’t using a laptop as a metaphor exactly an example of “Most often reasoning by analogy”?
Precisely correct, thank you for catching that.
I think one of the points being made was that because we have this uncertainty about how a superintelligence would work, we can’t accurately predict anything without more data.
Also a correct reading of my intent. The “aquarium” idea is basically what I have advocated and would continue to advocate for: keep developing AGI technology within the confines of a safe experimental setup. By learning more about the types of programs that can perform limited general-intelligence tasks in sandbox environments, we learn more about their various strengths and limitations in context, and from that experience we can construct suitable safeguards for larger deployments.