I would also argue that it could be possible to take an algorithm for a paperclip maximizer and give it to a human to calculate by hand, with the algorithm not having the same goals as the human, the human not knowing the algorithm's goals, and the algorithm being able to solve problems that the lone human can't. More so with many humans.
While I am familiar with the Chinese room argument, I don’t see that scenario as realistic.
the algorithm being able to solve problems that the lone human can’t.
I don’t see how an AI could run efficiently enough to be a threat and not be understood by the human running it.* While it’s about writing, this mentions Vinge’s Law: “if you know exactly what a very smart agent would do, you must be at least that smart yourself.” This doesn’t have to hold for an algorithm, but it seems hard enough to circumvent that I don’t see how an AI could go FOOM in someone’s brain, on paper, or in a set of dominoes.
*I could see this problem potentially existing with an algorithm for programming, or with an elaborate mnemonic device made by an AI or a very smart person, which contains, say, compressed source code for an AI that the human who attempts to memorize it can remember but not understand the workings of. Even if I memorized “uryybjbeyq”, I might not recognize what that’s rot13 for, or that it has any meaning at all.
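To make the rot13 point concrete, here is a minimal Python sketch (the memorized string and the use of the standard-library codecs module are just for illustration); the string looks meaningless until the decoding rule is applied:

```python
import codecs

# The string a person might memorize without understanding it.
memorized = "uryybjbeyq"

# rot13 shifts every letter 13 places; only someone who knows (and applies)
# the rule sees the hidden content.
decoded = codecs.decode(memorized, "rot_13")
print(decoded)  # -> helloworld
```

Memorizing the surface string gives no insight into what it encodes unless you also know, and actually run, the transformation.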
If you have an AI, you have an upper bound on the system requirements.
This sounds like your argument doesn’t distinguish between intelligence and artificial intelligence.
My point is that it’s hard to get a lower bound.