I think humans don’t use their full computational capacity. Why expect an AGI to?
In what way do you think AGI will have a better algorithm than humans? What sort of differences do you have in mind?
It doesn’t really matter whether the AI uses its full computational capacity. If the AI has 100,000 times more capacity (which is again a conservative lower bound) and it only uses 1% of it, it will still be 1,000 times as smart as a human using their full capacity.
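The arithmetic, spelled out (a toy calculation using only the figures already claimed above):

```python
capacity_ratio = 100_000  # claimed conservative lower bound vs. a human
utilization = 0.01        # suppose the AI only taps 1% of that capacity
effective_edge = capacity_ratio * utilization
print(effective_edge)     # 1000.0 -> still a 1,000x edge over a human's full capacity
```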
The AGI’s algorithm will be better because it has instant access to more facts than any human has time to memorize, and it will not have all of the biases that humans have. The entire point of the Sequences is to list dozens of ways that the human brain reliably fails.
If the advantage is speed, then in one year an AI that thinks 10,000x faster could be as productive as a person who lives for 10,000 years, or as productive as one year each from 10,000 people. But a person could live to 10,000 and never be very productive. That’s easy, right? They get stuck, unhappy, bored, superstitious … all kinds of things can go wrong with their thinking. If AGI only has a speed advantage, that won’t make it immune to dishonesty, wishful thinking, etc. Right?
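The same back-of-the-envelope arithmetic for the speed claim:

```python
speedup = 10_000                # AI thinks 10,000x faster than a human
subjective_years = speedup * 1  # subjective thinking time per calendar year
print(subjective_years)         # 10,000 years, or one year each from 10,000 people
# Note: this multiplies time, not quality; flawed thinking just fails faster.
```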
Humans have fast access to facts via Google, databases, and other tools, so memorizing isn’t crucial.
I thought they talked about things like biases. Couldn’t an AGI be biased, too?
For fun ways in which NN classifiers reliably fail, google up adversarial inputs :-)
Example
Rubbish in, rubbish out—right?
No, not quite. It’s more like “we can poke around inside this NN and craft inputs that look like one thing to a human and a completely different thing to the NN, and the NN is highly confident in its wrong answer”.
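To make that concrete, here is a minimal sketch of the fast gradient sign method (FGSM), one standard way such inputs are crafted. The PyTorch model, image size, and epsilon below are illustrative assumptions, not anything from this thread:

```python
import torch
import torch.nn as nn

def fgsm_attack(model, x, label, epsilon=0.03):
    """Nudge every pixel of x by at most epsilon so as to raise the model's loss."""
    x = x.clone().detach().requires_grad_(True)
    loss = nn.functional.cross_entropy(model(x), label)
    loss.backward()
    # Step in the sign of the gradient: imperceptible to a human,
    # but aimed exactly where the model's decision is most sensitive.
    x_adv = x + epsilon * x.grad.sign()
    return x_adv.clamp(0, 1).detach()

# Toy usage with an untrained stand-in model, just so the sketch runs;
# against a real trained classifier this routinely flips the prediction.
model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))
x = torch.rand(1, 1, 28, 28)     # pretend 28x28 grayscale image
label = model(x).argmax(dim=1)   # the class the model currently assigns
x_adv = fgsm_attack(model, x, label)

print("clean prediction:      ", model(x).argmax(dim=1).item())
print("adversarial prediction:", model(x_adv).argmax(dim=1).item())
print("max pixel change:      ", (x_adv - x).abs().max().item())  # <= epsilon
```

The point of the sign trick is that each pixel moves only a tiny, bounded amount, so the image looks unchanged to a person, while the sum of all those tiny moves pushes the network across a decision boundary it is nonetheless very confident about.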