(N.B. I just discovered that I had not, in fact, downvoted the comment that began this discussion. I must have had it confused with another.)
Like Eliezer, I generally think of intelligence and optimization as describing the same phenomenon. So when I saw this exchange:
If a thousand species in nature with a thousand different abilities were to cooperate, would they equal the capabilities of a human? If not, what else is missing?
Yes, if there were a sufficiently powerful optimization process controlling the form of their cooperation.
I read your reply as meaning approximately “1000 small cognitive modules are a really powerful optimization process if and only if their cooperation is controlled by a sufficiently powerful optimization process.”
To answer the question you asked here, I thought the comment was worthy of a downvote (though apparently I did not actually follow through) because it was circular in a non-obvious way that contributed only confusion.
I am probably a much more ruthless downvoter than many other LessWrong posters; my downvotes indicate a desire to see “fewer things like this” with a very low threshold.
I read your reply as meaning approximately “1000 small cognitive modules are a really powerful optimization process if and only if their cooperation is controlled by a sufficiently powerful optimization process.”
Thank you for explaining this, and showing that I was operating under the illusion of transparency.
My intended meaning was nothing so circular. The optimization process I was talking about was the one that would have built the machine, not something that would be “controlling” it from inside. I thought (mistakenly, it appears) that this would be clear from the fact that I said “controlling the form of their cooperation” rather than “controlling their cooperation”. My comment was really no different from thomblake’s or wedrifid’s. I was saying, in effect, “yes, on the assumption that the individual components can be made to cooperate, I do believe that it is possible to assemble them in so clever a manner that their cooperation would produce effective intelligence.”
The “cleverness” referred to in the previous sentence is that of whatever created the machine (which could be actual human programmers, or, theoretically, something else like natural selection) and not the “effective intelligence” of the machine itself. (Think of a programmer, not a homunculus.) Note that I can easily envision the process of implementing such “cleverness” itself not looking particularly clever; perhaps the design would be arrived at after many iterations of trial and error, with simpler devices of similar form. (Natural selection being the extreme case of this kind of process.) So I’m definitely not thinking magically here, at least not in any obvious way (such as would warrant a downvote, for example).
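To make the distinction concrete, here is a deliberately simple toy sketch (the modules, the target behaviour, and the search loop are all invented for illustration, and stand in for whatever real design process one has in mind): a blind random-mutation hill climber plays the role of the “unclever” designer, searching over ways of wiring together a handful of fixed micro-modules until their cooperation computes something none of them computes alone. The cleverness belongs to the process that built the assembly, not to anything sitting inside it.

    import random

    # Toy illustration: a deliberately "dumb" optimization process (random-mutation
    # hill climbing, standing in for trial and error or natural selection) searches
    # over ways of wiring together a fixed set of tiny modules. The cleverness lives
    # in the loop that builds the machine, not inside the machine.

    # Hypothetical micro-modules: each maps a number to a number.
    MODULES = [
        lambda x: x + 1,
        lambda x: x - 1,
        lambda x: x * 2,
        lambda x: -x,
    ]

    def run_pipeline(wiring, x):
        """Apply the selected modules in sequence -- the 'form of their cooperation'."""
        for i in wiring:
            x = MODULES[i](x)
        return x

    def fitness(wiring, target=lambda x: 4 * x + 2):
        """How closely does the assembled machine approximate the target behaviour?"""
        inputs = range(-5, 6)
        return -sum(abs(run_pipeline(wiring, x) - target(x)) for x in inputs)

    def hill_climb(length=6, steps=5000):
        """Blind trial and error: mutate one wire at a time, keep non-worse variants."""
        wiring = [random.randrange(len(MODULES)) for _ in range(length)]
        best = fitness(wiring)
        for _ in range(steps):
            candidate = wiring[:]
            candidate[random.randrange(length)] = random.randrange(len(MODULES))
            score = fitness(candidate)
            if score >= best:  # nothing clever happens per step
                wiring, best = candidate, score
        return wiring, best

    if __name__ == "__main__":
        wiring, score = hill_climb()
        print("best wiring:", wiring, "score:", score)

No single module can compute 4x + 2, yet the search routinely stumbles onto a wiring (e.g. double, double, then adjust by constants) whose cooperation does; the point is only that the designing process can be as mindless as this loop while the designed arrangement does the interesting work.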
I can now see how my words weren’t as transparent as I thought, and thank you for drawing this to my attention; at the same time, I hope you’ve updated downward your prior that a randomly selected comment of mine stems from a lack of understanding of basic concepts.
Consider me updated. Thank you for taking my brief and relatively unhelpful comments seriously, and for explaining your intended point. While I disagree that the swiftest route to AGI will involve lots of small modules, it’s a complicated topic with many areas of high uncertainty; I suspect you are at least as informed about the topic as I am, and will be assigning your opinions more credence in the future.
Hooray for polite, respectful, informative disagreements on LW!
It’s why I keep coming back even after getting mad at the place.
(That, and the fact that this is one of very few places I know where people reliably get easy questions right.)