I read your reply as meaning approximately “1000 small cognitive modules are a really powerful optimization process if and only if their cooperation is controlled by a sufficiently powerful optimization process.”
Thank you for explaining this, and showing that I was operating under the illusion of transparency.
My intended meaning was nothing so circular. The optimization process I was talking about was the one that would have built the machine, not something that would be “controlling” it from inside. I thought (mistakenly, it appears) that this would be clear from the fact that I said “controlling the form of their cooperation” rather than “controlling their cooperation”. My comment was really nothing different from thomblake’s or wedrifid’s. I was saying, in effect, “yes, on the assumption that the individual components can be made to cooperate, I do believe that it is possible to assemble them in so clever a manner that their cooperation would produce effective intelligence.”
The “cleverness” referred to in the previous sentence is that of whatever created the machine (which could be actual human programmers, or, theoretically, something else like natural selection) and not the “effective intelligence” of the machine itself. (Think of a programmer, not a homunculus.) Note that I can easily envision the process of implementing such “cleverness” itself not looking particularly clever—perhaps the design would be arrived at after many iterations of trial-and-error, with simpler devices of similar form. (Natural selection being the extreme case of this kind of process.) So I’m definitely not thinking magically here, at least not in any obvious way (such as would warrant a downvote, for example).
I can now see how my words weren’t as transparent as I thought, and thank you for drawing this to my attention; at the same time, I hope you’ve updated your prior that a randomly selected comment of mine results from a lack of understanding of basic concepts.
Consider me updated. Thank you for taking my brief and relatively unhelpful comments seriously, and for explaining your intended point. While I disagree that the swiftest route to AGI will involve lots of small modules, it’s a complicated topic with many areas of high uncertainty; I suspect you are at least as informed about it as I am, and I will be assigning your opinions more credence in the future.
Hooray for polite, respectful, informative disagreements on LW!
It’s why I keep coming back even after getting mad at the place.
(That, and the fact that this is one of very few places I know where people reliably get easy questions right.)