Well, it seems we disagree. Honestly, I see the problem of AGI as the fairly concrete one of assembling an appropriate collection of thousands-to-millions of “narrow AI” subcomponents.
Perhaps another way to put it would be that I suspect the Kolmogorov complexity of any AGI is so high that it’s unlikely that the source code could be stored in a small number of human brains (at least the way the latter currently work).
EDIT: When I say “I suspect” here, of course I mean “my impression is”. I don’t mean to suggest that this thought hasn’t occurred to the people at SIAI (though it might be nice if they could explain why they disagree).
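To make the complexity claim a bit more concrete: Kolmogorov complexity is uncomputable, but any lossless compressor gives an upper bound on it, which is the standard way to get an empirical handle on such claims. A minimal Python sketch, with throwaway byte strings standing in for an actual artifact:

```python
import os
import zlib

# K(x) is uncomputable, but a lossless compressor gives an upper bound:
# K(x) <= 8 * len(compress(x)) + O(1).
def complexity_upper_bound_bits(data: bytes) -> int:
    return 8 * len(zlib.compress(data, 9))

regular = b"narrow AI subcomponent " * 10_000  # highly redundant
random_ = os.urandom(len(regular))             # no structure to exploit

print(complexity_upper_bound_bits(regular))    # small: redundancy compresses away
print(complexity_upper_bound_bits(random_))    # close to 8 * len(random_)
```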
The portion of the genome coding for brain architecture is a lot smaller than Windows 7, bit-wise.
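A back-of-envelope version of that comparison. The genome figure is the usual ~3.2 billion base pairs; the Windows 7 install size is a rough ballpark, and both are order-of-magnitude estimates rather than measurements:

```python
import math

BASE_PAIRS = 3.2e9             # whole human genome, approximate
BITS_PER_BASE = math.log2(4)   # 4 possible bases -> 2 bits each

genome_gb = BASE_PAIRS * BITS_PER_BASE / 8 / 1e9
windows7_gb = 16               # ballpark installed size of Windows 7

print(f"whole genome, uncompressed: ~{genome_gb:.1f} GB")  # ~0.8 GB
print(f"Windows 7 install:          ~{windows7_gb} GB")
# Only a fraction of the genome is brain-specific, and raw sequence is
# highly compressible, so the gap is wider still.
```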
An oddly somewhat relevant article on the information needed to specify the brain. It is a biologist tearing a strip off Kurzweil for suggesting that we’ll be able to reverse-engineer the human brain in a decade by looking at the genome.
P.Z. is misreading a quote from a secondhand report. Kurzweil is not talking about reading out the genome and simulating the brain from that, but about using improvements in neuroimaging to inform input-output models of brain regions. The genome point is just an indicator of the limited number of component types involved, which helps to constrain estimates of difficulty.
Edit: Kurzweil has now replied, more or less along the lines above.
Kurzweil’s analysis is simply wrong. Here’s the gist of my refutation of it:
“So, who is right? Does the brain’s design fit into the genome—or not?
The detailed form of proteins arises from a combination of the nucleotide sequence that specifies them, the cytoplasmic environment in which gene expression takes place, and the laws of physics.
We can safely ignore the contribution of cytoplasmic inheritance—however, the contribution of the laws of physics is harder to discount. At first sight, it may seem simply absurd to argue that the laws of physics contain design information relating to the construction of the human brain. However, there is a well-established mechanism by which physical law may do just that—an idea known as the anthropic principle. This argues that the universe we observe must necessarily permit the emergence of intelligent agents. If that involves coding the design of the brains of intelligent agents into the laws of physics, then so be it. There are plenty of apparently-arbitrary constants in physics where such information could conceivably be encoded: the fine-structure constant, the cosmological constant, Planck’s constant—and so on.
At the moment, it is not even possible to bound the quantity of brain-design information so encoded. When we get machine intelligence, we will have an independent estimate of the complexity of the design required to produce an intelligent agent. Alternatively, when we know what the laws of physics are, we may be able to bound the quantity of information encoded by them. However, today neither option is available to us.”
http://alife.co.uk/essays/how_long_before_superintelligence/
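For what it’s worth, the second option in the passage above can at least be sketched numerically. Assuming, hypothetically, that physics has a finite list of free constants, each physically meaningful to finite precision, the information they could encode is bounded; the parameter count and precision below are illustrative guesses, not established figures:

```python
import math

def constant_capacity_bits(n_constants: int, digits_each: int) -> float:
    # n constants at d significant decimal digits can encode at most
    # n * d * log2(10) bits between them.
    return n_constants * digits_each * math.log2(10)

# ~26 dimensionless Standard Model parameters, generously 15 digits each:
print(f"~{constant_capacity_bits(26, 15):.0f} bits")  # on the order of 10^3 bits
```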
Wired really messed up the flow of the talk in that case. Is it based on a Singularity Summit talk?
I agree with your analysis, but I also understand where PZ is coming from. You write above that the portion of the genome coding for the brain is small. PZ replies that the small part of the genome you are referring to does not by itself explain the brain; you also need to understand the decoding algorithm—itself scattered through the whole genome and perhaps also the zygotic “epigenome”. You might perhaps clarify that what you were talking about with “small portion of the genome” was the Kolmogorov complexity, so you were already including the decoding algorithm in your estimate.
The problem is, how do you get the point through to PZ and other biologists who come at the question from an evo-devo PoV? I think that someone ought to write a comment correcting PZ, but in order to do so, the commenter would have to speak the languages of three fields—neuroscience, evo-devo, and information theory. And understand all three well enough to unpack the jargon for laymen without thereby losing credibility with people who do know one or more of the three fields.
Why bother? PZ’s rather misguided rant isn’t doing very much damage. Just ignore him, I figure.
Maybe it is a slow news day. PZ’s rant got Slashdotted:
http://science.slashdot.org/story/10/08/17/1536233/Ray-Kurzweil-Does-Not-Understand-the-Brain
PZ has stooped pretty low with the publicity recently:
http://scienceblogs.com/pharyngula/2010/08/the_eva_mendes_sex_tape.php
Maybe he was trolling with his Kurzweil rant. He does have a history with this subject matter, though:
http://scienceblogs.com/pharyngula/2009/02/singularly_silly_singularity.php
Obviously the genome alone doesn’t build a brain. I wonder how many “bits” I should add on for the normal environment that’s also required (in terms of how much additional complexity is needed to get the first artificial mind that can learn about the world given additional sensory-like inputs). Probably not too many.
Thanks, this is useful to know. Will revise beliefs accordingly.
What do you think you know and how do you think you know it? Let’s say you have a thousand narrow AI subcomponents. (Millions = implausible due to genome size, as Carl Shulman points out.) Then what happens, besides “then a miracle occurs”?
What happens is that the machine has so many different abilities (playing chess and walking and making airline reservations and...) that its cumulative effect on its environment is comparable to a human’s or greater; in contrast to the previous version with 900 components, which was only capable of responding to the environment on the level of a chess-playing, web-searching squirrel.
This view arises from what I understand about the “modular” nature of the human brain: we think we’re a single entity that is “flexible enough” to think about lots of different things, but in reality our brains consist of a whole bunch of highly specialized “modules”, each able to do some single specific thing.
Now, to head off the “Fly Q” objection, let me point out that I’m not at all suggesting that an AGI has to be designed like a human brain. Instead, I’m “arguing” (expressing my perception) that the human brain’s general intelligence isn’t a miracle: intelligence really is what inevitably happens when you string zillions of neurons together in response to some optimization pressure. And the “zillions” part is crucial.
(Whoever downvoted the grandparent was being needlessly harsh. Why in the world should I self-censor here? I’m just expressing my epistemic state, and I’ve even made it clear that I don’t believe I have information that SIAI folks don’t, or am being more rational than they are.)
If a thousand species in nature with a thousand different abilities were to cooperate, would they equal the capabilities of a human? If not, what else is missing?
Tough problem. My first reaction is ‘yes’, but I think that might be because we’re assuming cooperation, which might be letting more in the door than you want.
Exactly the thought I had. Cooperation is kind of a big deal.
Yes, if there were a sufficiently powerful optimization process controlling the form of their cooperation.
I am highly confused about the parent having been voted down, to the point where I am in a state of genuine curiosity about what went through the voter’s mind as he or she saw it.
Eliezer asked whether a thousand different animals cooperating could have the power of a human. I answered:
Yes, if there were a sufficiently powerful optimization process controlling the form of their cooperation.
And then someone came along, read this, and thought... what? Was it:
“No, you idiot, obviously no optimization process could be that powerful.” ?
“There you go: ‘sufficiently powerful optimization process’ is equivalent to ‘magic happens’. That’s so obvious that I’m not going to waste my time pointing it out; instead, I’m just going to lower your status with a downvote.” ?
“Clearly you didn’t understand what Eliezer was asking. You’re in over your head, and shouldn’t be discussing this topic.” ?
Something else?
Do you expect the conglomerate entity to be able to read or to be able to learn how to? Considering Eliezer can quite happily pick many many things like archer fish (ability to shoot water to take out flying insects) and chameleons (ability to control eyes independently), I’m not sure how they all add up to reading.
The optimization process is the part where the intelligence lives.
Natural selection is an optimization process, but it isn’t intelligent.
Also, the point here is AI—one is allowed to assume the use of intelligence in shaping the cooperation. That’s not the same as using intelligence as a black box in describing the nature of it.
If you were the downvoter, might I suggest giving me the benefit of the doubt that I’m up to speed on these kinds of subtleties? (I.e. if I make a comment that sounds dumb to you, think about it a little more before downvoting?)
You were at +1 when I downvoted, so I’m not alone.
Natural selection is a very bad optimization process, and so it’s quite unintelligent relative to any standards we might have as humans.
Now it’s my turn to downvote, on the grounds that you didn’t understand my comment. I agree that natural selection is unintelligent—that was my whole point! It was intended as a counterexample to your implied assertion that an appeal to an optimization process is an appeal to intelligence.
EDIT: I suppose this confirms on a small scale what had become apparent in the larger discussion here about SIAI’s public relations: people really do have more trouble noticing intellectual competence than I tend to realize.
(N.B. I just discovered that I had not, in fact, downvoted the comment that began this discussion. I must have had it confused with another.)
Like Eliezer, I generally think of intelligence and optimization as describing the same phenomenon. So when I saw this exchange:
I read your reply as meaning approximately “1000 small cognitive modules are a really powerful optimization process if and only if their cooperation is controlled by a sufficiently powerful optimization process.”
To answer the question you asked here, I thought the comment was worthy of a downvote (though apparently I did not actually follow through) because it was circular in a non-obvious way that contributed only confusion.
I am probably a much more ruthless downvoter than many other LessWrong posters; my downvotes indicate a desire to see “fewer things like this” with a very low threshold.
Thank you for explaining this, and showing that I was operating under the illusion of transparency.
My intended meaning was nothing so circular. The optimization process I was talking about was the one that would have built the machine, not something that would be “controlling” it from inside. I thought (mistakenly, it appears) that this would be clear from the fact that I said “controlling the form of their cooperation” rather than “controlling their cooperation”. My comment was really nothing different from thomblake’s or wedrifid’s. I was saying, in effect, “yes, on the assumption that the individual components can be made to cooperate, I do believe that it is possible to assemble them in so clever a manner that their cooperation would produce effective intelligence.”
The “cleverness” referred to in the previous sentence is that of whatever created the machine (which could be actual human programmers, or, theoretically, something else like natural selection) and not the “effective intelligence” of the machine itself. (Think of a programmer, not a homunculus.) Note that I can easily envision the process of implementing such “cleverness” itself not looking particularly clever—perhaps the design would be arrived at after many iterations of trial-and-error, with simpler devices of similar form. (Natural selection being the extreme case of this kind of process.) So I’m definitely not thinking magically here, at least not in any obvious way (such as would warrant a downvote, for example).
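To illustrate that last point with a toy sketch: the search loop below is as unintelligent as it gets (blind mutation plus keep-if-no-worse), yet it settles on a wiring of three fixed threshold units that computes XOR, which none of the units can compute alone. The network shape and mutation scheme are arbitrary choices made for the sketch, not a claim about how AGI would actually be built:

```python
import random

CASES = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]  # XOR

def network(w, x):
    # Three fixed threshold units; only their wiring (the weights) evolves.
    h1 = 1 if w[0] * x[0] + w[1] * x[1] + w[2] > 0 else 0
    h2 = 1 if w[3] * x[0] + w[4] * x[1] + w[5] > 0 else 0
    return 1 if w[6] * h1 + w[7] * h2 + w[8] > 0 else 0

def errors(w):
    return sum(network(w, x) != y for x, y in CASES)

def hill_climb(steps=200_000):
    # Deliberately dumb optimizer: mutate at random, keep if no worse.
    w = [random.uniform(-1, 1) for _ in range(9)]
    for _ in range(steps):
        if errors(w) == 0:
            return w
        candidate = [wi + random.gauss(0, 0.5) for wi in w]
        if errors(candidate) <= errors(w):
            w = candidate
    return None  # rare for a problem this tiny; rerun if it happens

w = hill_climb()
print("XOR wiring found:", w and [round(wi, 2) for wi in w])
```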
I can now see how my words weren’t as transparent as I thought, and thank you for drawing this to my attention; at the same time, I hope you’ve updated your prior that a randomly selected comment of mine results from a lack of understanding of basic concepts.
Consider me updated. Thank you for taking my brief and relatively unhelpful comments seriously, and for explaining your intended point. While I disagree that the swiftest route to AGI will involve lots of small modules, it’s a complicated topic with many areas of high uncertainty; I suspect you are at least as informed about the topic as I am, and will be assigning your opinions more credence in the future.
Hooray for polite, respectful, informative disagreements on LW!
It’s why I keep coming back even after getting mad at the place.
(That, and the fact that this is one of very few places I know where people reliably get easy questions right.)
Downvoted for retaliatory downvoting; voted everything else up toward 0.
Downvoted the parent and upvoted the grandparent. “On the grounds that you didn’t understand my comment” is a valid reason for downvoting, and based on a clearly correct observation.
I do agree that komponisto would have been better served by leaving off mention of voting altogether. Just “You didn’t understand my comment. …” would have conveyed an appropriate level of assertiveness to make the point. That would have avoided sending a signal of insecurity and denied others the invitation to judge.
Voted down all comments that talk about voting, for being too much about status rather than substance.
Vote my comment towards −1 for consistency.
Status matters; it’s a basic human desideratum, like food and sex (in addition to being instrumentally useful in various ways). There seems to be a notion among some around here that concern with status is itself inherently irrational or bad in some way. But this is as wrong as saying that concern with money or good-tasting food is inherently irrational or bad. Yes, we don’t want the pursuit of status to interfere with our truth-detecting abilities; but the same goes for the pursuit of food, money, or sex, and no one thinks it’s wrong for aspiring rationalists to pursue those things. Still less is it considered bad to discuss them.
Comments like the parent are disingenuous. If we didn’t want users to think about status, we wouldn’t have adopted a karma system in the first place. A norm of forbidding the discussion of voting creates the wrong incentives: it encourages people to make aggressive status moves against others (downvoting) without explaining themselves. If a downvote is discussed, the person being targeted at least has better opportunity to gain information, rather than simply feeling attacked. They may learn whether their comment was actually stupid, or if instead the downvoter was being stupid. When I vote comments down I usually make a comment explaining why—certainly if I’m voting from 0 to −1. (Exceptions for obvious cases.)
I really don’t appreciate what you’ve done here. A little while ago I considered removing the edit from my original comment that questioned the downvote, but decided against it to preserve the context of the thread. Had I done so I wouldn’t now be suffering the stigma of a comment at −1.
Then you must be making a lot of exceptions, or you don’t downvote very much. I find that “I want to see fewer comments like this one” is true of about 1/3 of the comments, though I don’t downvote quite that much anymore since there is a cap now. Could you imagine if every 4th comment in ‘recent comments’ were taken up by my explanations of why I downvoted a comment? And then what if people didn’t like my explanations and were following the same norm—we’d quickly become a site where most comments are explaining voting behavior.
A bit of a slippery slope argument, but I think it is justified—I can make it more rigorous if need be.
Indeed I don’t downvote very much; although probably more than you’re thinking, since on reflection I don’t typically explain my votes if they don’t affect the sign of the comment’s score.
I think you downvote too much. My perception is that, other than the rapid downvoting of trolls and inane comments, the quality of this site is the result mainly of the incentives created by upvoting, rather than downvoting.
Yes, too much explanation would also be bad; but jimrandomh apparently wants none, and I vigorously oppose that. The right to inquire about a downvote should not be trampled upon!
I have no problem with your right to inquire about a downvote; I will continue to exercise my right to downvote such requests without explanation.
I consider that a contradiction.
From the recent welcome post (emphasis added):
However, it can feel really irritating to get downvoted, especially if one doesn’t know why. It happens to all of us sometimes, and it’s perfectly acceptable to ask for an explanation.
Perhaps we have different ideas of what ‘rights’ and ‘trampling upon’ rights entail.
You have the right to comment about reasons for downvoting—no one will stop you and armed guards will not show up and beat you for it. I think it is a good thing that you have this right.
If I think we would be better off with fewer comments like that, I’m fully within my rights to downvote the comment; similarly, no one will stop me and armed guards will not show up and beat me for it. I think it is a good thing that I have this right.
I’m not sure in what sense you think there is a contradiction between those two things, or if we are just talking past each other.
I think you should be permitted to downvote as you please, but do note that literal armed guards are not necessary for there to be real problems with the protection of rights.
My implicit premise was that 1) violent people or 2) a person actually preventing your action are severally necessary for there to be real problems with the protection of rights. Is there a problem with that version?
In such a context, when someone speaks of the “right” to do X, that means the ability to do X without being punished (in whatever way is being discussed). Here, downvoting is the analogue of armed guards beating one up.
Responding by pointing out that a yet harsher form of punishment is not being imposed is not a legitimate move, IMHO.
*reads through subthread*
You are all talking about this topic, and yet you regard me as weird??? That’s like the extrusion die asserting that the metal wire has a grey spectral component!
(if it could communicate, I mean)
It is unfortunate that I can only vote you up here once.
Ah, I could see how you would see that as a contradiction, then.
In that case, for purposes of this discussion, I withdraw my support for your right to do that.
And since I intend to downvote any comment or post for any reason I see fit at the time, it follows that no one has the right to post any comment or post of any sort, by your definition, since they can reasonably expect to be ‘punished’ for it.
For the purposes of other discussions, I do not accept your definition of ‘right’, nor do I accept your framing of a downvote as a ‘punishment’ in the relevant sense. I will continue to do my very best to ensure that only the highest-quality content is shown to new users, and if you consider that ‘punishment’, that is irrelevant to me.
I won’t bother trying any further to convince you here; but in general I will continue to ask that people behave in a less hostile manner.
Wouldn’t that analogue better apply to publicly and personally insulting the poster, targeting your verbal abuse at the very attributes that this community holds dear, deleting posts and threatening banning? Although I suppose your analogous scale could be extended in scope to include ‘imprisonment and torture without trial’.
On the topic of the immediate context, I do hope that you consider thomblake’s position and make an exception to your usual policy in his case. I imagine it would be extremely frustrating for you to treat others with what you consider to be respect and courtesy when you know that the recipient does not grant you the same right. It would jar with my preference for symmetry if I thought you didn’t feel free to implement a downvote-friendly voting policy at least on a case-by-case basis. I wouldn’t consider you to be inconsistent, and definitely not hypocritical. I would consider you sane.
The proper reason to request clarification is in order to not make the mistake again—NOT as a defensive measure against some kind of imagined slight on your social status. Yes social status is a part of the reason for the karma system—but it is not something you have an inherent right to. Otherwise there would be no point to it!
Some good reasons to be downvoted: badly formed assertions, ambiguous statements, being confidently wrong, being belligerent, derailing the topic.
In this case your statement was a vague disagreement with the intuitively correct answer, with no supporting argument provided. That is just bad writing, and I would downvote it for so being. It does not imply that I think you have no real idea (something I have no grounds to take a position on), just that the specific comment did not communicate your idea effectively. You should value such feedback, as it will help you improve your writing skills.
I reject out of hand any proposed rule of propriety that stipulates people must pretend to be naive supplicants.
When people ask me for an explanation of a downvote I most certainly do not take it for granted that by so doing they are entering into my moral reality and willing to accept my interpretation of what is right and what is a ‘mistake’. If I choose to explain reasons for a downvote I also don’t expect them to henceforth conform to my will. They can choose to keep doing whatever annoying thing they were doing (there are plenty more downvotes where that one came from.)
There is more than one reason to ask for clarification for a downvote—even “I’m just kinda curious” is a valid reason. Sometimes votes just seem bizarre and not even Machiavellian reasoning helps explain the pattern. I don’t feel obliged to answer any such request but I do so if convenient. I certainly never begrudge others the opportunity to ask if they do so politely.
Not what Kompo was saying.
I never said anything about pretending anything. I said if you request clarification, and don’t actually need clarification, you’re just making noise. Ideally you will be downvoted for that.
Sure, but I still maintain that a request for clarification itself can be annoying and hence downvote worthy. I don’t think any comment is inherently protected or should be exempt from being downvoted.
I agree with you on these points. I downvote requests for clarification sometimes—particularly if, say, the reason for the downvote is transparent, or the flow conveys an attitude that jars with me. I certainly agree that people should be free to downvote whenever they please and for whatever reason they please—again, for me to presume otherwise would be a demand for naivety or dishonesty (typically both).
Feedback is valuable when it is informative, as the exchange with WrongBot turned out to be in the end.
Unfortunately, a downvote by itself will not typically be that informative. Sometimes it’s obvious why a comment was downvoted (in which case it doesn’t provide much information anyway); but in this case, I had no real idea, and it seemed plausible that it resulted from a misinterpretation of the comment. (As turned out to be the case.)
(Also, the slight to one’s social status represented by a downvote isn’t “imagined”; it’s tangible and numerical.)
The comment was a quick answer to a yes-no question posed to me by Eliezer. Would you have been more or less inclined to downvote it if I had written only “Yes”?
Providing information isn’t the point of downvoting; it is a means of expressing social disapproval. (Perhaps that is information in a sense, but it is more complicated than just that.) The fact that they are being contrary to a social norm may or may not be obvious to the commenter; if not, then it is new information. Regardless, the downvote is a signal to reexamine the comment and think about why it was not approved by over 50% of the readers who felt strongly enough to vote on it.
Tangibility and significance are completely different matters. A penny might appear more solid than a dollar, but is far less worthy of consideration. You could ignore a minus-1 comment quite safely without people deciding (even momentarily) that you are a loser or some such. That you chose not to makes it look like you have an inflated view of how significant it is.
Probably less, as I would then have simply felt like requesting clarification, or perhaps even thinking of a reason on my own. A bad argument (or one that sounds bad) is worse than no argument.
You can live without sex, you can’t live without food. So the latter two are “desiderata” in rather different senses.
Status is an inherently zero-sum good, so while it is rational for any given individual to pursue it, we’d all be better off, ceteris paribus, if nobody pursued it. Everyone has a small incentive for other people not to pursue status, just as they have an incentive for them not to be violent or to smell funny; hence the existence of popular anti-status-seeking norms.
I don’t think I agree, at least in the present context. I think of status as being like money—or, in fact, the karma score on LW, since that is effectively what we’re talking about here anyway. It controls the granting of important privileges, such as what we might call “being listened to”—having folks read your words carefully, interpret them charitably, and perhaps even act on them or otherwise be influenced by them.
(To tie this to the larger context, this is why I started paying attention to SIAI: because Eliezer had won “status” in my mind.)
I agree with this.
While status may appear zero-sum amongst those who are competing for influence in a community, for the community as a whole, status is positive-sum when it accurately reflects the value of people to the community.
The brain has many different components with specializations, but the cortex, the largest portion and the dominant one in humans, is not really specialized at all in the way you outline.
The cortex is no more specialized than your hard drive.
It’s composed of a single repeating structure and an associated learning algorithm that appears to be universal. The functional specializations that appear in the adult brain arise due to topological wiring proximity to the relevant sensory and motor connections. The V1 region is not hard-wired to perform mathematically optimal Gabor-like edge filters. It automatically evolves into this configuration because it is the optimal configuration for modelling the input data at that layer, and it evolves thus solely based on exposure to said input data from retinal ganglion cells.
You can think of cortical tissue as a biological ‘neuronium’. It has a semi-magical emergent capacity to self-organize into an appropriate set of feature detectors based on what it’s wired to (more on this).
All that being said, the inter-regional wiring itself is currently less understood and is probably more genetically predetermined.
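A minimal sketch of the self-organization claim, using Oja’s rule, a standard textbook model of unsupervised feature learning rather than a claim about actual V1 circuitry: a single unit exposed to correlated input drifts toward the dominant feature direction of that input with no task-specific wiring.

```python
import random

def sample():
    # Toy 2-D input stream whose dominant structure lies along (1, 1).
    s = random.gauss(0, 1.0)   # strong shared component
    n = random.gauss(0, 0.2)   # weak independent component
    return (s + n, s - n)

w = [random.uniform(-0.1, 0.1) for _ in range(2)]
lr = 0.01
for _ in range(20_000):
    x = sample()
    y = w[0] * x[0] + w[1] * x[1]
    # Oja's rule: Hebbian growth (y * x_i) with a normalizing decay (y^2 * w_i).
    w = [wi + lr * (y * xi - y * y * wi) for wi, xi in zip(w, x)]

print("learned feature direction:", [round(wi, 2) for wi in w])
# Converges (up to sign) to roughly (0.71, 0.71): the unit discovered the
# principal component of its input purely from exposure to the data.
```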
There may be other approaches that are significantly simpler (that we haven’t yet found, obviously). Assuming AGI happens, it will have been a race between the specific (type of) path you imagine, and every other alternative you didn’t think of. In other words, you think you have an upper bound on how much time/expense it will take.
I’m not a member of SIAI but my reason for thinking that AGI is not just going to be like lots of narrow bits of AI stuck together is that I can see interesting systems that haven’t been fully explored (due to difficulty of exploration). These types of systems might solve some of the open problems not addressed by narrow AI.
These are problems such as:
How can a system become good at so many different things when it starts off the same? Especially puzzling is how people build complex (unconscious) machinery for dealing with problems that we are not adapted for, like chess.
How can a system look after and upgrade itself without getting completely pwned by malware? (We do get partially pwned by hostile memes, but that is not a complete takeover of the same type as getting rooted.)
Now I also doubt that these systems will develop quickly when people get around to investigating them. And they will have elements of traditional narrow AI in them as well, but they will be changeable, adaptable parts of the system, not fixed sub-components. What I think needs exploring is primarily changes in software life-cycles rather than a change in the nature of the software itself.
Learning is the capacity to build complex unconscious machinery for dealing with novel problems. That’s the whole point of AGI.
And Learning is equivalent to absorbing memes. The two are one and the same.
I don’t agree. Meme absorption is just one element of learning.
To learn how to play darts well you absorb a couple of dozen memes and then spend hours upon hours rewiring your brain to implement a complex coordination process.
To learn how to behave appropriately in a given culture you learn a huge swath of existing memes and continue to learn a stream of new ones, but you also dedicate huge amounts of background processing to reconfiguring the weightings of existing memes relative to each other and to external inputs. You also learn all sorts of implicit information about how memes work for you specifically (due to, for example, physical characteristics); much of this information will never be represented in meme form.
Fine, if you take memes to be just symbolic-level transferable knowledge (which, thinking it over, I agree with), then at a more detailed level learning involves several sub-processes, one of which is the rapid transfer of memes into short-term memory.