Many people on Less Wrong see Judea Pearl’s work as the right approach to formalizing causality and getting the right conditional probabilities. But if it were simple to formally specify the right causal model of the world given sensory information, we’d probably have AGI already.
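To make concrete what formalizing causality and getting ‘the right conditional probabilities’ involves, here is a minimal Pearl-style sketch in Python: three binary variables with made-up probability numbers, comparing ordinary conditioning with an intervention (the do-operator). Everything in it, the variable names and the numbers alike, is an illustrative assumption, not something claimed in this thread.

```python
# Toy causal model: Rain -> Sprinkler, Rain -> WetGrass, Sprinkler -> WetGrass.
# All probability numbers are made up purely for illustration.
from itertools import product

P_RAIN = 0.3                       # P(Rain = 1)
P_SPRINKLER = {0: 0.6, 1: 0.1}     # P(Sprinkler = 1 | Rain)
P_WET = {                          # P(WetGrass = 1 | Rain, Sprinkler)
    (0, 0): 0.05, (0, 1): 0.9,
    (1, 0): 0.8,  (1, 1): 0.99,
}

def joint(do_sprinkler=None):
    """Enumerate the joint distribution, optionally intervening on Sprinkler.

    do(Sprinkler = s) cuts the Rain -> Sprinkler edge: the sprinkler no longer
    depends on rain, it is simply forced to the value s.
    """
    table = {}
    for r, s, w in product([0, 1], repeat=3):
        pr = P_RAIN if r else 1 - P_RAIN
        if do_sprinkler is None:
            ps = P_SPRINKLER[r] if s else 1 - P_SPRINKLER[r]
        else:
            ps = 1.0 if s == do_sprinkler else 0.0
        pw = P_WET[(r, s)] if w else 1 - P_WET[(r, s)]
        table[(r, s, w)] = pr * ps * pw
    return table

def prob(table, pred):
    return sum(p for outcome, p in table.items() if pred(outcome))

obs = joint()
# Conditioning: seeing the sprinkler on is evidence that it probably isn't raining.
p_wet_given_s = (prob(obs, lambda o: o[1] == 1 and o[2] == 1)
                 / prob(obs, lambda o: o[1] == 1))
# Intervening: forcing the sprinkler on says nothing about the rain.
p_wet_do_s = prob(joint(do_sprinkler=1), lambda o: o[2] == 1)

print(f"P(wet | Sprinkler=1)     = {p_wet_given_s:.3f}")
print(f"P(wet | do(Sprinkler=1)) = {p_wet_do_s:.3f}")
```

The two numbers come out different, because observing the sprinkler is evidence about rain while forcing it on is not; distinguishing those two quantities is the core of what the formalism buys you.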
Sitting and figuring out how exactly causality works is the kind of thing we want the AGI to be able to do on its own. We don’t seem to be born with any advanced expectations about the world, such as a notion of causality; we even have to learn to see; causality took a while to invent, and a great many people have superstitions that are poorly compatible with causality; yet even my cats seem, over their lives, to have learnt some understanding of causality.
edit: I mean, I’d say we definitely invent causality. It’s implausible that we evolved an implementation of the equivalent of Pearl’s work, and even if we did, that is once again a case of an intelligent-ish process (evolution) figuring it out.

You have to start somewhere, though.

We seem to start with very little.
I’d say that our intuitive grasp of causality relates to theoretical work like Pearl’s in the same way that our subconscious heuristics of vision relate to the theory of computer vision.
That is to say, we have evolved heuristics that we’re not usually conscious of, which function very well and cheaply in the circumstances our ancestors frequently encountered, but we currently don’t understand the details well enough to program something of equivalent power.
Except a lot of our vision heuristics seem to be produced not by evolution but by processes that happen in the first years of life. Ditto for virtually all aspects of brain organization, given the ratio between the brain’s complexity and the DNA’s complexity.
Steven Pinker covers this topic well. I highly recommend How The Mind Works; The Blank Slate may be more relevant, but I haven’t read it yet.
Essentially, the human brain isn’t a general-purpose learner, but one with strong heuristics (like natural grammar and all kinds of particular visual pattern-seeking) that are meta enough to encompass a wide variety of human languages and visual surroundings. The development of the human brain responds to the environment not because it’s a ghost of perfect emptiness but because it has a very particular architecture already, which is adapted to the range of possible experience. The visual cortex has special structure before the eyes ever open.
Honestly, people who have never written a vision algorithm should stop trying to think too hard about the subject; they aren’t getting anywhere sane with intuitions that are many orders of magnitude off. That goes for much of the evolution-of-cognition talk out there. We know evolution can produce ‘complex’ stuff; we have that hammer and we use it on every nail, even when the nail is in fact a giant screw far bigger than the hammer itself, one that isn’t going to move at all when hit: firstly it is too big, and secondly it needs an entirely different motion. Yet someone who can’t see the size of the screw will insist it would. Some adjustment of near vs far connectivity, perhaps even somewhat specific layering, is all evolution is going to give you in the relevant timeframe for brains bigger than a peanut. That’s just the way things are, folks.
In people with early brain damage, other parts of the brain take over and perform nearly as well, suggesting that the networks processing visual input have only minor optimizations for the visual task compared to the rest: precisely the near-vs-far connectivity tweaks, more or fewer neurons per cortical column, the kind of thing that makes a region somewhat more efficient at the task.
edit: here’s a taster: mammals never evolved an extra pair of limbs, extra eyes, or anything of that sort. But in the world of evolutionary psychologists and evolutionary ‘cognitive scientists’, mammals should be evolving an extra eye or an extra pair of limbs, along with much more complex multi-step adaptations, every couple of million years. This is outright ridiculous. Just look at your legs; look how much slower the fastest human runner is than any comparable animal. You’d need to keep running for another ten million years before you were any good at running. And in that time, in which the monkey’s body can barely re-adapt to locomotion on a flat surface, the monkey’s brain evolves complex algorithms for the grammar of human languages? Hunter-gatherer-specific functionality? You’ve got to be kidding me.
All of this evolutionary explaining of complex phenomena via vague handwaving about how beneficial something would have been in the ancestral environment will be regarded as utter and complete pseudoscience within 20 to 50 years. It’s not enough to show that something would have been beneficial. An extra eye in the back of the head, too, would have been beneficial for a great many animals. They get a free pass to use evolution as magic because brains don’t fossilize. The things that do fossilize, however, provide a good sample of how many generations it takes evolution to make something.
We’ve drifted way off topic. Brain plasticity is a good point, but it’s not the only piece of evidence available. I promise you that if you check out How the Mind Works, you’ll find unambiguous evidence that the human brain is not a general-purpose learner, but begins with plenty of structure.
If you doubt the existence of universal grammar, you should try The Language Instinct as well.
You can have the last word, but I’m tapping out on this particular topic.
If you doubt the existence of universal grammar, you should try The Language Instinct as well.
While some linguistic universals definitely exist, and a sufficiently weak version of the language acquisition device idea is pretty much obvious (‘the human brain has the ability to learn human language’), I think Chomsky’s ideas are way too strong. See e.g. Scholz, Barbara C. and Geoffrey K. Pullum (2006), ‘Irrational nativist exuberance’.
Re: structure, yes, it is made of cortical columns, and yes, there’s some global wiring; nobody has been doubting that.
I created a new topic for that. The issue with attributing brain functionality to evolution is the immense difficulty of coding any specific wiring in the DNA, especially in mammals. Insects can do it: they go through several generations in a year, and their genome controls the brain down to individual neurons. Mammals aren’t structured like that, and they live much too long.
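As a rough back-of-the-envelope version of that difficulty (the figures below are common ballpark estimates I’m assuming for illustration, not numbers from this discussion):

```python
# Order-of-magnitude arithmetic behind "the DNA can't code specific wiring".
# The figures are standard ballpark estimates, used here purely for illustration.
import math

genome_base_pairs = 3.2e9      # human genome size
bits_per_base_pair = 2         # four possible bases = 2 bits each
genome_bits = genome_base_pairs * bits_per_base_pair

neurons = 8.6e10               # roughly 86 billion neurons
synapses = 1.5e14              # roughly 150 trillion synapses (estimates vary widely)

# Naming one specific target neuron out of ~8.6e10 already takes ~37 bits.
bits_per_target = math.log2(neurons)

print(f"genome capacity:             ~{genome_bits:.1e} bits")
print(f"genome bits per synapse:     ~{genome_bits / synapses:.1e}")
print(f"bits to address one neuron:  ~{bits_per_target:.0f}")
```

Even if the entire genome did nothing but describe the brain, that is well under a thousandth of a bit per synapse, so specific point-to-point wiring has to come from developmental rules plus learning rather than from the genome.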
Except a lot of our vision heuristics seem to be produced not by evolution but by processes that happen in the first years of life.
It’s hard to draw a clear line between the two. Certainly much of what evolution produced in this area is the ability to reliably develop suitable heuristics given the expected stimuli, so I more or less agree with you here. If we develop without sight, this area of the brain gets used for entirely different purposes.
Ditto for virtually all aspects of brain organization, given the ratio between the brain’s complexity and the DNA’s complexity.
Here I must disagree. We are stuck with the high-level organization, and even at somewhat lower levels the other areas of the brain aren’t nearly as versatile as the cortex that handles the (high-level aspects of) vision.
I’m not expecting the specialization to go beyond adjusting e.g. the number of local vs long-range connections, basic motion detection dating back a couple of hundred million years of evolution as a fish, etc. People are expecting miracles from evolution, along the lines of hard-coding specific, highly nontrivial algorithms. Mostly due to very screwy intuitions regarding how complex anything resembling an algorithm is (compared to rest of the body). Yes, other parts of the brain are worse at performing this function; no, that isn’t because actual algorithms are hard-wired there. Insofar as other parts of the brain can perform even remotely close in function, sight is learnable. One particular thing about the from-scratch AI crowd is that it doesn’t like to give the brain much credit where it’s due.
Mostly due to very screwy intuitions regarding how complex anything resembling an algorithm is (compared to rest of the body).
Most algorithms are far, far less complicated than, say, the behaviors that constitute the immune system.
From what I can tell there are rather a lot of behavioral algorithms that are hard-wired at the lower levels, particularly when it comes to emotional responses and desires. The brain then learns to specialize them to the circumstances it finds itself in. More particularly, any set of (common) behaviors that gives long-term benefits rather than solving immediate problems is almost certainly hard-wired. The brain doesn’t even know what is being optimized, much less how this particular algorithm is supposed to help!
The immune system is way old. Why is it only the complex algorithms we don’t quite understand that we think evolve quickly in mammals, and not the obvious things like retinal pigments, number of eyes, number of limbs, etc.? Why would we ‘evolve support for language’ in the time during which we barely re-adapt our legs to walking on a flat surface?
The emotional responses and desires are, to some extent, evolved, but the complex mechanisms for calculating which objects to desire have to be created from scratch.
The brain does 100 steps per second, 8,640,000 steps per day, 3,153,600,000 steps per year. Evolution does one step per generation. There are very tight bounds on what functionality could evolve in a given timeframe, and there is a lot that can be generated in a very short time by the brain.
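Spelling that comparison out (the brain-step rate is the one from the comment above; the generation time and divergence date are rough placeholder figures I’m assuming, not claims from the thread):

```python
# Brain "steps" versus evolutionary steps, using the 100 steps/second figure
# from the comment and placeholder values for generation time and lineage age.
steps_per_second = 100
seconds_per_year = 365 * 24 * 3600               # 31,536,000

brain_steps_per_year = steps_per_second * seconds_per_year   # 3,153,600,000
brain_steps_per_lifetime = brain_steps_per_year * 70         # ~70-year life

years_since_human_chimp_split = 6e6              # ~6 million years (placeholder)
generation_time_years = 20                       # rough placeholder
generations = years_since_human_chimp_split / generation_time_years

print(f"brain steps per lifetime:  {brain_steps_per_lifetime:,.0f}")
print(f"generations since split:   {generations:,.0f}")
print(f"ratio:                     {brain_steps_per_lifetime / generations:,.0f}")
```

On those assumptions the lineage has had on the order of 300,000 selection steps since the split, while a single brain takes that many of its own steps in under an hour.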
Yes. Very old, incredibly powerful and amazingly complex. The complexity of that feature of humans (and diverse relatives) makes the appeal “regarding how complex anything resembling an algorithm is (compared to rest of the body)” incredibly weak. Most algorithms used by the brain, learned or otherwise, are simpler than what the rest of the body does.
The immune system doesn’t have some image-recognition algorithm that looks at projections of proteins and recognizes their shapes. It uses molecular binding. And it evolved over many billions of generations, in much shorter-lived animals, re-using to a huge extent features that evolved back in single-celled organisms. As far as algorithms go, it consists of just a huge number of if clauses, chained very few levels deep.
3D object recognition from the 2D images coming from the eyes, for comparison, is an incredibly difficult task.
edit: on the topic of the immune ‘algorithm’ that makes you acquire immunity to foreign chemicals, but not your own:

http://en.wikipedia.org/wiki/Somatic_hypermutation
Randomly edit the proteins until they stick. The random editing happens by utilising somewhat broken replication machinery. Part of your body evolves an immune response when you catch flu or a cold. The products of that evolution are not even passed down; that’s how amazingly complex and well-evolved it is (not).
The immune system doesn’t have some image-recognition algorithm that looks at projections of proteins and recognizes their shapes. It uses molecular binding. And it evolved over many billions of generations, in much shorter-lived animals, re-using to a huge extent features that evolved back in single-celled organisms. As far as algorithms go, it consists of just a huge number of if clauses, chained very few levels deep.
This matches my understanding.
3D object recognition from the 2D images coming from the eyes, for comparison, is an incredibly difficult task.
And here I no longer agree, at least when it comes to the assumption that the aforementioned task is not incredibly difficult.
I added, in an edit, a reference on how the immune system basically operates. You have a population of B-cells which evolves toward eliminating foreign substances: good old evolution, re-used to evolve part of the B-cell genome inside your body. The results seem very impressive (recognition of substances), but all the heavy lifting is done by very simple and very stupid methods. If anything, our proneness to seasonal colds and flu is a great demonstration of the extreme stupidity of the immune system: viruses only need to modify some entirely non-functional proteins to have to be recognized afresh. That’s because there is no pattern recognition going on whatsoever, only the incredibly stupid process of evolution of B-cells.
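A deliberately crude sketch of the kind of process being described: a population of random ‘receptors’ is repeatedly mutated, and whatever binds best is kept. There is no shape analysis or pattern recognition anywhere, just mutation plus selection. All the parameters and the string-matching stand-in for binding are invented for illustration; this is a toy, not a model of real B-cell biology.

```python
# Caricature of affinity maturation: mutate-and-select over random "receptors".
# String matching stands in for molecular binding; nothing here analyses shapes.
# Toy code with invented parameters, not a model of real immunology.
import random

ALPHABET = "ACDEFGHIKLMNPQRSTVWY"     # the 20 amino-acid letters

def affinity(receptor: str, antigen: str) -> int:
    """Stand-in for binding strength: count matching positions."""
    return sum(r == a for r, a in zip(receptor, antigen))

def mutate(receptor: str, rate: float = 0.1) -> str:
    """Hypermutation caricature: randomly flip letters."""
    return "".join(random.choice(ALPHABET) if random.random() < rate else c
                   for c in receptor)

def mature(antigen: str, pop_size: int = 200, rounds: int = 40) -> str:
    population = ["".join(random.choice(ALPHABET) for _ in antigen)
                  for _ in range(pop_size)]
    for _ in range(rounds):
        # Keep the best-binding quarter, refill the rest with mutants of survivors.
        population.sort(key=lambda r: affinity(r, antigen), reverse=True)
        survivors = population[: pop_size // 4]
        population = survivors + [mutate(random.choice(survivors))
                                  for _ in range(pop_size - len(survivors))]
    return max(population, key=lambda r: affinity(r, antigen))

random.seed(0)
antigen = "MKVLWAALLV"                # arbitrary target string
best = mature(antigen)
print(best, affinity(best, antigen), "of", len(antigen), "positions matched")
```

Blind mutation plus a keep-the-best rule is enough to end up with something that ‘recognizes’ the target, which is roughly the sense in which the real system looks impressive without doing anything clever.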
If I were trying to claim that immune systems were complex in a way similar in nature to learned cortical algorithms, then I would be thoroughly dissuaded by now.
The immune system is actually a rather good example of what sort of mechanisms you can expect to evolve over many billions of generations, and of the way in which they can be called ‘complex’.
My original point was that much of evolutionary cognitive science explains far more complex mechanisms as evolving in a thousandth of the generation count of the immune system (mechanisms with a lot of hidden complexity; for a particularly outrageous example, consider preference for specific details of mate body shape, a task with immense hidden complexity), instead of their being generated in some way by the operation of the brain. And this in a context where other brain areas are only marginally less effective at the tasks, which suggests not hard-wiring of algorithms of any kind, but minor tweaks to the properties of the network that slightly improve its efficiency after the network learns the specific task.
We probably don’t disagree too much on the core issue here, by the way. Compared to an arbitrary but somewhat meaningful reference class, I tend to be far more accepting of the ‘blank slate’ capabilities of the brain. The way it just learns how to build models of reality from visual input is amazing. It’s particularly fascinating to see areas of the brain that are consistent across (nearly) all people turn out not to be hardwired after all, except inasmuch as they happen to be always connected to the same stuff and usually develop in the same way!
So are eyes.