Emergence still feels like a “nonapple”. You are right that mass is not an emergent property of quarks, but still, pretty much everything else in this universe is. If I understand it correctly, even “the distance between two specific quarks” is already an emergent property of quarks, because neither of those two quarks contains their distance in itself. So if I say e.g. “consciousness is an emergent property of quarks”, I have pretty much said “consciousness is not mass”, which is technically true, but still mostly useless. Most of us already expected that.
Similarly, “consciousness is an emergent property of neurons” is only a surprise to those people who expected individual neurons to be conscious. I am sure such people exist. But for the rest of us, what new information does it convey?
Because the trick is that even if you don’t believe that individual neurons are conscious, hearing “consciousness is an emergent property of neurons” still feels like new information. Except, there is nothing more there, only the aura of having an explanation.
The ability to express basic nonsurprising facts is useful.
When discussing whether or not to allow abortion of a fetus, it matters whether you believe that real human consciousness needs a certain number of neurons to emerge.
Plenty of people believe in some form of soul as a unit that creates consciousness. Saying that consciousness is emergent means that you disagree.
According to Scott’s latest post about EA Global, there are people at the Foundational Research Institute who do ask themselves whether particles can be conscious.
There are plenty of cases where people try to find reductionist ways of thinking about a domain. Calories in, calories out is a common paradigm that drives a lot of thinking about diet. If you instead have a paradigm that centers around a cybernetic system with an emergent set point that’s managed by a complex net of neurons, that paradigm gives a different perspective on what to do about weight loss.
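To make the contrast concrete, here is a toy sketch in Python (the functions, numbers, and the rough 7700 kcal/kg figure are all illustrative placeholders for the two paradigms, not actual physiology):

```python
def naive_balance(weight, intake, burn):
    """'Calories in, calories out': weight change is just the raw surplus."""
    return weight + (intake - burn) / 7700.0  # ~7700 kcal per kg, rough figure


def set_point_model(weight, intake, set_point=70.0, gain=0.05):
    """Cybernetic view: expenditure is regulated to defend a set point."""
    base_burn = 2000.0
    # Feedback: the body burns more when above the set point, less when below.
    burn = base_burn * (1 + gain * (weight - set_point))
    return weight + (intake - burn) / 7700.0


w_naive = w_cyber = 70.0
for day in range(365):
    w_naive = naive_balance(w_naive, intake=2200.0, burn=2000.0)
    w_cyber = set_point_model(w_cyber, intake=2200.0)

# The naive model gains weight indefinitely on a fixed surplus;
# the set-point model settles near a new equilibrium instead.
print(f"naive: {w_naive:.1f} kg, set-point: {w_cyber:.1f} kg")
```

On a fixed 200 kcal/day surplus the first model drifts upward forever, while the feedback model converges to about 72 kg, which is why the two paradigms suggest different interventions.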
Maybe this is just me, but it seems to me like there is a “motte and bailey” game being played with “emergence”.
The “motte” is the definition provided here by the defenders of “emergence”. An emergent property is any property exhibited by a system composed of pieces, where no individual piece has that property alone. Taking this literally, even “distance between two oranges” is an emergent property of those two oranges. I just somehow do not remember anyone using that word in this sense.
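Taken at face value, the “motte” definition really does cover the oranges. A minimal Python sketch (class and function names are mine, purely for illustration):

```python
import math
from dataclasses import dataclass


@dataclass
class Orange:
    x: float
    y: float
    # Nothing stored here refers to any other orange, let alone a distance.


def distance(a: Orange, b: Orange) -> float:
    """A property defined only on the pair, exhibited by neither orange alone."""
    return math.hypot(a.x - b.x, a.y - b.y)


print(distance(Orange(0, 0), Orange(3, 4)))  # 5.0
```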
The “bailey” of “emergence” is that it is a mysterious process, which will somehow inevitably happen if you put a lot of pieces together and let them interact randomly. It is somehow important for those pieces to not be arranged in any simple/regular way that would allow us to fully understand their interaction, otherwise the expected effect will not happen. But as long as you close your eyes and arrange those pieces randomly, it is simply a question of having enough pieces in the system for the property to emerge.
For example, the “motte” of “consciousness is an emergent property of neurons” is saying that one neuron is not conscious, but there are some systems of neurons (i.e. brains) which are conscious.
The “bailey” of “consciousness is an emergent property of neurons” is that if you simulate a sufficiently large number of randomly connected neurons on your computer, the system is fated to evolve consciousness. If consciousness does not appear, it must be because there are not enough neurons, or because the simulation is not fast enough.
In other words, if we consider the space of all possible systems composed of 10^11 neurons, the “motte” version merely says that at least one such system is conscious, while the “bailey” version would predict that actually most of them are conscious, because when you have sufficient complexity, the emergent behavior will appear.
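Written with quantifiers (my own rough formalization, where $S$ is the space of such systems and $C(s)$ abbreviates “system $s$ is conscious”):

```latex
\text{Motte:}\quad \exists\, s \in S \;:\; C(s)
\qquad\qquad
\text{Bailey:}\quad \Pr_{s \sim \mathrm{Uniform}(S)}\!\left[ C(s) \right] \approx 1
```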
The relevance for LW is that for a believer in “emergence”, the problem of creating artificial intelligence (although not necessarily a friendly one) is simply a question of having enough computing power to simulate a sufficiently large number of neurons.
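As a deliberately caricatured sketch of that belief (hypothetical code; nothing in it checks for or produces consciousness): wire up N neurons at random and just keep stepping the dynamics.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 1000  # the "bailey" says: just make this big enough

# Random connectivity, no designed arrangement whatsoever.
weights = rng.normal(0.0, 1.0 / np.sqrt(N), size=(N, N))
state = rng.normal(size=N)

for step in range(100):
    state = np.tanh(weights @ state)  # one crude update of every "neuron"

# All this yields is a vector of activations; the motte-holder's point is
# that nothing about scale or randomness alone turns it into a mind.
print(state[:5])
```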
There are positions between those. Medium-strength emergentism would have it that some systems are conscious, that consciousness is not a property of their parts, and that it is not reductively understandable in terms of the parts and their interactions, but that it is by no means inevitable.
Reduction has its problems too. Many writings on LW confuse the claim that things are understandable in terms of their parts with the claim that they are merely made of parts.

E.g.:
(1) The explanatory power of a model is a function of its ingredients. (2) Reductionism includes all the ingredients that actually exist in the real world. Therefore (3) Emergentists must be treating the “emergent properties” as extra ingredients, thereby confusing the “map” with the “territory”. So Reductionism is defined by EY and others as not treating emergent properties as extra ingredients (in effect).
I am questioning the implicit premise that some kinds of emergent things are “reductively understandable in terms of the parts and their interactions.” I think humans have a basic problem with getting any grasp at all on the idea of things being made of other things, and therefore you have arguments like those of Parmenides, Zeno, etc., which are basically a mirror of modern arguments about reductionism. I would illustrate this with Viliam’s example of the distance between two oranges. I do not see how the oranges explain the fact that they have a distance between them, at all. Consciousness may seem even less intelligible, but this is a difference of degree, not kind.
I am questioning the implicit premise that some kinds of emergent things are “reductively understandable in terms of the parts and their interactions.”
It’s not so much some emergent things, for a uniform definition of “emergent”, as all things that come under a variant definition of “emergent”.
I think humans have a basic problem with getting any grasp at all on the idea of things being made of other things, and therefore you have arguments like those of Parmenides, Zeno, etc., which are basically a mirror of modern arguments about reductionism
Not really, they are about what we would now call mereology. But as I noted, the two tend to get conflated here.
I would illustrate this with Viliam’s example of the distance between two oranges. I do not see how the oranges explain the fact that they have a distance between them, at all.
Reductionism is about preserving and operating within a physicalist world view, and physicalism is comfortable with spatial relations and causal interactions as being basic elements of reality. Careful reductionists say “reducible to its parts, their structure, and their interactions”.
“physicalism is comfortable with spatial relations and causal interactions as being basic elements of reality”
I am suggesting this is a psychological comfort, and there is actually no more reason to be comfortable with those things, than with consciousness or any other properties that combinations have that parts do not have.
I don’t understand what is supposed to be so bad about “mysterious” things. Take the distance between two oranges: if you look at a single orange, it doesn’t tell you anything about how far it should be from another. And special relativity implies that there is no difference between a situation where one orange is moving and the other isn’t, and the situation where the movements are reversed. So the distance between two oranges can be changing, even though apparently neither one is changing more than the other, or at all, when you just sit and look at one of the oranges. So the distance between two oranges seems pretty mysterious to me.
Also, I’m not sure anyone actually says that emergent things “inevitably happen” due to a large quantity and randomness.
Like many cases of Motte-and-Bailey, the Motte is mainly held by people who dislike the Bailey. I suspect that an average scientist in a relevant field somewhere at or below neurophysics in the generality hierarchy (e.g. chemist, physicist, but not sociologist), would consider that bailey to be… non-likely at best, while holding the motte very firmly.
The relevance for LW is that for a believer in “emergence”, the problem of creating artificial intelligence (although not necessarily a friendly one) is simply a question of having enough computing power to simulate a sufficiently large number of neurons.
I don’t think in practice that has much to do with whether or not someone uses the word emergence. As far as I understand, EY thinks that if you simulate enough neurons sufficiently well you get something that’s conscious.

I would really want a cite on that claim. It doesn’t sound right.

Can you be more specific about what you are skeptical about?
I understand EY thinks that if you simulate enough neurons sufficiently well you get something that’s conscious.
Without specifying the arrangements of those neurons? Of course it should if you copy the arrangement of neurons out of a real person, say, but that doesn’t sound like what you meant.
I think you’re right. I also think saying ‘x is emergent’ may sound more magical than it is (if I am understanding emergence right), depending on your understanding of it. It doesn’t mean that the higher-scale phenomenon isn’t /made up of/ lower-level phenomena, but that it isn’t (like a homunculus) itself present as anything smaller than that level. A robot hopping-kangaroo toy needs both a body and legs. The hopping behavior isn’t contained in the body; that just rotates a joint. The hopping behavior isn’t contained in the legs; those just have a joint that can connect to the body joint. It’s only when the two bits are plugged into each other that the ‘hopping’ behavior ‘emerges’ from the torso-legs system. It’s not coming from any essential ‘hoppiness’ in the legs or the torso.

I think it can seem a bit magical because it can sound like the behavior just ‘appears’ at a certain point, but it ‘appears’ no more magically than a picture of a tiger ‘appears’ from a bunch of pixels. Only we’re talking about names for systems of functions (hopping is made of the leg and torso behaviors and their interaction with the ground and stuff) more than names for systems of objects (the tiger picture is made up of lines and corners and stuff, which are made up of pixels and stuff). In some sense ‘tigers’ and ‘hopping’ don’t really exist; there are just pixels (or atoms or whatever) and particle interactions. But we have names for systems of objects, and systems of functions, because those names are useful.
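A minimal sketch of that toy in Python (class and method names invented for the example): neither part has a hop; only the assembled system does.

```python
class Torso:
    def rotate_joint(self, angle):
        # All the torso can do on its own: rotate its joint.
        return f"joint rotated {angle} degrees"


class Legs:
    def push_off(self, force):
        # All the legs can do on their own: push against the ground.
        return f"pushed off with force {force}"


class KangarooToy:
    def __init__(self, torso, legs):
        self.torso = torso
        self.legs = legs

    def hop(self):
        # "Hopping" exists only as this pattern of interaction between parts.
        return [self.torso.rotate_joint(30), self.legs.push_off(5)]


toy = KangarooToy(Torso(), Legs())
print(toy.hop())  # neither Torso nor Legs has a hop() method
```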