The reason he does not like the term is that, as pointed out before, “emergence” is not an explanation of anything. However, it is an observational phenomenon: when you get a lot of simple things together, they combine in ways one could not foresee, and the resulting entities behave by rules that are not constructible from (but are reducible to) those of the simple constituents. When you combine a lot of simple molecules, you get a solid, a liquid or a gas with properties you generally cannot infer without observing them first. When you get a group of people together, they start interacting in a priori unpredictable ways as they form a group. Once you observe the group behavior, you can often reduce it to that of its constituents, but a useful description is generally not in terms of the constituents, but in terms of the collective. For example, in thermodynamics people use the gas laws and other macroscopic laws instead of Newton’s laws.
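To make the gas example concrete, here is a minimal kinetic-theory sketch in Python (the particle count, mass and temperature are arbitrary illustrative values, and the setup assumes non-interacting particles in a 1-D box): the individual particles follow trivial rules, yet the pressure they exert on a wall matches the macroscopic ideal gas law P = NkT/V, which is the useful level of description even though it remains reducible to the particles.

```python
# A toy sketch: N non-interacting particles bouncing in a 1-D box of length L.
# The per-particle rule is trivial (bounce off the walls), but the wall pressure
# that emerges from many of them matches the macroscopic law P = N k T / V,
# which is the level at which the collective is usefully described.
import numpy as np

rng = np.random.default_rng(0)

N = 100_000   # number of particles (arbitrary)
L = 1.0       # box length, so V = L in 1-D
m = 1.0       # particle mass
kT = 1.0      # temperature in energy units

# Maxwell-Boltzmann velocities in 1-D: Gaussian with variance kT/m.
v = rng.normal(0.0, np.sqrt(kT / m), size=N)

# A particle of speed |v| hits one wall every 2L/|v| seconds and transfers
# momentum 2m|v| per hit, so its time-averaged force on that wall is m v^2 / L.
pressure_from_particles = np.sum(m * v**2) / L

pressure_from_gas_law = N * kT / L   # P V = N k T with V = L

print(f"from particles: {pressure_from_particles:.1f}")
print(f"from gas law:   {pressure_from_gas_law:.1f}")
# The two agree to within statistical noise: the macroscopic law is reducible
# to, but far more convenient than, the per-particle bookkeeping.
```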
I am guessing that one reason the (friendly) machine intelligence problem is so hard is that intelligence is an emergent property: once you understand it, you can reduce it to interactions between neurons, but you cannot infer it from such interactions. And what’s more, it sits several layers above them, given that intelligence evolved long after simpler neural processes were established.
Thus what MIRI is doing is studying the laws of an emergent structure (AI) without being able to observe the structure first, since it does not exist yet. This is like trying to deduce the behavior of a beehive by studying single cells. Even if you come up with some new “emergent” laws, what actually emerges may well turn out to be more like a tree than a hive.
Emergence is a subset of the word Surprise. It’s not meaningless, but you can’t use it to usefully predict things you want to achieve with something, because it’s equivalent to saying, “If we put all these things together, maybe they’ll surprise us in an awesome way!”
Sort of. It is not surprising that incremental quantitative changes result in a qualitative change, but the exact nature of what emerges can indeed be quite a surprise. It is nevertheless useful to keep the general pattern in mind, so as not to be blindsided by the fact of emergence in each particular case (“But… but… they are all nice people, I didn’t expect them to turn into a mindless murderous mob!”). And to be ready to take action when the emergent entity hits the fan.
Or in simpler terms, AI is a crapshoot.
Agreed. Like with surprises, you can try to be robust to them or agile enough to adapt.
If something is an emergent property, you can bet on it not being the sum of its parts. That has some use.
Aiming at the tiny Friendly dot in AI-space is not one of them, though.
Surely what MIRI would ideally like to do is to find a way of making intelligence not “emergent”, so that it’s easier to make something intelligent that behaves predictably enough to be classified as Friendly.
I don’t believe that MIRI has been consciously paying attention to thwarting undesirable emergence, given that EY refuses to acknowledge it as a real phenomenon.
I fear we’re at cross purposes. I meant not “thwart emergent intelligence” but “find ways of making intelligence that don’t rely on it emerging mysteriously from incomprehensible complications”.
Sure, you cannot rely on spontaneous emergence for anything predictable, as neural-network attempts at AGI demonstrate. My point was that if you ignore the chance of something emerging, that something will emerge at the most inopportune moment. I see your original point, though; I’m just not sure it can succeed. My guess is that the best case is some kind of “controlled emergence”, where you at least set the parameter space of what might happen.