From my (dxu’s) perspective, it’s allowable for there to be “deep fundamental theories” such that, once you understand those theories well enough, you lose the ability to imagine coherent counterfactual worlds where the theories in question are false.
To use thermodynamics as an example: the first law of thermodynamics (conservation of energy) is actually a consequence of Noether’s theorem, which ties conserved quantities in physics to symmetries in physical laws. Before someone becomes aware of this, it’s perhaps possible for them to imagine a universe exactly like our own, except that energy is not conserved; once they understand the connection implied by Noether’s theorem, this becomes an incoherent notion: you cannot remove the conservation-of-energy property without changing deep aspects of the laws of physics.
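To spell out the mechanism rather than just the slogan, here is the standard textbook sketch (restricted to a single generalized coordinate q for simplicity; nothing in it is specific to this discussion): time-translation symmetry is exactly the statement that the Lagrangian has no explicit time dependence, and from that alone the conserved energy falls out.

```latex
% Time-translation symmetry => energy conservation (standard Noether-style sketch).
% Assume a Lagrangian L(q, \dot{q}) with no explicit dependence on t.
\begin{align*}
\frac{dL}{dt}
  &= \frac{\partial L}{\partial q}\,\dot{q} + \frac{\partial L}{\partial \dot{q}}\,\ddot{q}
     && \text{(no explicit $t$-dependence)} \\
  &= \frac{d}{dt}\!\left(\frac{\partial L}{\partial \dot{q}}\right)\dot{q}
     + \frac{\partial L}{\partial \dot{q}}\,\ddot{q}
     && \text{(Euler-Lagrange equation)} \\
  &= \frac{d}{dt}\!\left(\frac{\partial L}{\partial \dot{q}}\,\dot{q}\right),
\end{align*}
% so the quantity E = (\partial L/\partial \dot{q})\,\dot{q} - L satisfies dE/dt = 0:
% energy conservation is what time-translation symmetry looks like once the
% dynamics are written in Lagrangian form. Dropping conservation of energy
% therefore means changing the symmetry structure of the laws themselves.
```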
The second law of thermodynamics is similarly deep: it’s actually a consequence of there being a (low-entropy) boundary condition at the beginning of the universe, but no corresponding (low-entropy) boundary condition at any future state. This asymmetry in boundary conditions is what causes entropy to appear directionally increasing—and again, once someone becomes aware of this, it is no longer possible for them to imagine living in a universe which started out in a very low-entropy state, but where the second law of thermodynamics does not hold.
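To make the boundary-condition point slightly more concrete, here is a rough sketch in terms of Boltzmann’s coarse-grained entropy (again just the standard statistical-mechanics picture, not an argument original to this comment):

```latex
% Boltzmann's coarse-grained entropy of a macrostate M with \Omega(M) microstates:
S(M) = k_B \ln \Omega(M)
% The microdynamics are time-symmetric, so they supply no arrow by themselves:
% from a low-entropy macrostate at time t, the overwhelming majority of compatible
% microstates lie on trajectories with higher entropy at t + \Delta t AND at
% t - \Delta t. The arrow enters through an asymmetric boundary condition:
S(t_0) \ll S_{\max} \quad \text{(a very low-entropy initial state)},
\qquad \text{with no analogous constraint imposed at any later time.}
% Conditioning on that single constraint, typical trajectories satisfy
S(t_1) \lesssim S(t_2) \quad \text{for } t_0 \le t_1 \le t_2,
% which is the second law read as a statement about typical behavior given the
% boundary conditions, not an extra law layered on top of the microdynamics.
```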
In other words, thermodynamics as a “deep fundamental theory” is not merely [what you characterized as] a “powerful abstraction that is useful in a lot of domains”. Thermodynamics is a logically necessary consequence of existing, more primitive notions—and the fact that (historically) we arrived at our understanding of thermodynamics via a substantially longer route (involving heat engines and the like), without noticing this deep connection until much later on, does not change the fact that grasping said deep connection allows one to see “at a glance” why the laws of thermodynamics inevitably follow.
Of course, this doesn’t imply infinite certainty, but it does imply a level of certainty substantially higher than what would be assigned merely to a “powerful abstraction that is useful in a lot of domains”. So the relevant question would seem to be: given my above described epistemic state, how might one convince me that the case for thermodynamics is not as airtight as I currently think it is? I think there are essentially two angles of attack: (1) convince me that the arguments for thermodynamics being a logically necessary consequence of the laws of physics are somehow flawed, or (2) convince me that the laws of physics don’t have the properties I think they do.
Both of these are hard to do, however—and for good reason! And absent arguments along those lines, I don’t think I am (or should be) particularly moved by [what you characterized as] philosophy-of-science-style objections about “advance predictions”, “systematic biases”, and the like. I think there are certain theories for which the object-level case is strong enough that it more or less screens off meta-level objections; and I think this is right, and good.
Which is to say:
The mental move I’m doing for each of these examples is not imagining universes where addition/evolution/other deep theory is wrong, but imagining phenomena/problems where addition/evolution/other deep theory is not adapted. If you’re describing something that doesn’t commute, addition might be a deep theory, but it’s not useful for what you want. Similarly, you could argue that given how we’re building AIs and trying to build AGI, evolution is not the deep theory that you want to use. (emphasis mine)
I think you could argue this, yes—but the crucial point is that you have to actually argue it. You have to (1) highlight some aspect of the evolutionary paradigm, (2) point out [what appears to you to be] an important disanalogy between that aspect and [what you expect cognition to look like in] AGI, and then (3) argue that that disanalogy directly undercuts the reliability of the conclusions you would like to contest. In other words, you have to do things the “hard way”—no shortcuts.
...and the sense I got from Richard’s questions in the post (as well as the arguments you made in this subthread) is one that very much smells like a shortcut is being attempted. This is why I wrote, in my other comment, that
I don’t think I have a good sense of the implied objections contained within Richard’s model. That is to say: I don’t have a good handle on the way(s) in which Richard expects expected utility theory to fail, even conditioning on Eliezer being wrong about the theory being useful. I think this is important because—absent a strong model of expected utility theory’s likely failure modes—I don’t think questions of the form “but why hasn’t your theory made a lot of successful advance predictions yet?” move me very much on the object level.
I think I share Eliezer’s sense of not really knowing what Richard means by “deep fundamental theory” or “wide range of applications we hadn’t previously thought of”, and I think what would clarify this for me would have been for Richard to provide examples of “deep fundamental theories [with] a wide range of applications we hadn’t previously thought of”, accompanied by an explanation of why, if those applications hadn’t been present, that would have indicated something wrong with the theory.
My objection is mostly fleshed out in my other comment. I’d just flag here that “In other words, you have to do things the “hard way”—no shortcuts” assigns the burden of proof in a way which I think is not usually helpful. You shouldn’t believe my argument that I have a deep theory linking AGI and evolution unless I can explain some really compelling aspects of that theory. Because otherwise you’ll also believe in the deep theory linking AGI and capitalism, and the one linking AGI and symbolic logic, and the one linking intelligence and ethics, and the one linking recursive self-improvement with cultural evolution, etc etc etc.
Now, I’m happy to agree that all of the links I just mentioned are useful lenses which help you understand AGI. But for utility theory to do the type of work Eliezer tries to make it do, it can’t just be a useful lens—it has to be something much more fundamental. And that’s what I don’t think Eliezer’s established.
It also isn’t clear to me that Eliezer has established the strong inferences he draws from noticing this general pattern (“expected utility theory/consequentialism”). But when you asked Eliezer (in the original dialogue) to give examples of successful predictions, I was thinking “No, that’s not how these things work.” In the mistaken applications of Grand Theories you mention (AGI and capitalism, AGI and symbolic logic, intelligence and ethics, recursive self-improvement and cultural evolution, etc.), the easiest way to point out why they are dumb is with counterexamples. We can quickly “see” the counterexamples. E.g., if you’re trying to see AGI as the next step in capitalism, you’ll be able to find counterexamples where things become altogether different (misaligned AI killing everything; singleton that brings an end to the need to compete). By contrast, if the theory fits, you’ll find that whenever you try to construct such a counterexample, it is just a non-central (but still valid) manifestation of the theory. Eliezer would probably say that people who are good at this sort of thinking will quickly see how the skeptics’ counterexamples fall relevantly short.
---
The reason I remain a bit skeptical about Eliezer’s general picture: I’m not sure if his thinking about AGI makes implicit questionable predictions about humans
I don’t understand his thinking well enough to be confident that it doesn’t
It seems to me that Eliezer_2011 placed weirdly strong emphasis on presenting humans in ways that matched the pattern “(scary) consequentialism always generalizes as you scale capabilities.” I consider some of these claims false or at least would want to make the counterexamples more salient
For instance:
Eliezer seemed to think that “extremely few things are worse than death” is something all philosophically sophisticated humans would agree with
Early writings on CEV seemed to emphasize things like the “psychological unity of humankind” and talk as though humans would mostly have the same motivational drives, also with respect to how it relates to “enjoying being agenty” as opposed to “grudgingly doing agenty things but wishing you could be done with your obligations faster”
In HPMOR, all the characters are either not philosophically sophisticated or amped up into scary consequentialists plotting all the time
All of the above could be totally innocent matters of wanting to emphasize the thing that other commenters were missing, so they aren’t necessarily indicative of overlooking certain possibilities. Still, the pattern there makes me wonder if maybe Eliezer hasn’t spent a lot of time imagining what sorts of motivations humans can have that make them benign not in terms of outcome-related ethics (what they want the world to look like), but in terms of relational ethics (who they want to respect or assist, what sort of role model they want to follow). It makes me wonder if it’s really true that when you try to train an AI to be helpful and corrigible, the “consequentialism-wants-to-become-agenty-with-its-own-goals part” will be stronger than the “helping this person feels meaningful” part. (Leading to an agent that’s consequentialist about following proper cognition rather than about other world-outcomes.)
FWIW I think I mostly share Eliezer’s intuitions about the arguments where he makes them; I just feel like I lack the part of his picture that lets him discount the observation that some humans are interpersonally corrigible and not all that focused on other explicit goals, and that maybe this means corrigibility has a crisp/natural shape after all.
the easiest way to point out why they are dumb is with counterexamples. We can quickly “see” the counterexamples. E.g., if you’re trying to see AGI as the next step in capitalism, you’ll be able to find counterexamples where things become altogether different (misaligned AI killing everything; singleton that brings an end to the need to compete).
I’m not sure how this would actually work. The proponent of the AGI-capitalism analogy might say “ah yes, AGI killing everyone is another data point on the trend of capitalism becoming increasingly destructive”. Or they might say (as Marx did) that capitalism contains the seeds of its own destruction. Or they might just deny that AGI will play out the way you claim, because their analogy to capitalism is more persuasive than your analogy to humans (or whatever other reasoning you’re using). How do you then classify this as a counterexample rather than a “non-central (but still valid) manifestation of the theory”?
My broader point is that these types of theories are usually sufficiently flexible that they can “predict” most outcomes, which is why it’s so important to pin them down by forcing them to make advance predictions.
On the rest of your comment, +1. I think that one of the weakest parts of Eliezer’s argument was when he appealed to the difference between von Neumann and the village idiot in trying to explain why the next step above humans will be much more consequentialist than most humans (although unfortunately I failed to pursue this point much in the dialogue).
How do you then classify this as a counterexample rather than a “non-central (but still valid) manifestation of the theory”?
My only reply is “You know it when you see it.” And yeah, a crackpot would reason the same way, but non-modest epistemology says that if it’s obvious to you that you’re not a crackpot then you have to operate on the assumption that you’re not a crackpot. (In the alternative scenario, you won’t have much impact anyway.)
Specifically, the situation I mean is the following:
You have an epistemic track record like Eliezer or someone making lots of highly upvoted posts in our communities.
You find yourself having strong intuitions about how to apply powerful principles like “consequentialism” to new domains, and your intuitions are strong because it feels to you like you have a gears-level understanding that others lack. You trust your intuitions in cases like these.
My recommended policy in cases where this applies is “trust your intuitions and operate on the assumption that you’re not a crackpot.”
Maybe there’s a potential crux here about how much of scientific knowledge is dependent on successful predictions. In my view, the sequences have convincingly argued that locating the hypothesis in the first place is often done in the absence of already successful predictions, which goes to show that there’s a core of “good reasoning” that lets you jump to (tentative) conclusions, or at least good guesses, much faster than if you were to try lots of things at random.
My recommended policy in cases where this applies is “trust your intuitions and operate on the assumption that you’re not a crackpot.”
Oh, certainly Eliezer should trust his intuitions and believe that he’s not a crackpot. But I’m not arguing about what the person with the theory should believe, I’m arguing about what outside observers should believe, if they don’t have enough time to fully download and evaluate the relevant intuitions. Asking the person with the theory to give evidence that their intuitions track reality isn’t modest epistemology.
Damn. I actually think you might have provided the first clear pointer I’ve seen about this form of knowledge production, why and how it works, and what could break it. There’s a lot to chew on in this reply, but thanks a lot for the amazing food for thought!
(I especially like that you explained the physical points and put links that actually explain the specific implication)
And I agree (tentatively) that a lot of the epistemology of science stuff doesn’t have the same object-level impact. I was not claiming that normal philosophy of science was required, just that if that was not how we should evaluate and try to break the deep theory, I wanted to understand how I was supposed to do that.
The difference between evolution and gradient descent is sexual selection and predator/prey/parasite relations.
Agents running around inside everywhere—completely changes the process.
Likewise for comparing any kind of flat optimization or search to evolution. I think sexual selection and predator-prey made natural selection dramatically more efficient.
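As a toy sketch of that structural difference (with entirely made-up fitness functions, chosen only to show the shape of the two loops): in flat optimization the objective is fixed once and for all, while in a coevolutionary loop each side’s fitness is defined relative to the current population of the other side, so the landscape keeps moving as the agents move.

```python
import random

# --- Flat optimization: the objective never changes. ---
def gradient_descent(x, steps=100, lr=0.1):
    """Minimize the fixed loss f(x) = (x - 3)^2 by following its gradient."""
    for _ in range(steps):
        grad = 2 * (x - 3)  # gradient of the fixed objective
        x -= lr * grad
    return x

# --- Coevolution: each side's fitness depends on the other population. ---
def select(pop, fitness, noise):
    """Keep the fitter half of the population, refill with mutated copies."""
    ranked = sorted(pop, key=fitness, reverse=True)
    survivors = ranked[: len(pop) // 2]
    children = [s + random.gauss(0, noise) for s in survivors]
    return survivors + children

def coevolve(prey, predators, generations=100, noise=0.1):
    """Toy predator/prey loop: prey score by distance to the nearest predator,
    predators score by closeness to the nearest prey. The 'objective' each side
    faces is redefined every generation by what the other side just did."""
    for _ in range(generations):
        prey = select(prey, lambda p: min(abs(p - q) for q in predators), noise)
        predators = select(predators, lambda q: -min(abs(q - p) for p in prey), noise)
    return prey, predators

if __name__ == "__main__":
    print(gradient_descent(0.0))  # settles near the fixed optimum at 3.0
    prey, predators = coevolve([random.uniform(-1, 1) for _ in range(10)],
                               [random.uniform(-1, 1) for _ in range(10)])
    print(prey[:3], predators[:3])  # no fixed optimum for either side to settle into
```

The second loop is of course a caricature; it is only meant to show where the agents-inside-the-process structure enters.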
So I think it’s pretty fair to object that you don’t take evolution as adequate evidence to expect that this flat, dead, temporary number cruncher will blow up into exponential intelligence.
I think there are other reasons to expect that though.
I haven’t read these 500 pages of dialogues so somebody probably made this point already.