I believe in some form of rationality realism: that is, that there’s a neat mathematical theory of ideal rationality that is relevant in practice to building rational agents and to being rational. I expect there to be a theory of bounded rationality about as mathematically specifiable and neat as electromagnetism (which, after all, in the real world requires a bunch of materials science to tell you about the permittivity of things).
If I didn’t believe the above, I’d be less interested in things like AIXI and reflective oracles. In general, the above tells you quite a bit about my ‘worldview’ related to AI.
Searching for beliefs I hold for which ‘rationality realism’ is crucial by imagining what I’d conclude if I learned that ‘rationality irrealism’ was more right:
I’d be more interested in empirical understanding of deep learning and less interested in an understanding of learning theory.
I’d be less interested in probabilistic forecasting of things.
I’d want to find some higher-level thing that was more ‘real’/mathematically characterisable, and study that instead.
I’d be less optimistic about the prospects for an ‘ideal’ decision and reasoning theory.
My research depends on the belief that rational agents in the real world are likely to have some kind of ordered internal structure that is comprehensible to people. This belief is informed by rationality realism but distinct from it.
How critical is it that rationality is as real as electromagnetism, rather than as real as reproductive fitness? I think the latter seems much more plausible, but I also don’t see why the distinction should be so cruxy.
My suspicion is that Rationality Realism would have captured a crux much more closely if the line weren’t “momentum vs reproductive fitness”, but rather, “momentum vs the bystander effect” (i.e., physics vs social psychology). Reproductive fitness implies something that’s quite mathematizable, but with relatively “fake” models—e.g., evolutionary models tend to assume perfectly separated generations, perfect mixing for breeding, etc. It would be absurd to model the full details of reality in an evolutionary model, although it’s possible to get closer and closer.
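To make “mathematizable but fake” concrete, here is a minimal illustrative sketch (all parameter names and numbers are hypothetical) of a haploid Wright-Fisher-style model, which bakes in exactly the idealizations just mentioned: perfectly separated generations, a fixed population size, and perfect mixing.

```python
# Minimal haploid Wright-Fisher-style sketch (illustrative only).
# The "fake" idealizations are explicit: generations are perfectly
# separated, the population size is fixed, and mixing is uniform.
import random

def wright_fisher(pop_size=1000, p0=0.1, s=0.05, generations=200):
    """Return the frequency of an allele with selective advantage s."""
    p = p0
    for _ in range(generations):
        # Selection: reweight the allele frequency by relative fitness.
        p_sel = p * (1 + s) / (p * (1 + s) + (1 - p))
        # Drift: the entire generation is replaced at once by binomial
        # sampling -- no overlapping generations, no population structure.
        p = sum(random.random() < p_sel for _ in range(pop_size)) / pop_size
    return p

random.seed(0)
print(wright_fisher())  # the advantaged allele typically nears fixation
```

Every one of those assumptions is false of real populations, yet the model is precise enough to prove theorems about.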
I think that’s more the sort of thing I expect for theories of agency! I am curious why you expect electromagnetism-esque levels of mathematical modeling. Even AIXI has a heavy dependence on the choice of programming language. Any theory of bounded rationality which doesn’t ignore poly-time differences (i.e., anything “closer to the ground” than logical induction) has to be hardware-dependent as well.
Meta/summary: I think we’re talking past each other, and hope that this comment clarifies things.
How critical is it that rationality is as real as electromagnetism, rather than as real as reproductive fitness? I think the latter seems much more plausible, but I also don’t see why the distinction should be so cruxy...
Reproductive fitness implies something that’s quite mathematizable, but with relatively “fake” models
I was thinking of the difference between the theory of electromagnetism and the idea that there’s a reproductive fitness function, but that it’s very hard to realistically mathematise or actually determine what it is. The difference between the theory of electromagnetism and mathematical theories of population genetics (which are quite mathematisable but again deal with ‘fake’ models and inputs, and which I guess are more like what you mean?) is smaller, and if pressed I’m unsure which theory rationality will end up closer to.
Separately, I feel weird having people ask me about why things are ‘cruxy’ when I didn’t initially say that they were and without the context of an underlying disagreement that we’re hashing out. Like, either there’s some misunderstanding going on, or you’re asking me to check all the consequences of a belief that I have compared to a different belief that I could have, which is hard for me to do.
I am curious why you expect electromagnetism-esque levels of mathematical modeling. Even AIXI has a heavy dependence on the choice of programming language. Any theory of bounded rationality which doesn’t ignore poly-time differences (i.e., anything “closer to the ground” than logical induction) has to be hardware-dependent as well.
I confess to being quite troubled by AIXI’s language-dependence and the difficulty of getting around it. I do hope that there are ways of mathematically specifying the amount of computation available to a system more precisely than “polynomial in some input”, which should be an input to a good theory of bounded rationality.
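For reference, the language-dependence in question is the one pinned down by the standard invariance theorem: for any two universal machines $U$ and $V$ there is a constant $c_{U,V}$, depending on the machines but not on the string $x$, such that

$$K_U(x) \le K_V(x) + c_{U,V}.$$

AIXI’s Solomonoff-style prior is accordingly fixed only up to a multiplicative factor of roughly $2^{c_{U,V}}$; the constant washes out in the limit, but it can dominate the behaviour of a bounded agent over any finite horizon.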
If I didn’t believe the above,
What alternative world are you imagining, though?
I think I was imagining an alternative world where useful theories of rationality could only be about as precise as theories of liberalism, or current theories about why England had an industrial revolution when it did, and why no other country did instead.
I was thinking of the difference between the theory of electromagnetism and the idea that there’s a reproductive fitness function, but that it’s very hard to realistically mathematise or actually determine what it is. The difference between the theory of electromagnetism and mathematical theories of population genetics (which are quite mathematisable but again deal with ‘fake’ models and inputs, and which I guess are more like what you mean?) is smaller, and if pressed I’m unsure which theory rationality will end up closer to.
[Spoiler-boxing the following response not because it’s a spoiler, but because I was typing a response as I was reading your message and the below became less relevant. The end of your message includes exactly the examples I was asking for (I think), but I didn’t want to totally delete my thinking-out-loud in case it gave helpful evidence about my state.]
I’m having trouble here because yes, the theory of population genetics factors heavily into what I said, but to me reproductive fitness functions (largely) inherit their realness from the role they play in population genetics. So the two comparisons you give seem not very different to me. The “hard to determine what it is” from the first seems to lead directly to the “fake inputs” from the second.
So possibly you’re gesturing at a level of realness which is “how real fitness functions would be if there were not a theory of population genetics”? But I’m not sure exactly what to imagine there, so could you give a different example (maybe a few) of something which is that level of real?
Separately, I feel weird having people ask me about why things are ‘cruxy’ when I didn’t initially say that they were and without the context of an underlying disagreement that we’re hashing out. Like, either there’s some misunderstanding going on, or you’re asking me to check all the consequences of a belief that I have compared to a different belief that I could have, which is hard for me to do.
Ah, well. I interpreted this earlier statement from you as a statement of cruxiness:
If I didn’t believe the above, I’d be less interested in things like AIXI and reflective oracles. In general, the above tells you quite a bit about my ‘worldview’ related to AI.
And furthermore the list following this:
Searching for beliefs I hold for which ‘rationality realism’ is crucial by imagining what I’d conclude if I learned that ‘rationality irrealism’ was more right:
So, yeah, I’m asking you about something which you haven’t claimed is a crux of a disagreement which you and I are having, but I am asking about it because I seem to have a disagreement with you about (a) whether rationality realism is true (pending clarification of what the term means to each of us), and (b) whether rationality realism should make a big difference for several positions you listed.
I confess to being quite troubled by AIXI’s language-dependence and the difficulty of getting around it. I do hope that there are ways of mathematically specifying the amount of computation available to a system more precisely than “polynomial in some input”, which should be an input to a good theory of bounded rationality.
Ah, so this points to a real and large disagreement between us about how subjective a theory of rationality should be (which may be somewhat independent of just how real rationality is, but is related).
I think I was imagining an alternative world where useful theories of rationality could only be about as precise as theories of liberalism, or current theories about why England had an industrial revolution when it did, and why no other country did instead.
Ok. Taking this as the rationality irrealism position, I would disagree with it, and also agree that it would make a big difference for the things you said rationality irrealism would make a big difference for.
So I now think we have a big disagreement around point “a” (just how real rationality is), but maybe not so much around “b” (what the consequences are for the various bullet points you listed).
So, yeah, I’m asking you about something which you haven’t claimed is a crux of a disagreement which you and I are having, but I am asking about it because I seem to have a disagreement with you about (a) whether rationality realism is true (pending clarification of what the term means to each of us), and (b) whether rationality realism should make a big difference for several positions you listed.
For what it’s worth, from my perspective: two months ago I said I fell into a certain pattern of thinking; then raemon put me in the position of saying what that was a crux for; then I was asked to elaborate on why a specific facet of the distinction was cruxy; and along the way the pattern of thinking morphed into something more analogous to a proposition. So I’m happy to elaborate on the consequences of ‘rationality realism’ in my mind (such as they are—the term seems vague enough that I’m a ‘rationality realism’ anti-realist and so don’t want to lean too heavily on the concept) in order to further a discussion, but in the context of an exchange that was initially framed as a debate I’d like to be clear about what commitments I am and am not making.
Anyway, glad to clarify that we have a big disagreement about how ‘real’ a theory of rationality should be, which probably resolves to a medium-sized disagreement about how ‘real’ rationality and/or its best theory actually is.
This is such an interesting use of spoiler tags. I might try it myself sometime.