Physics theories import low-complexity mathematical models. “Goddidit” imports complicated human notions of agency.
Frankly, I think this idea is attractive but ultimately an error. It is indeed true that to an analytical mind with an interest in physics, mathematics feels a lot less complex, in some sense, than intuitive notions of agency. But no matter how much physics or psychology you know, you don’t have introspective access to the universal prior—maybe the prior privileges math over psychology, or maybe it doesn’t. All we have is our evidence, often in the form of conclusions drawn from intuitive analyses of what hypotheses have or haven’t tended to bear intellectual or instrumental fruit in the past—id est, we’re awfully close to talkin’ ’bout pragmatics and decision theory here. And yes, mathematical explanations have been surprisingly effective. But if you look at human history, hypotheses that make use of “complicated human notions of agency” have also been pretty damn effective. It’s not obvious what notion of complexity would massively privilege the former over the latter, and at any rate, we have no way of knowing, because you can’t find the universal prior in your backyard.
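To put the machine-dependence point in standard notation (this formalism is my gloss, not something in the original comment): Solomonoff's prior and Kolmogorov complexity are both defined relative to a choice of universal machine, and the invariance theorem only relates two such choices up to an additive constant.

```latex
% Solomonoff prior over strings x, relative to a universal (prefix) machine U:
M_U(x) \;=\; \sum_{p \,:\, U(p) = x\ast} 2^{-\ell(p)}

% Invariance theorem: for universal machines U, V there is a constant
% c_{U,V} (roughly, the length of a cross-compiler from V to U) with
K_U(x) \;\le\; K_V(x) + c_{U,V} \qquad \text{for all } x.
```

Nothing in the theorem bounds the size of $c_{U,V}$, so a universal machine whose primitives happen to encode agency-flavored concepts assigns those hypotheses short codes, and the formalism alone does not adjudicate between it and a math-flavored machine.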
It is indeed true that to an analytical mind with an interest in physics, mathematics feels a lot less complex,
We have objective verification of the low complexity of formalized mathematical theories because we can look at the length of their formal description in, say, first-order logic.
But no matter how much physics or psychology you know, you don’t have introspective access to the universal prior—maybe the prior privileges math over psychology, or maybe it doesn’t.
Are you really suggesting that some model of computation based on human ideas might work better than, say, lambda calculus for computing Kolmogorov complexity for Solomonoff Induction? I’m not sure how to argue with that, but I would appreciate it if you would state it explicitly.
We have objective verification of the low complexity of formalized mathematical theories because we can look at the length of their formal description in, say, first-order logic.
Right, and that’ll be important if we ever run into aliens that for some reason can’t wrap their brains around English, but instead can figure out our category theory notation and so on. Or if we’re trying to build an FAI, or collaborate with the aforementioned aliens to build FAI.
I’m not sure how to argue with that but I would appreciate it if you would state it explicitly.
Apologies, inferential distance, and there are a few meta-level points that I think are important to communicate. But there’s inferential distance on the meta level too.