Say you have a Bayesian reasoner. It’s got hypotheses; it’s got priors on them; it’s got data. So you watch it doing stuff. What happens? Lots of stuff changes, tide goes in, tide goes out, but it’s still a Bayesian, can’t explain that. The stuff changing is “not deep”. There’s something stable though: the architecture in the background that “makes it a Bayesian”. The update rules, and the rest of the stuff (for example, whatever machinery takes a hypothesis and produces “predictions” which can be compared to the “predictions” from other hypotheses). And: it seems really stable? Like, even reflectively stable, if you insist?
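To make the picture concrete, here's a minimal sketch (all names and toy hypotheses are mine, purely illustrative) of that stable architecture: hypotheses, priors over them, and one fixed update rule that never changes no matter what the data does.

```python
# Toy sketch of the stable "Bayesian architecture": beliefs churn with every
# observation, but bayes_update itself -- the background machinery -- never does.

def bayes_update(priors, likelihoods):
    """One Bayesian update: posterior is proportional to prior times likelihood."""
    unnormalized = {h: priors[h] * likelihoods[h] for h in priors}
    z = sum(unnormalized.values())
    return {h: p / z for h, p in unnormalized.items()}

# Two made-up hypotheses about a coin, and the likelihood each assigns to heads.
priors = {"fair": 0.5, "biased": 0.5}
likelihood_of_heads = {"fair": 0.5, "biased": 0.9}

# Observe one head: the *contents* of belief shift toward "biased",
# while the update rule stays fixed.
posterior = bayes_update(priors, likelihood_of_heads)
```

The point the text is making: everything inside `posterior` is "not deep" change, while `bayes_update` and the machinery around it is the candidate for what's stable.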
So does this solve stability? I would say, no. You might complain that the reason it doesn’t solve stability is just that the thing doesn’t have goal-pursuits. That’s true but it’s not the core problem. The same issue would show up if we for example looked at the classical agent architecture (utility function, counterfactual beliefs, argmaxxing actions).
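The classical agent architecture mentioned here can be sketched the same way (a toy illustration with hypothetical names, not anyone's actual proposal): a utility function, beliefs about what each action would lead to, and argmax over expected utility. The same stability observation applies: the loop is fixed even as its contents change.

```python
# Toy sketch of the classical agent architecture: utility function,
# counterfactual outcome beliefs, argmax over expected utility.

def expected_utility(action, outcome_probs, utility):
    """Expected utility of an action under the agent's outcome beliefs."""
    return sum(p * utility(outcome) for outcome, p in outcome_probs[action].items())

def choose(actions, outcome_probs, utility):
    # outcome_probs[a] encodes the "counterfactual belief": what the agent
    # thinks would happen if it took action a.
    return max(actions, key=lambda a: expected_utility(a, outcome_probs, utility))

actions = ["stay", "go"]
outcome_probs = {
    "stay": {"nothing": 1.0},
    "go": {"reward": 0.6, "nothing": 0.4},
}
utility = lambda outcome: {"reward": 10.0, "nothing": 0.0}[outcome]

best = choose(actions, outcome_probs, utility)  # picks "go": EU 6.0 vs 0.0
```

Here the beliefs and the chosen action are the shallow, churning part; `choose` and `utility` play the role of the supposedly stable core, and the text's claim is that this doesn't solve stability either.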
The problem is that the agency you can write down is not the true agency. “Deep change” is change that changes elements that you would have considered deep, core, fundamental, overarching… Change that doesn’t fit neatly into the mind, change that isn’t just another piece of data that updates some existing hypotheses. See https://tsvibt.blogspot.com/2023/01/endo-dia-para-and-ecto-systemic-novelty.html
You might complain that the reason it doesn’t solve stability is just that the thing doesn’t have goal-pursuits.
Not so—I’d just call it the trivial case and implore us to do better literally at all!
Apart from that, thanks—I have a better sense of what you meant there. “Deep change” as in “no, actually, whatever you pointed to as the architecture of what’s Really Going On… can’t be that, not for certain, not forever.”
I’d go stronger than just “not for certain, not forever”, and I’d worry you’re not hearing my meaning (agree or not). I’d say in practice more like “pretty soon, with high likelihood, in a pretty deep / comprehensive / disruptive way”. E.g. human culture isn’t just another biotic species (you can make interesting analogies but it’s really not the same).
I’d go stronger than just “not for certain, not forever”, and I’d worry you’re not hearing my meaning (agree or not).
That’s entirely possible. I’ve thought about this deeply for entire tens of minutes, after all. I think I might just be erring (habitually) on the side of caution about the kinds of state-changes I expect to see from systems I don’t fully understand. OTOH… I have a hard time believing that even (especially?) an extremely capable mind would find it worthwhile to repeatedly rebuild itself from the ground up, such that few of even the ?biggest?/most salient features of a mind stick around for long at all.
I have no idea what goes on in the limit, and I would guess that what determines the ultimate effects (https://tsvibt.blogspot.com/2023/04/fundamental-question-what-determines.html) would become stable in some important senses. Here I’m mainly saying that the stuff we currently think of as being core architecture would be upturned.
I mean it’s complicated… like, all minds are absolutely subject to some constraints—there’s some Bayesian constraint, like you can’t “concentrate caring in worlds” in a way that correlates too much with “multiversally contingent” facts, compared to how much you’ve interacted with the world, or something… IDK what it would look like exactly, and if no one else knows then that’s kinda my point. Like, there’s:
Some math about probabilities, which is just true—information-theoretic bounds and such. But: not clear precisely how this constrains minds in what ways.
Some rough-and-ready ways that minds are constrained in practice: obvious stuff like you can’t know what’s in the cupboard without looking, you can’t shove more than such-and-such amount of information through a wire, etc. These are true enough in practice, but their relevant-in-practice implications can also be broken (e.g. by “hypercompressing” images using generative AI; you didn’t truly violate any law of probability, but you did compress way beyond what would be expected in a mundane sense).
You can attempt to state more absolute constraints, but IDK how to do that. Naive attempts just don’t work, e.g. “you can’t gain information just by sitting there with your eyes closed” just isn’t true in real life for any meaning of “information” that I know how to state other than a mathematical one (because for example you can gain “logical information”, or because you can “unpack” information you already got (which is maybe “just” gaining logical information but I’m not sure, or rather I’m not sure how to really distinguish non/logical info), or because you can gain/explicitize information about how your brain works which is also information about how other brains work).
You can describe or design minds as having some architecture that you think of as Bayesian. E.g. writing a Bayesian updater in code. But such a program would emerge / be found / rewrite itself so that the hypotheses it entertains, in the descriptive Bayesian sense, are not the things stored in memory and pointed at by the “hypotheses” token in your program.
Another class of constraints like this are those discussed in computational complexity theory.
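The first kind of constraint above, the “math about probabilities, which is just true,” can be made concrete with a standard example (my choice of illustration, not from the text): Shannon’s source-coding bound, which says no lossless code can have an expected length below the entropy of the source. The distribution and code lengths here are made up.

```python
import math

# One constraint that "is just true": expected code length >= entropy H(p)
# for any lossless code over symbols drawn from p.

def entropy(p):
    """Shannon entropy in bits of a probability distribution given as a dict."""
    return -sum(q * math.log2(q) for q in p.values() if q > 0)

p = {"a": 0.5, "b": 0.25, "c": 0.25}
code_lengths = {"a": 1, "b": 2, "c": 2}  # e.g. a=0, b=10, c=11 (prefix-free)

H = entropy(p)                                     # 1.5 bits
avg_len = sum(p[s] * code_lengths[s] for s in p)   # 1.5 bits: meets the bound exactly

# The inequality avg_len >= H holds for any lossless code; how bounds like this
# constrain *minds*, as opposed to codes, is exactly what's unclear.
```

This is the sense in which the math is airtight while its implications for minds remain fuzzy: the theorem constrains codes, and only indirectly whatever a mind is doing.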
So there are probably constraints, but we don’t really understand them and definitely don’t know how to wield them; in particular, we understand the ones about goal-pursuits much less well than we understand the ones about probability.
Say more about point 2 there? Thinking about 5 and 6 though—I think I now maybe have a hopeworthy intuition worth sharing later.