You might complain that the reason it doesn’t solve stability is just that the thing doesn’t have goal-pursuits.
Not so—I’d just call it the trivial case and implore us to do better literally at all!
Apart from that, thanks—I have a better sense of what you meant there. “Deep change” as in “no, actually, whatever you pointed to as the architecture of what’s Really Going On… can’t be that, not for certain, not forever.”
I’d go stronger than just “not for certain, not forever”, and I’d worry you’re not hearing my meaning (agree or not). I’d say in practice more like “pretty soon, with high likelihood, in a pretty deep / comprehensive / disruptive way”. E.g. human culture isn’t just another biotic species (you can make interesting analogies but it’s really not the same).
I’d go stronger than just “not for certain, not forever”, and I’d worry you’re not hearing my meaning (agree or not).
That’s entirely possible. I’ve thought about this deeply for entire tens of minutes, after all. I think I might just be erring (habitually) on the side of caution about the kinds of state-changes I say I expect to see from systems I don’t fully understand. OTOH… I have a hard time believing that even (especially?) an extremely capable mind would find it worthwhile to repeatedly rebuild itself from the ground up, such that few of even the ?biggest?/most salient features of a mind stick around for long at all.
I have no idea what goes on in the limit, and I would guess that what determines the ultimate effects (https://tsvibt.blogspot.com/2023/04/fundamental-question-what-determines.html) would become stable in some important senses. Here I’m mainly saying that the stuff we currently think of as being core architecture would be upturned.
I mean it’s complicated… like, all minds are absolutely subject to some constraints—there’s some Bayesian constraint, like you can’t “concentrate caring in worlds” in a way that correlates too much with “multiversally contingent” facts, compared to how much you’ve interacted with the world, or something… IDK what it would look like exactly, and if no one else knows then that’s kinda my point. Like, there’s:
Some math about probabilities, which is just true—information-theoretic bounds and such (a couple of the standard statements are written out after this list). But it’s not clear precisely how this constrains minds, or in what ways.
Some rough-and-ready ways that minds are constrained in practice: obvious stuff like you can’t know what’s in the cupboard without looking, you can’t shove more than such-and-such amount of information through a wire, etc. These are true enough in practice, but their relevant-in-practice implications can still be broken (e.g. by “hypercompressing” images using generative AI; you didn’t truly violate any law of probability, but you did compress way beyond what would be expected in a mundane sense).
You can attempt to state more absolute constraints, but IDK how to do that. Naive attempts just don’t work: e.g. “you can’t gain information just by sitting there with your eyes closed” isn’t true in real life for any meaning of “information” that I know how to state other than a mathematical one. For example, you can gain “logical information”; you can “unpack” information you already got (which is maybe “just” gaining logical information, but I’m not sure how to really distinguish non-logical from logical info); or you can gain / make explicit information about how your own brain works, which is also information about how other brains work.
You can describe or design minds as having some architecture that you think of as Bayesian, e.g. by writing a Bayesian updater in code (a toy version is sketched after this list). But such a program would emerge / be found / rewrite itself so that the hypotheses it entertains, in the descriptive Bayesian sense, are not the things stored in memory and pointed at by the “hypotheses” token in your program.
Another class of constraints like this is the kind discussed in computational complexity theory.
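For concreteness, here are two of the textbook bounds gestured at in the first two items above; these are standard statements from information theory, nothing specific to minds:

\[
\mathbb{E}[\ell(X)] \;\ge\; H(X)
\]

(any uniquely decodable binary code for a source $X$ has expected length at least the entropy $H(X)$), and

\[
C \;=\; \max_{p(x)} I(X;Y)
\]

(a channel cannot reliably carry more than its capacity $C$ bits per use). The unclear part, as said above, is how exactly facts like these constrain minds.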
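And for the “Bayesian updater in code” item: here is a minimal sketch of the kind of program meant, where the hypotheses really are literal objects in memory, pointed at by a “hypotheses” variable. The coin-bias toy world, the class name, and the update loop are all made up here for illustration; they are not anyone’s proposed architecture.

```python
# Toy explicit Bayesian updater: the hypotheses are literal objects in memory.
# The coin-bias example and all names here are illustrative only.

from dataclasses import dataclass


@dataclass
class Hypothesis:
    name: str
    prior: float   # P(h)
    bias: float    # P(heads | h) in a coin-flipping toy world

    def likelihood(self, observation: str) -> float:
        return self.bias if observation == "heads" else 1.0 - self.bias


# The things the program's "hypotheses" token points at:
hypotheses = [
    Hypothesis("fair coin",    prior=0.50, bias=0.5),
    Hypothesis("heads-biased", prior=0.25, bias=0.8),
    Hypothesis("tails-biased", prior=0.25, bias=0.2),
]


def update(beliefs: dict[str, float], observation: str) -> dict[str, float]:
    """One Bayes update: P(h | obs) is proportional to P(obs | h) * P(h)."""
    unnormalized = {h.name: beliefs[h.name] * h.likelihood(observation) for h in hypotheses}
    total = sum(unnormalized.values())
    return {name: p / total for name, p in unnormalized.items()}


beliefs = {h.name: h.prior for h in hypotheses}
for obs in ["heads", "heads", "tails", "heads"]:
    beliefs = update(beliefs, obs)

print(beliefs)  # posteriors after the four observations; the heads-biased hypothesis has gained mass
```

The point in that item is that even if a system is started off with this explicit structure, whatever ends up doing the real predictive work in a capable version of it needn’t line up with the “hypotheses” list that was written down.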
So there are probably constraints, but we don’t really understand them and definitely don’t know how to wield them; in particular, we understand the ones about goal-pursuits much less well than we understand the ones about probability.