I have no idea what goes on in the limit, and I would guess that what determines the ultimate effects (https://tsvibt.blogspot.com/2023/04/fundamental-question-what-determines.html) would become stable in some important senses. Here I’m mainly saying that the stuff we currently think of as being core architecture would be upturned.
I mean, it’s complicated… like, all minds are absolutely subject to some constraints—there’s some Bayesian constraint, like you can’t “concentrate caring in worlds” in a way that correlates too much with “multiversally contingent” facts, compared to how much you’ve interacted with the world, or something… IDK what it would look like exactly, and if no one else knows either, then that’s kinda my point. Like, there’s:
Some math about probabilities, which is just true—information-theoretic bounds and such. But it’s not clear precisely how, or in what ways, this math constrains minds. (One such bound is written out below, after these points.)
Some rough-and-ready ways that minds are constrained in practice, such as the obvious stuff: you can’t know what’s in the cupboard without looking, you can’t shove more than such-and-such amount of information through a wire, etc. These are true enough in practice, but their relevant-in-practice implications can also be broken (e.g. by “hypercompressing” images using generative AI; you didn’t truly violate any law of probability, but you did compress way beyond what would be expected in a mundane sense). (A toy sketch of the hypercompression point is given below.)
You can attempt to state more absolute constraints, but IDK how to do that. Naive attempts just don’t work. E.g. “you can’t gain information just by sitting there with your eyes closed” just isn’t true in real life for any meaning of “information” that I know how to state other than a purely mathematical one: you can gain “logical information”; you can “unpack” information you already got (which is maybe “just” gaining logical information, but I’m not sure, or rather I’m not sure how to really distinguish logical from non-logical information); and you can gain/explicitize information about how your own brain works, which is also information about how other brains work.
You can describe or design minds as having some architecture that you think of as Bayesian, e.g. by writing a Bayesian updater in code. But such a program would emerge, or be found, or rewrite itself, so that the hypotheses it entertains in the descriptive Bayesian sense are not the things stored in memory and pointed at by the “hypotheses” variable in your program. (A minimal sketch of such an explicit updater appears below.)
Another class of constraints like this is the one discussed in computational complexity theory.
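To make “information-theoretic bounds and such” slightly more concrete, here is one standard example, Shannon’s source coding bound (just a textbook theorem, not anything specific to this post): no uniquely decodable code can have expected length below the source’s entropy.

```latex
% Shannon's source coding bound: for any uniquely decodable code C for a
% discrete source X, the expected codeword length (in bits) is at least H(X).
\mathbb{E}\bigl[\ell_C(X)\bigr] \;\ge\; H(X) \;=\; -\sum_{x} p(x)\,\log_2 p(x)
```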
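On the “hypercompressing” example, here is a toy sketch (the fake model, the seed search, and all the names are made up purely for illustration) of how a shared generative model lets the “compressed file” be tiny. The payload is just a seed; the information that reconstructs the image mostly lives in model weights that sender and receiver already share, so no law of probability is violated.

```python
# Toy sketch of "hypercompression": the compressed payload is a short seed,
# and the heavy lifting is done by a generative model both sides already have.
# Everything here is made up for illustration.

import random

def shared_model(seed: int, length: int) -> list[int]:
    """Deterministic 'generative model' that sender and receiver both possess."""
    rng = random.Random(seed)
    return [rng.randrange(256) for _ in range(length)]

def compress(image: list[int], max_seed: int = 10_000) -> int:
    """Return the seed whose generated output best approximates the image."""
    def error(seed: int) -> int:
        generated = shared_model(seed, len(image))
        return sum(abs(a - b) for a, b in zip(generated, image))
    return min(range(max_seed), key=error)

def decompress(seed: int, length: int) -> list[int]:
    """Regenerate the image from the tiny payload plus the shared model."""
    return shared_model(seed, length)

if __name__ == "__main__":
    image = shared_model(seed=1234, length=64)       # pretend this is a photo
    payload = compress(image)                        # a single small integer
    reconstruction = decompress(payload, len(image))
    print("payload:", payload, "exact match:", reconstruction == image)
```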
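And here is a minimal sketch of the kind of explicitly Bayesian program meant above (the coin example is made up): the “hypotheses” are literally keys in a dict in memory, updated by Bayes’ rule. The contrast being drawn is that a mind that grows out of, or around, such a program need not keep its de facto hypotheses in that variable.

```python
# Minimal explicit Bayesian updater: the hypotheses are literally entries in a
# dict stored in memory. (A toy illustration; the coin hypotheses are made up.)

def bayes_update(prior: dict[str, float],
                 likelihood: dict[str, float]) -> dict[str, float]:
    """Multiply prior by likelihood and renormalize over the named hypotheses."""
    unnormalized = {h: prior[h] * likelihood[h] for h in prior}
    total = sum(unnormalized.values())
    return {h: p / total for h, p in unnormalized.items()}

# Explicitly stored hypotheses about a coin's bias.
hypotheses = {"fair": 0.5, "heads-biased": 0.25, "tails-biased": 0.25}
p_heads = {"fair": 0.5, "heads-biased": 0.9, "tails-biased": 0.1}

# Observe three heads in a row; probability mass shifts toward "heads-biased".
for _ in range(3):
    hypotheses = bayes_update(hypotheses, p_heads)

print(hypotheses)
```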
So there are probably constraints, but we don’t really understand them and definitely don’t know how to wield them; in particular, we understand the constraints on goal-pursuits much less well than the ones about probability.