most satisficers should work together to defeat most maximizers most of the way
[edit: intended tone: humorously imprecise]
Is “should” a recommendation or a prediction? Given that a maximizer is just a satisficer below the satisfaction level, how does this work in practice?
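For concreteness, here is a minimal sketch of that identification (the scoring function, threshold, and option set are all invented for illustration): a satisficer searches until something clears its bar, and a maximizer is just the special case where the bar is unattainable.

```python
import math
import random

def satisficer(options, score, threshold):
    """Search options in order; stop as soon as one scores at or above threshold.
    If nothing clears the bar, fall back to the best option seen so far."""
    best = None
    for o in options:
        if best is None or score(o) > score(best):
            best = o
        if score(o) >= threshold:
            return o          # satisfied: stop searching
    return best               # never satisfied: behaved exactly like a maximizer

# A "maximizer" is the threshold = +infinity special case of the same agent.
options = [random.random() for _ in range(1000)]
maximizing_choice  = satisficer(options, score=lambda x: x, threshold=math.inf)
satisficing_choice = satisficer(options, score=lambda x: x, threshold=0.9)
print(maximizing_choice, satisficing_choice)
```

Below the threshold the two are indistinguishable.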
My suspicion is that cooperation and defeat are determined by specifics of the topic and context, not the types of goal-seeking of the agents in question.
The OP was humorous, but I do think there’s something real underneath somewhere. This is going to be like trying to get something useful out of a high-temperature language model run, but here goes:
It seems to me that one runs into precision issues trying to encode a maximizer. Almost no matter how you represent the model of the senses, whatever approximation of mechanism inference you use to estimate dynamics, and no matter what intentions over the future are encoded in the interference patterns of your internal updates’ implications, you end up with a system trying to maintain itself out to the positive limit of spacetime, reaching as far into the universe as it can. In the process of maintaining itself that far out, it has to choose a location on a rate-distortion curve: effectively all good predictors of the world are lossy, in that they don’t try to model the detailed behavior of irrelevant atoms that only matter in aggregate, so their preferences can only be defined imprecisely. The same imprecision applies to AI. Even though an AI can in principle be more precise than us about what it wants, the physical systems it has preferences about will always be chaotic and impossible to represent fully in any smaller physical system, so compression will always be lossy, and there will always be precision limitations, no matter how strong your multi-hop reasoning.
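To gesture at the “lossy predictor, therefore imprecise preferences” step more concretely (a toy sketch, with the coarse-graining and the preference both made up): any preference the agent can encode is a function of its compressed world model, so world states that the compression merges cannot be ranked differently, however carefully the preference is written down.

```python
# Toy coarse-grained world model: the "true" world state is a real number in
# [0, 1), but the model only keeps `bits` bits about it (a lossy compression).
def compress(state, bits=4):
    cells = 2 ** bits
    return int(state * cells)          # index of the coarse cell the state falls in

def preference(model_state):
    # Any preference the agent encodes can only be a function of the compressed
    # state, so it is constant inside each coarse cell.
    return -abs(model_state - 10)      # "want to be near cell 10" (arbitrary choice)

s1, s2 = 0.651, 0.687                  # physically distinct world states...
assert compress(s1) == compress(s2)    # ...that the lossy model cannot tell apart,
assert preference(compress(s1)) == preference(compress(s2))
# so no preference expressible in this model can rank them differently.
```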
Even when you have very strong omnidirectional multi-hop reasoning, including all the variable-assignment inversions that temporary counterfactual assignment allows, and you want to maintain yourself, it’s still a constant struggle against noise to do so. Any process of self-maintenance can only ever be precise enough to maintain the system approximately. For perfect self-healing, every part of the system would need to know enough about every other part that redundancy could restore whatever is lost.
And so the amount of redundancy necessary to precisely and reliably encode a maximizer is very, very high. Any time you have a learned system with mutation in some of the components, or more generally noise in some of the components, even a very strong reasoner cannot trust itself to stay the same in a physical system unless it builds the tools to stay the same.
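For a rough sense of what “very, very high” means in the simplest possible scheme (numbers invented; repetition plus majority vote is just the easiest error-correcting code to write down, not a claim about how a real system would do it):

```python
from math import comb

def majority_vote_failure(n, p):
    """Probability that more than half of n noisy copies of a bit are corrupted,
    given each copy independently flips with probability p."""
    return sum(comb(n, k) * p**k * (1 - p)**(n - k) for k in range(n // 2 + 1, n + 1))

p = 0.01                       # per-copy corruption probability (made up)
for target in (1e-3, 1e-6, 1e-9, 1e-12):
    n = 1
    while majority_vote_failure(n, p) > target:
        n += 2                 # keep n odd so the vote can never tie
    print(f"failure < {target:g}: need {n} copies per bit")
```

The copy count grows roughly linearly in the number of orders of magnitude of reliability you demand, and that is per bit of the goal specification, on top of whatever redundancy the rest of the machinery needs.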
But surviving things generally don’t want to stay the same in every detail; they want their mechanism to stay the same.
It seems to me that strong artificial systems that want to maximize their own durability end up not wanting to stay maximizers of much besides their own durability, because maximization becomes arbitrarily numerically unstable the harder you push it. Past some point on the rate-distortion curve you have dealt with a component problem well enough, so in order to maximize your own durability you need to satisfice every component step.
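One concrete facet of that instability (a toy model I’m substituting in, sometimes called the optimizer’s curse, not anything from the OP): when the values being maximized are noisy estimates, a hard argmax preferentially selects the options whose noise happened to be largest, and the selected-for error grows the harder you search; a satisficing rule that takes anything above a fixed bar doesn’t chase the noise that way.

```python
import random

random.seed(0)
TRUE_VALUE = 0.0               # every option is actually equally good
NOISE = 1.0                    # estimation noise per option

def estimate():
    return TRUE_VALUE + random.gauss(0, NOISE)

def maximize(n_options):
    # Pick the option with the highest *estimated* value.
    return max(estimate() for _ in range(n_options))

def satisfice(n_options, threshold=0.5):
    # Accept the first option whose estimate clears the threshold.
    for _ in range(n_options):
        e = estimate()
        if e >= threshold:
            return e
    return e                   # nothing cleared the bar; take the last option

trials = 10_000
for n in (10, 100, 1000):
    max_gain = sum(maximize(n) for _ in range(trials)) / trials
    sat_gain = sum(satisfice(n) for _ in range(trials)) / trials
    print(f"n={n:5d}  maximizer's perceived gain: {max_gain:4.2f}   satisficer's: {sat_gain:4.2f}")
```

Every option’s true value here is zero, so everything above zero in the printout is selected noise: it keeps growing with the maximizer’s search effort and stays flat for the satisficer.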
So then the question is which beings get included in the set of shapes that co-maximize each other’s durability. By co-maximizing the durability of allies, those systems can be satisficers about universe control and still get the key spacetime-scale maximizing they wanted.
It’s ultimately a question of which durable systems can trust each other to co-protect one another into the arbitrarily long future.