I think things (minds, physical objects, social phenomena) should be characterized by the computations they could simulate/incarnate. The most straightforward example is a computer that holds a program: it could start running it. The program is not in any way fundamentally there; it’s an abstraction of what the computer physically happens to be. And it still characterizes the computer even if it’s not inevitable that it will start running: the mere possibility that it could start running is significant to the interactive behavior of the computer, to the way it should be understood when making decisions that involve it. There are many smaller programs that simultaneously describe the same computer at multiple levels of abstraction, their incarnations overlapping with each other, and only some of them get simulated in actuality to manifest results of further computation, depending on circumstance.
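A toy illustration of that last point (the stored expression and the `run` flag are made up for the example, not anything specified above): the held program already characterizes the object, whether or not it ever actually runs.

```python
# A toy "computer" that merely holds a program. The stored source is just data --
# an abstraction over the object's state -- yet the possibility of running it is
# already a fact about how this object would behave.
stored_program = "sum(i * i for i in range(10))"

def interact(run: bool):
    # Only under some circumstances does the possible computation get simulated
    # in actuality; the characterization is there either way.
    return eval(stored_program) if run else None

print(interact(run=False))  # None: the program is merely held
print(interact(run=True))   # 285: the same possibility, now manifested
```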
Minds and models are things that are hoards of computations also found elsewhere in the world. They are world-simulators by virtue of simulating many computations that the world also happens to be simulating; they learn these computations by observing the world and incarnating them in themselves.
The role of probability seems to be twofold: characterizing the way in which models learn computations simulated by other things, and characterizing the way in which the simulation of particular computations is only possible for a given thing rather than necessary (a probability distribution over whether it actually gets to simulate a given computation, and the like). In either case, probability is just another thing to be characterized in terms of the computations it could be simulating.
(Since computations can themselves be characterized in terms of other computations they could be simulating, or of which they could be static analyses, there is a presheaf structure in the characterization of a thing in terms of the computations it could possibly simulate. But I’m not sure what the morphisms of the base category should be, what should count as one computation simulating another, and how this should describe possible rather than necessary simulation. The point of this reframing is a reduction of the physical world to the language of abstract computations, while still allowing descriptions of things that are more than individual computations.)
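To make the intended shape explicit (the choice of morphisms is exactly the open question above, so reading a morphism f : c → c′ as “c′ simulates c” below is a placeholder assumption, not a settled definition): a presheaf on a base category of computations would assign to each computation the ways the thing could incarnate it, contravariantly along simulation.

```latex
% Placeholder reading: a morphism f : c \to c' stands for "c' simulates c";
% which morphisms to actually take is the open question in the parenthetical above.
\[
  F : \mathcal{C}^{\mathrm{op}} \to \mathbf{Set},
  \qquad
  F(c) = \{\,\text{ways the thing could incarnate the computation } c\,\},
\]
\[
  F(f : c \to c') : F(c') \to F(c)
  \quad \text{(an incarnation of } c' \text{ restricts to an incarnation of } c\text{).}
\]
```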
I don’t seem to understand how you use the word “thing” here; if it can refer to a physical object, then what computations can a wooden crate do, for instance? If none, then it doesn’t get characterized any differently from a cup, and that seems strange.
Self-supervised learning is a widely applicable illustration: it extracts computations from a phenomenon as circuits of a model. So you might hide some details of a crate and ask which principles reconstruct them; some theory of parallelepipeds might be relevant, or the material properties of wood. These computations take in a problem statement (context) and then arrive at further facts implied by it.
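A minimal sketch of that hide-and-reconstruct framing, on made-up toy data (box dimensions, for the crate example); the linear fit in log space is not any particular method referred to above, just the simplest predictor that recovers the “theory of parallelepipeds” relation volume = length × width × height from observations alone.

```python
import numpy as np

# Hypothetical toy data: each row describes a crate as (length, width, height, volume).
# The last column is redundant -- computable from the first three -- and that
# redundancy is what a self-supervised objective exploits.
rng = np.random.default_rng(0)
dims = rng.uniform(0.2, 2.0, size=(1000, 3))
volume = dims.prod(axis=1, keepdims=True)
data = np.hstack([dims, volume])

# Self-supervised objective: hide the volume column and reconstruct it from the
# visible context. The learned predictor is a (crude) incarnation of a computation
# that the crates themselves already instantiate.
X = np.log(data[:, :3])                                   # visible context
y = np.log(data[:, 3])                                    # hidden detail to reconstruct
A = np.hstack([X, np.ones((len(X), 1))])                  # affine model in log space
w, *_ = np.linalg.lstsq(A, y, rcond=None)

print("learned exponents:", np.round(w[:3], 3))           # ~[1, 1, 1]: volume = l * w * h
```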
This doesn’t cleanly extract individual computations, and it has trouble eliciting potential computations that don’t manifest in actuality under most circumstances. The presence of more general minds helps with that: humans might be able to represent such facts of potentiality about other things and then write them down, so that the less general self-supervised learning can observe their traces in the web corpus.
Another issue is that this lumps together all things from the world: the models learn what the world simulates, not what individual things simulate. This is significant when the things in question are people or civilizations, and understanding them on their own, without distortion from external circumstance, is key to defining respect for their autonomy, or aims and decisions that are their own. (I tried to articulate a related point in this post, though I seem to have failed, since there were multiple convergent objections that missed it. I explain more in my comment replies there.)