Imagine there’s a Game of Life world with a society of evolved intelligent creatures inside it, and you’re observing the world from the outside. The creatures communicate with each other, “refer” to things in their environment, introspect on their internal state, consider counterfactuals, make decisions, etc. You from the outside can understand all these cognitive processes fully mechanistically, as tools that don’t require any ontologically fundamental special sauce to work. When they refer to things, it’s “just” some mechanism for constructing models of their environment and sharing information about it between each other or something, and you can understand that mechanism without ever being compelled to think that “fundamental deixis” or something is involved. You will even observe some of the more introspective agents formulate theories about how explaining their experience seems to require positing fundamental deixis, or fundamental free will, or fundamental qualia, but you’ll be able to tell from the outside that they’re confused, and everything about them is fully explicable without any of that stuff.
Right?
Then the obvious question is: should I think I’m different from the Game of Life creatures? Should I think that I need ontologically fundamental special sauce to do things that they seem to be able to do without it?
This isn’t really a tight argument, but I’d like to see how your views deal with thought experiments of this general kind.
Suppose we run the GOL simulation. What we have ontological access to is the state at each time, as a Boolean grid. We do not, yet, have ontological access to: agents in the world, references made by these agents, the agents’ cognitive processes, etc.
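To make the outside view concrete, here is a minimal sketch (my own illustration in Python, not anything from the original discussion) of what we actually have access to: a Boolean grid and a transition rule, with nothing in the code labelled “agent”, “reference”, or “cognition”.

```python
from collections import Counter

def step(live):
    """Advance a Game of Life state, given as a set of live (x, y) cells."""
    # Count live neighbours of every cell adjacent to at least one live cell.
    counts = Counter(
        (x + dx, y + dy)
        for (x, y) in live
        for dx in (-1, 0, 1)
        for dy in (-1, 0, 1)
        if (dx, dy) != (0, 0)
    )
    # Standard rule: birth on exactly 3 neighbours, survival on 2 or 3.
    return {cell for cell, n in counts.items()
            if n == 3 or (n == 2 and cell in live)}

# The entire "world history" we have ontological access to: a sequence of grids.
state = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}  # a glider
history = [state]
for _ in range(8):
    state = step(state)
    history.append(state)
```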
To find these things in the world, we need to do some philosophical work that bridges between GOL state-by-state data and these concepts (such as “references”) that we are familiar with through use, and are actually using in the course of studying GOL-world.
Our notion of things like “imagination” has to start without a physical definition; it is grounded in our familiarity with using the concept. We can consider mappings between this non-physical concept and the state-by-state ontology, and judge them by how well they fit.
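As a toy illustration of one such mapping (again my own sketch, not a proposal from the post): a candidate bridge might map the use-familiar notion of a “persistent moving object” onto the state-by-state ontology by checking whether the live pattern recurs, translated, after some number of ticks. Competing bridges could then be judged by how well their verdicts match the judgements we already make when using the concept.

```python
def is_translated_copy(a, b):
    """True if live-cell set b equals live-cell set a shifted by one fixed offset."""
    if len(a) != len(b) or not a:
        return False
    ax, ay = min(a)  # lexicographic minimum; translation preserves the ordering
    bx, by = min(b)
    dx, dy = bx - ax, by - ay
    return {(x + dx, y + dy) for (x, y) in a} == b

def looks_like_moving_object(history, period=4):
    """Candidate bridge: a "persistent moving object" is a pattern that recurs,
    translated but not unchanged, after `period` ticks (e.g. a glider)."""
    return (len(history) > period
            and history[0] != history[period]
            and is_translated_copy(history[0], history[period]))
```

Applied to the glider history above this bridge says “yes”; applied to a blinker it says “no”. Whether those verdicts track our pre-theoretic concept well is exactly the kind of fit judgement at issue.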
Now to the question of whether I’m “ontologically special” compared to GOL agents. Since I’m taking a relativistic perspective, the answer is “yes” in a tautological way, in that my ontology is my ontology. That is, I’m here and now, and GOL agents aren’t here and now. I have access to my vision directly, and access to the vision of GOL agents through a complex process of running computer simulations, doing reductionist philosophy, and so on.
However, I could also (after doing reductionist philosophy) take the perspective of the GOL agents and see that, if they adopt a similar philosophy, they must consider themselves ontologically special and regard me as some agent they can access only in a very indirect manner.
At this point I (and they) may want semantics for a joint ontology that can faithfully represent both of our perspectives, in a neutral way that respects the symmetry (and uses it for compression). This is a worthy goal, but it requires, at minimum, that each of us understand our own initial perspective. (I wrote previously about this sort of Copernican move here.)
Even after I’ve done this, my world model will treat me as special in the sense that the data used to build it comes directly from my senses (not the GOL agents’). But the symmetry between us will still be represented in the model, and there will be semantics for a joint model that includes what is common between our perspectives.
I do not make any assertion that this neutral joint map will treat anyone as ontologically special compared to anyone else. It really shouldn’t unless there is some important asymmetry.