Which brings me to my main disagreement with bottom-up approaches: they assume we already have a physics theory in hand and are trying to locate consciousness within that theory. Yet we needed conscious observations, and at least some preliminary theory of consciousness, to even get to a low-level physics theory in the first place. Scientific observations are a subset of conscious experience, and the core task of science is to predict scientific observations; this requires pumping a type of conscious experience out of a physical theory, which in turn requires at least some preliminary theory of consciousness. Anthropics makes this clear, since theories such as SSA (the Self-Sampling Assumption) and SIA (the Self-Indication Assumption) require identifying which observers are in our reference class.
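For concreteness, here is the standard formal contrast between the two rules, in my own notation rather than anything taken from the discussion above: let w range over possible worlds, I be your total evidence, n_I(w) the number of reference-class observers in w with evidence I, and n(w) the total number of reference-class observers in w. Then, roughly:

```latex
% Standard statements of the two anthropic rules (my notation; its application
% to this discussion is illustrative). w: world; I: your total evidence;
% n_I(w): reference-class observers in w with evidence I; n(w): all
% reference-class observers in w.
P_{\mathrm{SSA}}(w \mid I) \;\propto\; P(w)\,\frac{n_I(w)}{n(w)}
\qquad\qquad
P_{\mathrm{SIA}}(w \mid I) \;\propto\; P(w)\,n_I(w)
```

Both rules need n_I and n to be well defined, which is exactly the point: counting observers in a reference class presupposes some account of which physical systems have observations at all.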
There’s something a bit off about this that’s hard to put my finger on. To gesture vaguely at it: it’s not obvious to me that this problem ought to have a solution. At the end of the day, we’re thinking meat, and we think because thinking makes the meat better at becoming more meat. We have experiences correlated with our environments because agents whose experiences aren’t correlated with their environments don’t arise from chemical soup without cause.
My guess is that if we want to understand “consciousness”, the best approach would be functionalist. What work is the inner listener doing? It has to be doing something, or it wouldn’t be there.
Do you feel you have an angle on that question? Would be very curious to hear more if so.
Not sure how satisfying this is, but here’s a rough sketch:
Anthropically, the meat we’re paying attention to is meat that implements an algorithm with general cognition, including the capacity to build physics theories from observations. Such meat may become more common either because physics theories are generally useful, or because general cognition (which does physics among other things) is generally useful. The algorithm running on the meat selects physics theories that explain its observations. To explain the observations, a physics theory has to bridge between the subject matter of physics and the observational inputs the algorithm uses to build and apply the theory. The thing being bridged to isn’t, according to the bridging law, identical to the subject matter of low-level physics (atoms or whatever), nor is there a very simple translation between the two, though there is a complex one. The presence of a complex but not simple load-bearing translation motivates further investigation in search of a more parsimonious theory.

Additionally, the algorithm implemented on the meat does things other than building physics theories, and those things use infrastructure similar to the infrastructure that builds physics theories from observations. Natural-category considerations therefore make it parsimonious to posit a broader class of entity than “physical observation”, one that also includes observations not directly used to build physical theories. “Experience” seems a fitting name for such a class.
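One toy way to render the parsimony part of this sketch, in an MDL-style framing of my own (not a formalism from the discussion, and every name below is hypothetical): the agent prefers the (theory, bridge) pair that minimizes total description length, i.e. bits for the physics theory, plus bits for the bridging law, plus bits to encode the observations given both.

```python
# Toy MDL-style rendering of the sketch above (illustrative framing only).
# A candidate explanation pairs a physics theory with a bridging law; its
# total cost is a two-part code over the observations.

from dataclasses import dataclass
from typing import Callable, Sequence


@dataclass
class Candidate:
    theory_bits: float   # description length of the physics theory
    bridge_bits: float   # description length of the bridging law
    # bits needed to encode the observations, given theory + bridge
    residual_bits: Callable[[Sequence[float]], float]


def total_cost(c: Candidate, obs: Sequence[float]) -> float:
    """Two-part-code cost of explaining the observations with this candidate."""
    return c.theory_bits + c.bridge_bits + c.residual_bits(obs)


def select(candidates: Sequence[Candidate], obs: Sequence[float]) -> Candidate:
    """Pick the most parsimonious explanation of the observations."""
    return min(candidates, key=lambda c: total_cost(c, obs))


# A complex but load-bearing bridge (high bridge_bits) can still win if no
# simpler bridge compresses the data; its irreducible cost is exactly what
# motivates searching for a more parsimonious theory.
complex_bridge = Candidate(100.0, 5000.0, lambda o: 10.0 * len(o))
simple_bridge = Candidate(100.0, 200.0, lambda o: 500.0 * len(o))  # fits badly

observations = [0.0] * 100
best = select([complex_bridge, simple_bridge], observations)
assert best is complex_bridge  # 6100.0 bits vs 50300.0 bits
```

On this framing, “a complex but not simple load-bearing translation” just means bridge_bits is large yet no cheaper bridge explains the data, which is what leaves room for a better theory to exist.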