Seems right to me. For example, by most natural notions of “agency” that don’t introduce anything crazy, we should probably count thermostats as agents, since they act to make the world different based on their inputs. But such deflationary notions of agency make a lot of people deeply uncomfortable, because they violate the very human-centric intuition that simple things don’t have “real” agency once we understand their mechanism, whereas things with agency seem complex in a way that makes it hard to see how they work.
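To make the deflationary reading concrete, here’s a toy sketch of a thermostat as a sense→act loop (all names, thresholds, and dynamics here are invented for illustration, not any real control API):

```python
# A minimal "deflationary agent": observe the world, act to change it.

def thermostat_step(temperature: float, setpoint: float) -> str:
    """Map an observation of the world to an action that changes it."""
    if temperature < setpoint - 0.5:   # small dead-band to avoid rapid toggling
        return "heat_on"
    if temperature > setpoint + 0.5:
        return "heat_off"
    return "no_op"

# Close the loop: the action feeds back into the (toy) world state.
temp, setpoint, heating = 18.0, 21.0, False
for _ in range(20):
    action = thermostat_step(temp, setpoint)
    if action == "heat_on":
        heating = True
    elif action == "heat_off":
        heating = False
    temp += 0.4 if heating else -0.2   # invented dynamics: heating vs. cooling drift
```

On the deflationary view, that’s the whole story: inputs, a mechanism, and world-changing outputs. The discomfort comes from the fact that nothing here is left unexplained.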
Yeah, that seems like a big part of it. I remember posting to that effect some years ago https://www.lesswrong.com/posts/NptifNqFw4wT4MuY8/agency-is-bugs-and-uncertainty
But given that we want to understand “real” agency, not some “mysterious agency” stemming from not understanding the inner workings of some glorified thermostat, would it not make sense to start with something simple?
Maybe it’s better to start with something we do understand, then, to make the contrast clear. Can we study the “real” agency of a thermostat, and if we can, what would that research program look like?
My sense is that you can study the real agency of a thermostat, but that it’s not helpful for understanding amoebas. That is, there isn’t much to study in “abstract” agency, independent of the substrate it’s implemented on. For the same reason I wouldn’t study amoebas to understand humans; they’re constructed too differently.
But it’s possible that I don’t understand what you’re trying to do.
Yeah, that’s the question: is agency substrate-independent or not? And if it is, does it help to pick a specific substrate, or would one make more progress by working more abstractly, or maybe both?