> This is a good framing for explaining the problem: you would not, in fact, try to build the same FAI for Clippies and humans, and then pass it humans as a parameter.
I expect you would build the same FAI for paperclipping (although we don't have any Clippies to pass it as a parameter), so I'd appreciate it if you did explain the problem, given you believe there is one, since this is a direction I'm currently working in.
Humans are stuff that FAI would optimize, just like any other feature of the world, and on the stuff-level it makes no difference that people prefer to be "free to optimize". You are "free to optimize" in a deterministic universe; it's the way this stuff is (being) arranged that makes the difference, and it's the content of human preference that says the arrangement shouldn't have certain features, such as undeserved million-dollar bags falling from the sky, where "undeserved" is itself another function of stuff. An important subtlety of preference is that it makes different features of perhaps mutually exclusive possible scenarios depend on each other. So the fact that one should care about what could be, how that relates to what could be otherwise, and even to how it gets chosen what to actually realize, is about the scope of what preference describes, not about any specific instance of preference. In a manner of speaking, it says you need an Int32 rather than a Bool to hold this variable, but that an Int32 still seems big enough.
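To put the Int32-vs-Bool analogy in slightly more concrete terms, here is a toy type sketch (Haskell, every name hypothetical, nothing meant as an actual design): the point is only that preference needs a representation rich enough to relate whole scenarios and the choices that lead to them, rather than a per-outcome yes/no, while still being a bounded, explicitly representable thing.

```haskell
import Data.Int (Int32)

type Choice  = String
type Outcome = String

-- A scenario records which outcome obtains under which choice, so
-- dependencies between mutually exclusive possibilities are visible.
type Scenario = [(Choice, Outcome)]

-- Too small a type: a yes/no verdict on each outcome in isolation.
type NaiveGood = Outcome -> Bool

-- The "Int32" of the analogy: a bounded score over whole scenarios,
-- rich enough to trade their features off against each other.
type ScenarioValue = Scenario -> Int32
```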
Furthermore, the kind of dependence you described in the post you linked seems fundamental from a certain logical standpoint, for any system (not even specifically an "AI"). If you build the ontology of FAI on its epistemology, that is, if you don't consider it as already knowing anything, but only as having its program, which could interact with anything, then the possible futures and its own decision-making are already there (and that's all there is, from its point of view). All it can do, on this conceptual level, is craft proofs (plans, designs of actions) that have certain internal dependencies in them, with the AI itself being the "current snapshot" of what it's planning. That's enough to handle the "free to optimize" requirement, given the right program.
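A very loose sketch of that picture, again with hypothetical names only: the agent is nothing but its program for producing plans, and a plan carries its internal dependencies, including what the agent itself becomes as the interaction continues.

```haskell
newtype Observation = Observation String
newtype Action      = Action String

-- A plan (a "proof", a design of action) with its internal dependencies:
-- which action to commit to now, and what the planner turns into for
-- each way the interaction could continue.
data Plan = Plan
  { nextAction :: Action
  , continue   :: Observation -> Agent
  }

-- The agent is just the "current snapshot" of its own planning.
newtype Agent = Agent { currentPlan :: Plan }
```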
Hmm, I'm essentially arguing that a universal-enough FAI is "computable": that there is a program which computes a FAI for any given "creature", within a certain class of "creatures". Stated as a yes/no question that's void, since it only becomes hard on the too-big-class side: for a small enough class the problem is in principle solvable, and for a big enough class it will hit problems, if not conceptual then practical.
So the real question is about the characteristics of the class of systems for which it's easier to build an abstract FAI, that is, a tool that takes a specimen of this class as a parameter and becomes a custom-made FAI for that specimen. This class needs to at least include humanity, and given the size of humanity's values, it needs to include a lot of other stuff as well, for the class itself to be small enough to program explicitly. I currently expect the class of parameters of a manageable abstract FAI implementation to include even rocks and trees, since I don't see how to rigorously define, and use in FAI theory, the difference between such systems and us.
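As a sketch of the shape of such a tool, assuming (and this is the entire difficulty, waved away here as a parameter) that a system's preference can somehow be extracted from its specification, the abstract FAI is just one fixed higher-order program:

```haskell
-- `System` stands in for whatever formal specification of a human,
-- a group, humanity, or a rock gets passed in as the parameter.
newtype System = System String
newtype World  = World String

-- A preference extracted from a system: an ordering over worlds.
type Preference = World -> World -> Ordering

-- The custom-made FAI produced for one particular specimen.
newtype FAI = FAI { choose :: World -> World -> World }

-- The abstract FAI: one fixed program, parameterized by the specimen
-- (and by the hypothetical preference-extraction step).
abstractFAI :: (System -> Preference) -> System -> FAI
abstractFAI extract sys = FAI pick
  where
    pref = extract sys
    pick w1 w2 = if pref w1 w2 == LT then w2 else w1
```

In this sketch all of the real difficulty hides in the extraction parameter; the wrapper itself stays the same for every specimen.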
This also takes care of the human values / humanity's values divide: these are just different systems to parameterize the FAI with, so there is no need for a theory of "value overlaps" distinct from a theory of a system's values. A separate question is that "humanity" will probably be a bit harder to specify as a parameter than some specific human or group of people.