There seems to be a bootstrapping problem: In order to figure out what the precise statement is that human preference makes, we need to know how to combine preferences from different systems; in order to know how preferences should combine, we need to know what human preference says about this.
If we already have a given preference, it will only retell itself as an answer to the query “What preference should result [from combining A and B]?”, so that’s not how the game is played. “What’s a fair way of combining A and B?” may be more like it, but of questionable relevance. For now, I’m focusing on getting a better idea of what kind of mathematical structure preference should be, rather than on how to point to the particular object representing the given imperfect agent.
For now, I’m focusing on getting a better idea of what kind of mathematical structure preference should be
What is/are your approach(es) for attacking this problem, if you don’t mind sharing?
In my UDT1 post I suggested that the mathematical structure of preference could be an ordering on all possible (vectors of) execution histories of all possible computations. This seems general enough to represent any conceivable kind of preference (except preferences about uncomputable universes), but also appears rather useless for answering the question of how preferences should be merged.
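To make that structure concrete, here is a minimal sketch (the Python names and the use of finite history prefixes are only illustrative; in the actual formulation histories are infinite and the programs range over all possible computations):

```python
from typing import Callable, Sequence

# An execution history of a single program; a finite prefix stands in for
# what would really be an infinite sequence of states.
History = Sequence[str]

# A "world" assigns an execution history to each program P_i in some fixed
# enumeration of all possible programs, indexed by i.
World = Callable[[int], History]

# A preference is a utility function over such vectors of histories; the
# ordering on worlds is the one induced by comparing utilities.
Utility = Callable[[World], float]

def prefers(u: Utility, w1: World, w2: World) -> bool:
    """The induced ordering: w1 is preferred to w2 under u."""
    return u(w1) > u(w2)
```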
For now, I’m focusing on getting a better idea of what kind of mathematical structure preference should be
What is/are your approach(es) for attacking this problem, if you don’t mind sharing?
Since I don’t have self-contained results, I can’t describe what I’m searching for concisely, and the working hypotheses and hunches are too messy to summarize in a blog comment. I’ll give some of the motivations I found towards the end of the current blog sequence, and possibly will elaborate in the next one if the ideas sufficiently mature.
In my UDT1 post I suggested that the mathematical structure of preference could be an ordering on all possible (vectors of) execution histories of all possible computations. This seems general enough to represent any conceivable kind of preference (except preferences about uncomputable universes), but also appears rather useless for answering the question of how preferences should be merged.
Yes, this is not very helpful. Consider the question: what is the difference between (1) preference, (2) the strategy that the agent will follow, and (3) the whole of the agent’s algorithm? Histories of the universe could play a role in the semantics of (1), but they are problematic in principle, because we don’t know, nor will we ever know with certainty, the true laws of the universe. And what we really want is to get to (3), not (1), but with a good enough understanding of (1) that we know (3) to be based on our (1).
I’ll give some of the motivations I found towards the end of the current blog sequence, and possibly will elaborate in the next one if the ideas sufficiently mature.
Thanks. I look forward to that.
Histories of the universe could play a role in the semantics of (1), but they are problematic in principle, because we don’t know, nor will we ever know with certainty, the true laws of the universe.
I don’t understand what you mean here, and I think maybe you misunderstood something I said earlier. Here’s what I wrote in the UDT1 post:
More generally, we can always represent your preferences as a utility function on vectors of the form <E1, E2, E3, …>, where E1 is an execution history of P1, E2 is an execution history of P2, and so on.
(Note that of course this utility function has to be represented in a compressed/connotational form, otherwise it would be infinite in size.) If we consider the multiverse to be the execution of all possible programs, there is no uncertainty about the laws of the multiverse. There is uncertainty about “which universes, i.e., programs, we’re in”, but that’s a problem we already have a handle on, I think.
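To illustrate the parenthetical point, here is a toy example of a utility function given in “compressed” form, i.e., as a short program that computes a value from any history it is handed, rather than as an infinite table over all vectors of histories (the scoring rule below is invented purely for illustration):

```python
from typing import Sequence

History = Sequence[str]

def compressed_utility(history: History) -> float:
    """A utility represented as a short program rather than an infinite
    table: it simply counts states containing the token 'reward'."""
    return float(sum(1 for state in history if "reward" in state))

# Two finite history prefixes compared under this utility.
h1 = ["start", "reward", "reward", "halt"]
h2 = ["start", "noop", "halt"]
assert compressed_utility(h1) > compressed_utility(h2)
```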
So, I don’t know what you’re referring to by “true laws of the universe”, and I can’t find an interpretation of it where your quoted statement makes sense to me.
If we consider the multiverse to be the execution of all possible programs, there is no uncertainty about the laws of the multiverse.
I don’t believe that directly positing this “hypothesis” is a meaningful way to go, although the computational paradigm can find its way into the description of the environment for an AI that, in its initial implementation, works from within a digital computer.
Here is a revised way of asking the question I had in mind: If our preferences determine which extraction method is the correct one (the one that results in our actual preferences), and if we cannot know or use our preferences with precision until they are extracted, then how can we find the correct extraction method?
Asking it this way, I’m no longer sure it is a real problem. I can imagine that knowing what kind of object preference is would clarify what properties a correct extraction method needs to have.
Going meta and using the (potentially) available data, such as humans in the form of uploads, is a step made in an attempt to minimize the amount of data (given explicitly by the programmers) fed to the process that reconstructs human preference. Sure, it’s a bet (there is no universal preference-extraction method that interprets every agent the way it would prefer to be interpreted, so we have to make a good enough guess), but there seems to be no other way to have a chance at preserving our current preference. Also, there may turn out to be a good means of verifying that the solution given by a particular preference-extraction procedure is the right one.