in this post, i put forth some of my current thoughts about the shape of a formal aligned AI using QACI for its decision — “decision” in the singular here, as this is sufficient when the AI’s decision can be “run me again but with these different inputs”. as it turns out, this doesn’t require solving as many things as i’d thought — it seems like QACI might be general enough to delegate picking a decision theory and solving embedded agency to the counterfactual consideration of the past user.
we’ll posit:
as a convention, we’ll use a prime (as in $q'$) to denote counterfactual values, and we’ll be denoting questions $q$ and answers $r$ (for “response”), to avoid confusion with $a$ for actions.
the AI is denoted $\mathrm{AI}$, taking as input an observation $o$ as well as the user’s original question and answer, denoted $q$ and $r$. it returns an action from the set of all possible actions, $A$.
all sets of functions will be countable sets of computable functions. in particular, $\Omega$ will be the set of computable hypotheses for worlds, represented as non-halting programs taking no input, of type $1 \to \bot$, with $1$ and $\bot$ being respectively the unit and bottom types.
finally, we’ll implicitly “cast” mathematical objects as natural numbers wherever appropriate, given that they’re all sampled from countable sets anyways. when they’re cast to or from natural numbers, assume a reasonable bijection between their type and $\mathbb{N}$.
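as a minimal sketch of the types involved (writing $Q$, $R$, and $O$ for the sets of questions, answers, and observations; these names are my own shorthand rather than fixed notation):

$$\mathrm{AI} : O \times Q \times R \to A \qquad\qquad \Omega \subseteq (1 \to \bot)$$

where $1 \to \bot$ is read as the type of non-halting programs taking no input, and the order of $\mathrm{AI}$’s arguments is arbitrary here.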
we’ll define the following:
a “carver” function which returns a set of tuples of:
a function for extracting a piece of data “in the same way” as the original piece of data is in $\omega$, but from any other world
a piece of data that represents “everything else” than the extracted data in the world $\omega$
a function for counterfactually putting another piece of data back into the world, alongside that “everything else”
this is done by splitting a world $\omega$ into the piece of data in question, and “everything else”. in practice, with arbitrary other worlds, the extraction function would return “garbage” the vast majority of the time, but the hope is that given a same carving for the question $q$, a same carving for the answer $r$ would work often enough to give a signal that tends to beat the overall noise of the failing cases.
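to pin the shape down, here is a sketch in assumed notation (the name $\mathrm{Carve}$ and the exact typing are assumptions of mine): a carving of a piece of data $x$ out of a world $\omega$ could be a tuple $(f, e, g)$ with

$$\mathrm{Carve}(\omega, x) \subseteq \big\{\, (f, e, g) \;\big|\; f(\omega) = x \ \text{ and } \ g(x, e) = \omega \,\big\}$$

where $f$ extracts “the same kind of data” from any world, $e$ stands for “everything else” in $\omega$, and $g(x', e)$ builds the counterfactual world in which $x$ has been replaced by $x'$.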
we’ll define $\mathrm{QACI}$, the question-answer counterfactual interval device used to consider answers to counterfactual questions $q'$, given a world hypothesis $\omega$ and a known question and answer $q$ and $r$.
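one concrete shape this could take, as a sketch (the formula below is an assumption of mine, reusing the $\mathrm{Carve}$ notation from the sketch above and writing $\ell(\cdot)$ for the combined bit length of the listed programs):

$$\mathrm{QACI}(\omega, q, r, q') := \Big\{\, \big(\, f_r(g_q(q', e_q)),\ 2^{-\ell(f_q,\, g_q,\, f_r,\, g_r)} \,\big) \;\Big|\; (f_q, e_q, g_q) \in \mathrm{Carve}(\omega, q),\ (f_r, e_r, g_r) \in \mathrm{Carve}(\omega, r) \,\Big\}$$

that is, a collection of counterfactual answers, each obtained by splicing $q'$ into $\omega$ in place of $q$ and then reading off the answer in the resulting counterfactual world, weighed by the joint simplicity of the two carvings.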
note how the simplicity weight measures the simplicity of all four functions together, so as to favor them being simple but also similar, while ignoring the simplicity of the “rest of the world” values.
finally, we can define our AI as a function of $q$ and $r$ but also of an observation $o$, which could, in practice, be anything that lets the AI as well as the user better locate themselves in the set of possible worlds.
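here too, only as a sketch (the clamping interval, the arguments passed to the returned answer-function, and the exact weighing below are all assumptions of mine):

$$\mathrm{AI}(o, q, r) := \arg\max_{a \in A} \ \sum_{\omega \in \Omega} 2^{-\ell(\omega)} \sum_{(r',\, w) \,\in\, \mathrm{QACI}(\omega,\, q,\, r,\, \langle \omega,\, a \rangle)} w \cdot \mathrm{clamp}_{[0,1]}\big(r'(q, r, o)\big)$$

each counterfactual answer $r'$ is read as a program scoring the action $a$, its output is clamped into a bounded interval, and the scores are combined across carvings and world hypotheses according to their simplicity weights.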
what’s going on here is that, for each world hypothesis $\omega$ weighed by its simplicity, we’re finding carvings for $q$ and $r$ that let us inject a counterfactual question and get a counterfactual answer, favoring simpler carvings using the simplicity weighing in $\mathrm{QACI}$. the counterfactual rerun of the past user is given the world hypothesis being considered, as well as the action $a$ — we’re not giving the user $q$ because that would make the counterfactual question guaranteed to be strictly larger than the original question, which seems like it might break things, and is not absolutely necessary. the user gets to return an arbitrary function to be evaluated, to which the original values are actually passed. we are, however, clamping the result of that function inside a bounded interval and then applying the simplicity weighing ourselves, so as to avoid erroneous carvings getting to influence the decision process too much by returning arbitrarily large values. the hope here is merely that desired carvings generally outweigh undesired carvings, and especially adversarial carvings.
ways this could go wrong include:
the carvings could fail to sufficiently select the actual instances of the past user in the world; in particular, the answer-extraction functions could be returning so much “garbage” when given counterfactual worlds different from $\omega$ that the signal from the carvings that do work ends up completely drowned out.
the carvings locate question-answering users anywhere in the world, including in the future. this allows adversarial intelligences to occupy most of the set of simple and coherently-answering carvings, thus steering our AI away from following the actual user’s decisions. a solution to this would be to sufficiently strongly favor carvings that select question-answering processes that are causally upstream of the AI itself; that is to say, causally upstream of the evaluation of $\mathrm{AI}(o, q, r)$.
weird implementation details of how turing machines are encoded could dominate most of the signal in the values returned by the counterfactual user’s function. in addition, the user could be unsure as to how to return a meaningful scalar given a particular potential action. these points could both be partially addressed by passing to the carvings pairs of actions $(a_1, a_2)$ for all possible pairs of different actions from $A$, and having the counterfactual user select a preferred action, rather than relying on a scalar returned about a single action. if the evaluations for $(a_1, a_2)$ and for $(a_2, a_1)$ agree as to which of $a_1$ and $a_2$ is the preferable action, then that’s an actual signal that it is preferable. adding more “symmetries” might make noise cancel itself out (see the sketch just after this list).
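in the same assumed notation as above, the pairwise variant could look something like this sketch, where the counterfactual question encodes an ordered pair of actions and the returned answer-function is read as picking one of them:

$$\mathrm{votes}(a \mid a_1, a_2) := \sum_{\omega \in \Omega} 2^{-\ell(\omega)} \sum_{(r',\, w) \,\in\, \mathrm{QACI}(\omega,\, q,\, r,\, \langle \omega,\, a_1,\, a_2 \rangle)} w \cdot [\, r'(q, r, o) = a \,]$$

the AI would then pick the action $a$ maximizing something like $\sum_{b \neq a} \big( \mathrm{votes}(a \mid a, b) + \mathrm{votes}(a \mid b, a) \big)$, so that each unordered pair of actions is evaluated in both orders and order-dependent encoding noise tends to cancel out.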
i don’t think we get to require the carver to return carvings that work for arbitrarily large counterfactual payloads, nor do we need to. in practice, i expect a constant finite (though large) bit length to be used for counterfactual questions $q'$.
there’s a lot of sets-of-all-programs being sampled from here, leaving a lot of room for demons in the solomonoff prior, if those are a concern. typically i tend to favor decision-theoretic solutions to those, and maybe a correct answer implementation would return action-functions which would depend on whatever decision theory is correct, such that this can be delegated? but it feels like this system has ways to go wrong before that, in terms of what programs get to control most of the “mass” returned by the carver to begin with.
this is of course highly uncomputable. the intent here is to use something like logical induction to approximate good results for this function. what makes me hopeful that a powerful AI can make helpful guesses as to what actions this process would find, if it is indeed aligned, is that even i, a mere human mind, feel like i can make some helpful guesses as to what actions this process would find.