Ah, ok. So you’re saying, “Let’s do FAI by first assuming we have an incomplete infinity of processing power to apply—thus assuming the Most Powerful Possible Agent as our ‘adversary’ to be ‘tamed’.” Hence the continual use of AIXI?
Yes, something like that, although I don’t usually think of it as an adversary. Mainly it’s so I can ask questions like “how could a FAI model its operator so that it can infer the operator’s values from their behavior?” without getting hung up on the exact representation of the model or how the model is found. We don’t have any solution to this problem even given access to a probabilistic program induction black box, so it would be silly to impose the additional restriction that we can’t give the black box any induction problems that are too hard.
That said, bounded algorithms can be useful as inspiration, even for unbounded problems. For example, I’m currently looking at ways you could use probabilistic programs with nested queries to model Vingean reflection.
For example, I’m currently looking at ways you could use probabilistic programs with nested queries to model Vingean reflection.
ZOMFG, can you link to a write-up? This links up almost perfectly with a bit of research I’ve been wanting to do.
Yes, something like that, although I don’t usually think of it as an adversary.
I more meant “adversary” in crypto terms: something that can and will throw behavior at us we don’t want unless we formally demonstrate that it can’t.
That said, bounded algorithms can be useful as inspiration, even for unbounded problems.
I have a slightly different perspective on the bounded/unbounded issue. Have you ever read Jaynes’ Probability Theory? Well, I never got up to the part where he undoes paradoxes, but the way he preached about it sank in: a paradox will often arise because you passed to the limit too early in your proof or construction. I’ve also been very impressed by the degree to which resource-rational and bounded-rational models of cognition explain facts about real minds that unbounded models either can’t explain at all or write off as “irrational”.
To quote myself (because it’s applicable here but the full text isn’t done):
The key is that AIXI evaluates K(x), the Kolmogorov complexity of each possible Turing-machine program. This function allows a Solomonoff inducer to perfectly separate the random information in its sensory data from the structural information, yielding an optimal distribution over representations that contain nothing but causal structure. But K(x) is incomputable; equivalently, it requires infinite algorithmic information, and AIXI can update optimally on sensory information only by falling back on its infinite computing power.
In my perspective, at least, AIXI is cheating by assuming unbounded computational power, with the result that even the “bounded” and “approximate” AIXI-tl runs in “optimal” time only modulo an astronomically large additive constant. So I think that a “bottom-up” theory of bounded-rational or resource-rational reasoning, one that starts with the assumption that we have strictly bounded, finite computing power in the same way probability theory assumes we have strictly bounded, finite information, will work a lot better to explain how to scale up by “passing to the limit” at the last step.
Which then goes to that research I want to do: I think we could attack logical uncertainty and probabilistic reflection by finding a theory for how to trade finite amounts of compute time for finite amounts of algorithmic information. The structure currently in my imagination is a kind of probability mixed with domain theory: the more computing power you add, the more certain you can become about the results of computations, even if you still have to place some probability mass on ⊥ (bottom). In fact, if you find over time that you place more probability mass on ⊥, then you’re acquiring a degree of belief that the computation in question won’t terminate.
I think this would then mix with probabilistic programming fairly well, and also have immediate applications to assigning rational, well-behaved degrees of belief to “weird” propositions like Gödel sentences or halting predicates.
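To make that concrete (a minimal sketch; q and F are just illustrative names): treat a computation’s outcome as either some value or ⊥, put prior mass q on ⊥, and let F(t) be the prior probability, given halting, of halting within t steps. After spending t steps of compute without seeing a result,

P(⊥ | no halt by t) = q / (q + (1 − q)(1 − F(t)))

which climbs toward 1 as t grows: each extra unit of computing power buys a little more information about the outcome.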
(BTW: here’s a writeup of one of my ideas for writing planning queries that you might be interested in)
Often we want a model where the probability of taking action a is proportional to p(a) e^E[U(x, a)], where p is the prior over actions, x consists of some latent variables, and U is the utility function. The straightforward way of doing this fails:

query {
    a ~ p()              # sample an action from the prior
    x ~ P(x)             # sample the latent variables
    factor(U(x, a))      # multiply this run's weight by e^U(x, a)
}
Note that I’m assuming factor takes a log probability as its argument. This fails due to “wishful thinking”: it samples a proportional to p(a) E[e^U(x, a) | a] rather than p(a) e^E[U(x, a) | a], and by Jensen’s inequality E[e^U] ≥ e^E[U], with the gap growing with the variance of U, so it tends to prefer riskier actions. The problem can be reduced by taking more samples:
query {
    a ~ p()
    us = []
    for i = 1 to n                 # n is a fixed parameter here
        x_i ~ P(x)
        us.append(U(x_i, a))
    factor(mean(us))               # weight by e^mean(us), an estimate of e^E[U | a]
}
This does better: because we took multiple samples, mean(us) is likely to be close to E[U(x, a) | a]. But how do we know how many samples to take? The query we actually want is the n → ∞ limit, and it cannot be expressed with any finite n.
It turns out that we just need to sample n from a Poisson distribution and apply the factor once per sample, in log space:
query {
    a ~ p()
    n ~ Poisson(1)                 # a random number of utility samples
    for i = 1 to n
        x_i ~ P(x)
        factor(log U(x_i, a))      # each sample contributes a factor of U(x_i, a)
}
Note that U must be non-negative. Why does this work? Consider:
P(a) ∝ p(a) E[e^sum(log U(x_i, a) for i in range(n))]
     = p(a) E[prod(U(x_i, a) for i in range(n))]
     = p(a) E[ E[prod(U(x_i, a) for i in range(n)) | n] ]
       [the x_i are independent given n, so the conditional expectation factors into a product]
     = p(a) E[ E[U(x, a)]^n ]
     = p(a) sum(i = 0 to infinity) e^-1 E[U(x, a)]^i / i!
       [n ~ Poisson(1), so P(n = i) = e^-1 / i!]
     ∝ p(a) e^E[U(x, a)]
       [the sum is e^-1 times the Taylor series of the exponential, and e^-1 is a constant in a]
Ideally, this technique would help to perform inference in planning models where we can’t enumerate all possible states.
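Interesting! How does that compare to the usual implementations of planning as probabilistic inference, as exemplified below? (A sketch of the standard construction in the same pseudocode; sigmoid is assumed, and any squashing of utility into [0, 1] would do:)

query {
    a ~ p()
    x ~ P(x)
    success ~ flip(sigmoid(U(x, a)))     # squash utility into a success probability
    condition(success == true)           # condition on "success"
}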
Your model selects an action proportional to p(a) E[sigmoid(U) | a], whereas mine selects an action proportional to p(a) e^E[U | a]. I think the second is better, because it actually treats actions the same if they have the same expected utility. The sigmoid version will not take very high utilities or very low utilities into account much.
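For a concrete check (numbers purely illustrative): take a safe action with U = 0.5 always and a risky action with U = 0 or 1 with equal probability. Both have E[U | a] = 0.5, so my model weights them identically; but E[sigmoid(U) | a] is sigmoid(0.5) ≈ 0.622 for the safe action versus (sigmoid(0) + sigmoid(1)) / 2 ≈ 0.616 for the risky one, so the sigmoid version separates them.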
Btw it’s also possible to select an action proportional to E[U | a]^n, by fixing n instead of sampling it (the same conditional-independence argument as above gives p(a) E[U | a]^n):
query {
    a ~ p()
    for i = 1 to n
        x_i ~ P(x)
        factor(log U(x_i, a))
}
Could you explain your syntax here? What probabilistic programming language are you using?
I think the second is better, because it actually treats actions the same if they have the same expected utility.
Well so does the sigmoided version, but you are right that the sigmoid version won’t take very high or very low utilities into account. It’s meant to shoehorn unbounded utility functions into a framework where one normally works only with random variables.
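It’s not a specific programming language, I guess it’s meant to look like Church. It could be written as something like this (a sketch only: action-prior, latent-prior, and U are stand-ins, a poisson primitive is assumed, and conditioning on flips plays the role of factor(log U), which requires U to lie in [0, 1]):

(query
 (define a (action-prior))
 (define n (poisson 1))
 (define xs (repeat n latent-prior))
 a
 (all (map (lambda (x) (flip (U x a))) xs)))

Conditioning on all n flips coming up true weights a run by prod(U(x_i, a)), which is exactly what the factor statements did.

The sigmoided version samples an action proportional to p(a) E[sigmoid(U) | a]. This can’t be written as a function of E[U | a].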
ZOMFG, can you link to a write-up? This links up almost perfectly with a bit of research I’ve been wanting to do.
Well, a write-up doesn’t exist because I haven’t actually done the math yet :)
But the idea is about algorithms for doing nested queries. There’s a planning framework where you take action a proportional to p(a) e^E[U | a]. If one of these actions is “defer to your successor”, then the computation of (U | a) is actually another query that samples a different action b proportional to p(b) e^E[U | b]. In this case you can actually just go ahead and convert the resulting nested query to a 1-level query: you can convert a “softmax of softmax” into a regular softmax, if that makes sense.
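In pseudocode, the nesting looks something like this (a sketch; factor*(E[U | ·]) is shorthand for the Poisson construction from the writeup above, so each query softmaxes on expected utility):

query {                              # outer agent: takes a w.p. ∝ p(a) e^E[U | a]
    a ~ p()                          # p() includes the special action "defer"
    if a == "defer"
        b ~ query {                  # nested query: the successor's own planning problem
            b ~ p()
            factor*(E[U | b])        # successor softmaxes on its expected utility
        }
        factor*(E[U | b])            # the deferrer gets whatever its successor achieves
    else
        factor*(E[U | a])
}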
This isn’t doing Vingean reflection, because it’s actually doing all the computational work that its successor would have to do. So I’m interested in ways to simplify computationally expensive nested queries into approximate computationally cheap single queries.
Here’s a simple example of why I think this might be possible. Suppose I flip a coin to decide whether the SAT problem I generate has a solution or not. Then I run a nested query to generate a SAT problem that either does or does not have a solution (depending on the original coin flip). Then I hand you the problem, and you have to guess whether it has a solution or not. I check your solution using a query to find the solution to the problem.
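As a sketch, the model looks something like this (sat_prior, has_solution, and your_model are stand-ins, and has_solution is the computationally expensive part):

query {
    solvable ~ flip(0.5)                   # the original coin flip
    problem ~ query {                      # nested query: generate a matching instance
        phi ~ sat_prior()
        condition(has_solution(phi) == solvable)
    }
    guess ~ your_model(problem)            # you guess from the problem alone
    condition(guess == solvable)           # check the guess against the flip
}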
If you suck at solving SAT problems, your best bet might just be to guess that there’s a 50% chance that the problem is solvable. You could get this kind of answer by refactoring the complicated doubly-nested query model into a non-nested model and then noting that the SAT problem itself gives you very little information about whether it is solvable (subject to your computational constraints).
I’m thinking of figuring out the math here better and then applying it to things like planning queries where your successor has a higher rationality parameter than you (an agent with rationality parameter α takes action a with probability proportional to p(a) e^(α * E[U | a]) ). The goal would be to formalize some agent that, for example, generally chooses to defer to a successor who has a higher rationality parameter, unless there is some cost for deferring, in which case it may defer or not depending on some approximation of value of information.
Your project about trading computing power for algorithmic information seems interesting and potentially related, and I’d be interested in seeing any results you come up with.
even if you still have to place some probability mass on ⊥ (bottom)
Is this because you assign probability mass to inconsistent theories that you don’t know are inconsistent?
When you use e-raised-to-the alpha times expectation, is that similar to the use of an exponential distribution in something like AdaBoost, to take something like odds information and form a distribution over assorted weights? I have work to do, but will be giving your little write-up here a second read-through soon.
Is this because you assign probability mass to inconsistent theories that you don’t know are inconsistent?
The idea isn’t to assign probability mass to logical theories, but to the outcomes of computations in general. This is partly because the space of computations-in-general strictly contains the encodings of all possible proof systems, but also because, if we’re building algorithms that have to confront a Turing-complete environment, the environment may sometimes contain nontrivially nonhalting computations, which can’t be logically proved not to terminate. Since any realistic agent needs to be able to handle whatever its environment throws at it, it seems to follow that a realistic agent needs some resource-rational way to handle nonprovable nonhalting.
When you use e-raised-to-the alpha times expectation, is that similar to the use of an exponential distribution in something like AdaBoost, to take something like odds information and form a distribution over assorted weights?
I’m not really that familiar with AdaBoost. The planning model is just reflecting the fact that bounded agents don’t always take the maximum expected utility action. The higher alpha is, the more bias there is towards good actions, but the more potentially expensive the computation is (e.g. if you use rejection sampling).
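To give a feel for the cost (a rough sketch): one rejection sampler for P(a) ∝ p(a) e^(α · E[U | a]) draws a ~ p and accepts with probability e^(α · (E[U | a] − U*)), where U* is the best attainable E[U | a]. As α grows, the acceptance probability of every suboptimal action falls exponentially, so the expected number of draws per accepted action blows up (and that’s before the cost of estimating E[U | a] itself).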
Since any realistic agent needs to be able to handle whatever its environment throws at it, it seems to follow that a realistic agent needs some resource-rational way to handle nonprovable nonhalting.
Ah, that makes sense! I think I see how “trading computational power for algorithmic information” makes sense in this framework.
The planning model is just reflecting the fact that bounded agents don’t always take the maximum expected utility action. The higher alpha is, the more bias there is towards good actions, but the more potentially expensive the computation is (e.g. if you use rejection sampling).
Ah, that makes sense!
Ah, that makes sense! I think I see how “trading computational power for algorithmic information” makes sense in this framework.
And before I could scribble a damned thing, Calude went and solved it six months ago. The Halting Problem, I mean.

I wonder how he feels about that, because my current feeling about this is HOLY FUCKING SHIT. By GOD, my AIT textbook cannot get here fast enough.
Cool. If I understand the result correctly, it’s that if you run a random program for some number of steps and it doesn’t halt, then (depending on the exact numbers) it will be unlikely to halt when run on a supercomputer either, because halting times have low density. So almost all programs halt quickly or run a really really long time. Is this correct? This doesn’t quite let you approximate Chaitin’s omega, but it’s interesting that you can approximate a bounded variant of Chaitin’s omega (like what percentage of Turing machines halt when run for 10^50 steps). I can see how this would let you solve the halting problem well enough when you live in a bounded universe.
Cool. If I understand the result correctly, it’s that if you run a random program for some number of steps and it doesn’t halt, then (depending on the exact numbers) it will be unlikely to halt when run on a supercomputer either, because halting times have low density. So almost all programs halt quickly or run a really really long time. Is this correct?
Yep. Or, to put it a little tiny bit more accurately, you get a halting probability for your particular Turing machine, conditioned on the number of steps for which you’ve run it.
This doesn’t quite let you approximate Chaitin’s omega
Well technically, you can approximate Chaitin’s omega from below, just as Chaitin himself describes in his book. You’ll just be able to calculate only finitely many digits.
I can see how this would let you solve the halting problem well enough when you live in a bounded universe.
Which we do ;-). You could run until you get past a desired threshold of probability (hypothesis testing), or you could use a bounded-rationality approach to vary your surety of nonhalting with your available processing power.
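As a sketch: with prior mass q on nonhalting and F(t) the prior probability (given halting) of halting within t steps, you’d run until q / (q + (1 − q)(1 − F(t))) clears your chosen threshold 1 − ε; the Calude-style density results are what let F(t) climb steeply while t is still affordable.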
But overall, it gives you a way to “reason around” the Halting Problem, which, when we apply it to the various paradoxes of self-reference… you can see where I’m going with this.

You can always acquire it nautically to cut down on shipping time :P

What one did you get? I went with Li and Vitanyi, and have enjoyed my first readthrough.

The first one I bought was Calude’s, but it demands more background in pure maths than I have, so I went for Li and Vitanyi’s.
I’m thinking of figuring out the math here better and then applying it to things like planning queries where your successor has a higher rationality parameter than you (an agent with rationality parameter α takes action a with probability proportional to p(a) e^(α * E[U | a]) ). The goal would be to formalize some agent that, for example, generally chooses to defer to a successor who has a higher rationality parameter, unless there is some cost for deferring, in which case it may defer or not depending on some approximation of value of information.
How does this deal with the Paradox of Procrastination?
Due to the planning model, the successor always has some nonzero probability of not pressing the button, so (depending on how much you value pressing it later) it’ll be worth it to press it at some point. Roughly: deferring is worth at most (1 − ε) · U(pressed) + ε · U(unpressed), which falls short of pressing now, so once the gain from waiting is smaller than that gap, the agent presses.