utilities are just measures of satisfaction. They can be associated with anything.
True. But in most economic analysis, terminal utilities are associated with outcomes; the expected utilities that become associated with actions are usually instrumental utilities.
Nevertheless, I continue to agree with you that in some circumstances it makes sense to attach terminal utilities to actions. This shows up, for example, in discussions of morality from a deontological viewpoint. Suppose you have a choice of lying or telling the truth. You assess the consequences of your actions, and are amused to discover that there is no difference in the consequences: you will not be believed in either case. A utilitarian would say that there is no moral difference in this case between lying and telling the truth. A disciple of Kant would disagree, and the way he would explain this disagreement to the utilitarian would be to attach a negative moral utility to the action of speaking untruthfully.
Utilities are often associated with states of the world, yes. However, here you seemed to balk at utilities that were not so associated. I think such values can still be called “utilities”—and “utility functions” can be used to describe how they are generated—and the standard economic framework accommodates this just fine.
What this idea doesn’t fit into is the von Neumann–Morgenstern system—since it typically violates the independence axiom. However, that is not the end of the world. That axiom can simply be binned—and fairly often it is.
Unless you supply some restrictions, it is considerably more destructive than that. All axioms based on consequentialism are blown away. You said yourself that we can assign utilities so as to rationalize any set of actions that an agent might choose. I.e. there are no irrational actions. I.e. decision theory and utility theory are roughly as useful as theology.
No, no! That is like saying that a universal computer is useless to scientists—because it can be made to predict anything!
Universal action is a useful and interesting concept partly because it allows a compact, utility-based description of arbitrary computable agents. Once you have a utility function for an agent, you can then combine and compare its utility function with that of other agents, and generally use the existing toolbox of economics to help model and analyse the agent’s behaviour. This is all surely a Good Thing.
I’ve never seen the phrase “universal action” before. Googling didn’t help me. It certainly sounds like it might be an interesting concept. Can you provide a link to an explanation more coherent than the one you have attempted to give here?
As to whether a “utility-based” description of an agent that does not adhere to the standard axioms of utility is a “good thing”—well I am doubtful. Surely it does not enable use of the standard toolbox of economics, because that toolbox takes for granted that the participants in the economy are (approximately) rational agents.
You have an alternative model of arbitrary computable agents to propose?
You don’t think the ability to model an arbitrary computable agent is useful?
What is the problem here? Surely a simple utility-based framework for modelling the computable agent of your choice is an obvious Good Thing.
I see no problem modeling computable agents without even mentioning “utility”.
I don’t yet see how modeling them as irrational utility maximizers is useful, since a non-utility-based approach will probably be simpler.
Part of the case for using a utility maximization framework is that we can see that many agents naturally use an internal representation of utility. This is true for companies, and other “economic” actors. It is true to some extent for animal brains—and it is true for many of the synthetic artificial agents that have been constructed. Since so many agents are naturally utility-based, that makes the framework an obvious modelling medium for intelligent agents.
Similarly, you can model serial computers without mentioning Turing machines and parallel computers without mentioning cellular automata. Yet in those cases, the general abstraction turns out to be a useful and important concept. I think this is just the same.
Universal action is named after universal computation and universal construction.
Universal construction and universal action have some caveats about being compatible with constraints imposed by things like physical law. “Doing anything” means something like: being able to feed arbitrary computable sequences in parallel to your motor outputs. Sequences that fail due to severing your own head don’t violate the spirit of the idea, though. As with universal computation, universal action is subject to resource limitations in practice. My coinage—AFAIK. Attribution: unpublished manuscript ;-)
Well, I’ll just ignore the fact that universal construction means to me something very different than it apparently means to you. Your claim seems to be that we can ‘program’ a machine (which is already known to maximize utility) so as to output any sequence of symbols we wish it to output; program it by the clever technique of assigning a numeric utility to each possible infinite output string, in such a way that we attach the largest numeric utility to the specific string that we want.
And you are claiming this in the same thread in which you disparage all forms of discounting the future.
What am I missing here?
For my usage, see:
http://carg2.epfl.ch/Publications/2004/PhysicaD04-Mange.pdf
According to von Neumann [18], a constructor is endowed with universal construction if it is able to construct every other automaton, i.e. an automaton of any dimensions.
The term has subsequently become overloaded, it is true.
If I understand it correctly, the rest of your comment is a quibble about infinity. I don’t “get” that. Why not just take things one output symbol at a time?
Wow. I didn’t see that one coming. Self-reproducing cellular automata. Brings back memories.
Well, it wasn’t just a quibble about infinity. There was also the dig about discount rates. ;)
But I really am mystified. Is a ‘step’ in this kind of computation to output a symbol and switch to a different state? Are there formulas for calculating utilities? What data go into the calculation?
Exactly how does computation work here? Perhaps I need an example. How would you use this ‘utility maximization as a programming language’ scheme to program the machine to compute the square root of 2? I really don’t understand how this is related to either lambda calculus or Turing machines. Why don’t you take some time, work out the details, and then produce one of your essays?
I didn’t (and still don’t) understand how discount rates were relevant—if not via considering the comment about infinite output strings.
What data go into the calculation of utilities? The available history of sense data, memories, and any current inputs. The agent’s internal state, IOW.
Exactly how does computation work here?

Just like it normally does? You just write the utility function in a Turing-complete language—which you have to do anyway if you want any generality. The only minor complication is how to get a (single-valued) “function” to output a collection of motor outputs in parallel—but serialisation provides a standard solution to this “problem”.
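As a concrete toy of the scheme being described, here is a sketch (my construction; the prefix-matching utility function and the greedy one-symbol-at-a-time loop are assumptions, not anything established in this thread) of "programming" a utility maximiser to emit the digits of the square root of 2, one output symbol at a time:

```python
from decimal import Decimal, getcontext

# Hypothetical sketch: the "program" lives entirely in the utility function.
# Utility scores an output string by how long a prefix of the target it
# matches; the agent greedily emits whichever next symbol maximises utility.

getcontext().prec = 12
TARGET = str(Decimal(2).sqrt())  # digits of sqrt(2)

def utility(output: str) -> int:
    # Utility = length of output, if it is a prefix of TARGET; else 0.
    return len(output) if TARGET.startswith(output) else 0

ALPHABET = "0123456789."

def next_symbol(history: str) -> str:
    # Choose the action (symbol) whose resulting history has highest utility.
    return max(ALPHABET, key=lambda s: utility(history + s))

out = ""
for _ in range(10):
    out += next_symbol(out)
print(out)
```

Changing TARGET reprograms the agent without touching the choice rule, which is the sense in which the utility function plays the role of the program.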
Universal action might get an essay one day.
...and yes, if I hear too many more times that humans don’t have utility functions (we are better than that!) - or that utility maximisation is a bad implementation plan - I might polish up a page that debunks those—ISTM—terribly-flawed concepts—so I can just refer people to that.
What is it that the agent acts so as to maximize?

1. The utility of the next action (ignoring the utility of expected future actions).
2. The utility of the next action plus a discounted expectation of future utilities.
3. The simple sum of all future expected utilities.
To me, only the first two options make mathematical sense, but the first doesn’t really make sense as a model of human motivation.
I would usually answer this with a measure of inclusive fitness. However, it appears here that we are just talking about the agent’s brain—so in this context what that maximises is just utility—since that is the conventional term for such a maximand.
Your options seem to be exploring how agents calculate utilities. Are those all the options? An agent usually calculates utilities associated with its possible actions—and then chooses the action associated with the highest utility. That option doesn’t seem to be on the list. It looks a bit like 1 - but that seems to specify no lookahead—or no lookahead of a particular kind. Future actions are usually very important influences when choosing the current action. Their utilities are usually pretty important too.
If you are trying to make sense of my views in this area, perhaps see the bits about pragmatic and ideal utility functions—here:
http://timtyler.org/expected_utility_maximisers/
Yes. In fact, 2 strictly contains both 1 and 3: setting the discount factor to 0 recovers 1, and setting it to 1 recovers 3.
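The containment claim can be checked with a toy calculation (my formulation; the stream of expected utilities is made up for illustration):

```python
# Option 2 as a discounted sum over a stream of expected utilities,
# where gamma is the discount factor.

def discounted_value(utilities, gamma):
    # u_0 + gamma*u_1 + gamma^2*u_2 + ...
    return sum(u * gamma ** t for t, u in enumerate(utilities))

us = [5.0, 3.0, 2.0, 1.0]  # made-up expected utilities for successive steps

option1 = discounted_value(us, 0.0)  # gamma = 0: only the next action counts
option3 = discounted_value(us, 1.0)  # gamma = 1: the simple undiscounted sum
option2 = discounted_value(us, 0.9)  # anything in between interpolates
print(option1, option2, option3)
```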
Future actions are usually very important influences when choosing the current action.

But not strictly as important as the utility of the outcome of the current action. The amount by which future actions are less important than the outcome of the current action, and the methods by which we determine that, are what we mean by discount rates.
That helps me understand the options. I am not sure I had enough info to figure out what you meant before.
1 corresponds to eating chocolate gateau all day and not brushing your teeth—not very realistic, as you say. 3 looks like an option that involves infinite sums—and 2 is what all practical agents actually do.
However, I don’t think this captures what we were talking about. Pragmatic utility functions are necessarily temporally discounted—due to resource limitations and other effects. The issue is more whether ideal utility functions can be expected to be so discounted. I can’t think why they should be—and can think of several reasons why they shouldn’t be—which we have already covered.
Infinity is surely not a problem—you can just maximise utility over T years and let T increase in an unbounded fashion. The uncertainty principle limits the predictions of embedded agents in practice—so T won’t ever become too large to deal with.
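That suggestion can be illustrated with a toy pair of rival action streams (my construction): compare total utility over a horizon of T steps and watch the preference stabilise as T grows, so no actually-infinite sums ever need to be compared:

```python
# Two made-up streams of per-step utility for rival actions.
def stream_a(t):
    return 0 if t == 0 else 1   # nothing now, steady payoff later

def stream_b(t):
    return 2 if t == 0 else 0   # immediate payoff only

def horizon_value(stream, T):
    # Total (undiscounted) utility over the first T steps.
    return sum(stream(t) for t in range(T))

for T in (1, 2, 3, 4, 10, 100):
    prefer = "A" if horizon_value(stream_a, T) > horizon_value(stream_b, T) else "B"
    print(T, prefer)
# The preference settles on A for every T >= 4 and never flips back.
```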
My understanding is that “pragmatic utility functions” are supposed to be approximations to “ideal utility functions”—preferable only because the “pragmatic” are effectively computable whereas the ideal are not.
Our argument is that we see nothing constraining ideal utility functions to be finite unless you allow discounting at the ideal level. And if ideal utilities are infinite, then pragmatic utilities that approximate them must be infinite too. And comparison of infinite utilities in the hope of detecting finite differences cannot usefully guide choice. Hence, we believe that discounting at the ideal level is inevitable. Particularly if we are talking about potentially immortal agents (or mortal agents who care about an infinite future).
Your last paragraph made no sense. Are you claiming that the consequences of actions taken today must inevitably have negligible effects upon the distant future? A rather fatalistic stance to find in a forum dealing with existential risk. And not particularly realistic, either.
You seem obsessed with infinity :-( What about the universal heat death? Forget about infinity—just consider whether we want to discount on a scale of 1 year, 10 years, 100 years, 1,000 years, 10,000 years—or whatever.
I think “ideal” short-term discounting is potentially problematical. Once we are out to discounting on a billion year timescale, that is well into the “how many angels dance on the head of a pin” territory—from my perspective.
Some of the causes of instrumental discounting look very difficult to overcome—even for a superintelligence. The future naturally gets discounted to the extent that you can’t predict and control it—and many phenomena (e.g. the weather) are very challenging to predict very far into the future—unless you can bring them actively under your control.
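A minimal sketch of that point (my toy model, not anything established): if each one-step forecast independently survives with probability p, then a payoff predicted t steps ahead should carry weight p to the power t, so compounding uncertainty behaves like a discount factor even when none is imposed by fiat:

```python
def effective_weight(p, t):
    # Weight on a payoff t steps ahead, if each one-step prediction
    # independently holds with probability p (assumed toy model).
    return p ** t

p = 0.95  # assumed per-step prediction reliability
for t in (1, 10, 50):
    print(t, effective_weight(p, t))
# The weights fall off geometrically with the forecast horizon.
```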
Are you claiming that the consequences of actions taken today must inevitably have negligible effects upon the distant future?

No. The idea was that predicting those consequences is often hard—and it gets harder the further out you go. Long-term predictions thus often don’t add much to what short-term ones give you.
Flippantly: we’re going to have billions of years to find a solution to that problem.