You seem engaged in pointless hair-splitting. The Roomba’s designers wanted it to clean floors. It does clean floors. That is what it is for. That is its aim, its goal.
It has sensors enough to allow it to attain that goal. It can’t tell if a whole room is clean—but I never claimed it could do that. You don’t need to have such sensors to be effective at cleaning rooms.
As for me having to exhibit a whole model of a Roomba to illustrate that such a model could be built—that is crazy talk. You might as well argue that I have to exhibit a model of a suspension bridge to illustrate that such a model could be built.
The utility maximiser framework can model the actions of any computable intelligent agent—including a Roomba. That is, so long as the utility function may be expressed in a Turing-complete language.
To me, the distinction between a purposive machine’s own purposes and the purposes of its designers and users is something it is essential to be clear about. It is very like the distinction between fitness-maximising and adaptation-executing.
As a matter of fact, you would have to do just that (or build an actual one) if suspension bridges had not already been built, with well-known principles of operation that let us stand on the shoulders of those who first worked out the design. That is, you would have to show that the scheme of suspending the deck by hangers from cables strung between towers would actually do the job. Typically, one of these is used when it comes to the point of working out an actual design and predicting how it will respond to stresses.
If you’re not actually going to build it, then a back-of-the-envelope calculation may be enough to prove the concept. But there must be a technical explanation, or it’s just armchair verbalising.
If this is a summary of something well-known, please point me to a web link. I am familiar with stuff like this and see no basis there for this sweeping claim. The word “intelligent” in the above also needs clarifying.
What is a Roomba’s utility function? Or if a Roomba is too complicated, what is a room thermostat’s utility function? Or is that an unintelligent agent and therefore outside the scope of your claim?
By all means distinguish between a machine’s purpose and that which its makers intended for it.
Those ideas are linked, though. Designers want to give the intended purpose of intelligent machines to the machines themselves—so that they do what they were intended to.
As I put it on http://timtyler.org/expected_utility_maximisers/:
“If the utility function is expressed in a Turing-complete language, the framework represents a remarkably general model of intelligent agents—one which is capable of representing any pattern of behavioural responses that can itself be represented computationally.”
If expectations are not enforced, this can be seen by considering the I/O streams of an agent, and treating the utility function as something that determines the agent’s motor outputs, given its state and sensory inputs: the possible motor outputs are ranked by assigning them utilities, and then the action with the highest value is taken.
That handles any computable relationship between inputs and outputs—and it’s what I mean when I say that you can model a Roomba as a utility maximiser.
The framework handles thermostats too. The utility function determines the thermostat’s motor outputs in response to its sensory inputs. With, say, a bimetallic strip, the function is fairly simple, since the output (deflection) is proportional to the input (temperature).
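As an editor’s illustration of the construction just described (a minimal sketch, not code from the discussion; the policy, setpoint and action names are assumptions), here is how an arbitrary computable control policy, a crude on/off thermostat in this case, can be wrapped as an argmax over utilities assigned to candidate motor outputs:

```python
# Minimal sketch: wrapping an arbitrary computable control policy as a
# utility maximiser. The policy, setpoint and action names are illustrative
# assumptions, not taken from the discussion above.

def thermostat_policy(state, sensed_temperature, setpoint=20.0):
    """The ordinary description of the device: heat when below the setpoint.
    (The 'state' argument is unused by this toy policy.)"""
    return "heat_on" if sensed_temperature < setpoint else "heat_off"

def make_utility_function(policy):
    """Build a utility function whose argmax reproduces the policy.

    U(state, inputs, action) = 1 if the policy would emit that action, else 0.
    Any computable input/output relationship can be represented this way.
    """
    def utility(state, inputs, action):
        return 1.0 if action == policy(state, inputs) else 0.0
    return utility

def act_as_maximiser(utility, state, inputs, possible_actions):
    """Rank the candidate motor outputs by utility and take the best one."""
    return max(possible_actions, key=lambda a: utility(state, inputs, a))

if __name__ == "__main__":
    U = make_utility_function(thermostat_policy)
    for temp in (15.0, 25.0):
        action = act_as_maximiser(U, state=None, inputs=temp,
                                  possible_actions=["heat_on", "heat_off"])
        print(temp, "->", action)   # 15.0 -> heat_on, 25.0 -> heat_off
```

The wrapper reproduces the policy exactly, which is the sense in which any computable input/output relationship fits the framework.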
I really don’t see how, for either Roombas or thermostats, so let’s take the thermostat, as it’s simpler.
What, precisely, is that utility function?
You can tautologically describe any actor as maximising utility, just by defining the utility of whatever action it takes as 1 and the utility of everything else as zero. I don’t see any less trivial ascription of a utility function to a thermostat. The thermostat simply turns the heating on and off (or up and down continuously) according to the temperature it senses. How do you read the computation of a utility function, and a decision between alternatives of differing utility, into that apparatus?
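In symbols (editor’s notation, added only for illustration), the trivial ascription being objected to is, for each time step t at which the device actually emits action a_t:

\[
U_t(a) =
\begin{cases}
1, & a = a_t,\\
0, & \text{otherwise,}
\end{cases}
\]

and every behaviour whatsoever maximises a function of this form, which is why, on its own, it predicts nothing.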
The Pythagorean theorem is “tautological” too—but that doesn’t mean it is not useful.
Decomposing an agent into its utility function and its beliefs tells you which part of the agent is fixed, and which part is subject to environmental influences. It lets you know which region the agent wants to steer the future towards.
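A minimal sketch of the decomposition being claimed (editor’s illustration; the setpoint-distance utility, the toy world model and the two-action repertoire are all assumptions): the utility function is the fixed part, the beliefs are the part the environment rewrites, and behaviour is the argmax of utility over predicted outcomes, which also exposes the region the agent is steering towards (temperatures near 20 °C here):

```python
# Sketch of the utility-function / beliefs decomposition described above.
# The utility, world model and action set are illustrative assumptions.

class Agent:
    def __init__(self):
        # Fixed part: the utility function never changes after construction.
        self.utility = lambda temperature: -abs(temperature - 20.0)
        # Mutable part: beliefs are rewritten by environmental input.
        self.believed_temperature = 15.0

    def observe(self, sensed_temperature):
        # Environmental influence acts only on the beliefs.
        self.believed_temperature = sensed_temperature

    def predicted_temperature(self, action):
        # A toy world model: heating raises the temperature a little.
        return self.believed_temperature + (2.0 if action == "heat_on" else 0.0)

    def act(self, possible_actions=("heat_on", "heat_off")):
        # Behaviour = argmax of utility over predicted outcomes.
        return max(possible_actions,
                   key=lambda a: self.utility(self.predicted_temperature(a)))

agent = Agent()
agent.observe(16.0)
print(agent.act())   # heat_on: heating moves the world closer to 20 degrees
agent.observe(23.0)
print(agent.act())   # heat_off
```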
There’s a good reason why humans are interested in people’s motivations—they are genuinely useful for understanding another system’s behaviour. The same idea illustrates why knowing a system’s utility function is interesting.
That doesn’t follow. The reason why we find it useful to know people’s motivations is because they are capable of a very wide range of behavior. With such a wide range of behavior, we need a way to quickly narrow down the set of things we will expect them to do. Knowing that they’re motivated to achieve result R, we can then look at just the set of actions or events that are capable of bringing about R.
Given the huge set of things humans can do, this is a huge reduction in the search space.
OTOH, if I want to predict the behavior of a thermostat, it does not help to know the utility function you have imputed to it, because this would not significantly reduce the search space compared to knowing its few pre-programmed actions. It can only do a few things in the first place, so I don’t need to think in terms of “what are all the ways it can achieve R?”—the thermostat’s form already tells me that.
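A sketch of the contrast being drawn (editor’s illustration; the repertoires and the brings_about predicate are invented for the example): knowing that a wide-repertoire agent is motivated to achieve R prunes the prediction problem drastically, while for a two-action thermostat the same move buys almost nothing:

```python
# Sketch of the search-space argument above. The action sets and the
# brings_about predicate are illustrative assumptions.

def predict_actions(repertoire, brings_about_R):
    """Keep only the actions capable of bringing about the motivating result R."""
    return [a for a in repertoire if brings_about_R(a)]

# A human-like agent: a huge repertoire, of which few actions achieve R
# (say, R = "the room gets warmer").
human_repertoire = [f"action_{i}" for i in range(10_000)] + ["light_fire", "close_window"]
warming_actions = {"light_fire", "close_window"}
print(len(predict_actions(human_repertoire, lambda a: a in warming_actions)))  # 2 of 10002

# The thermostat: two actions in total, so knowing R barely narrows anything.
thermostat_repertoire = ["heat_on", "heat_off"]
print(len(predict_actions(thermostat_repertoire, lambda a: a == "heat_on")))   # 1 of 2
```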
Nevertheless, despite my criticism of this parallel, I think you have shed some light on when it is useful to describe a system in terms of a utility function, at least for me.
See also
What’s that, weak Bayesian evidence that tautological, epiphenomenal utility functions are useful?
Supposing for the sake of argument that there even is any such thing as a utility function, both it and beliefs are subject to environmental influences. No part of any biological agent is fixed. As for man-made ones, they are constituted however they were designed, which may or may not include utility functions and beliefs. Show me this decomposition for a thermostat, which you keep on claiming has a utility function, but which you have still not exhibited.
What you do changes who you are. Is your utility function the same as it was ten years ago? Twenty? Thirty? Yesterday? Before you were born?
Thanks for your questions. However, this discussion seems to have grown too tedious and boring to continue—bye.
Well, quite. Starting from here the conversation went:
“They exist.”
“Show me.”
“They exist.”
“Show me.”
“They exist.”
“Show me.”
“Kthxbye.”
It would have been more interesting if you had shown the utility functions that you claim these simple systems embody. At the moment they look like invisible dragons.