And, imagine what an agent could do without the limits of human hardware or software. Now that would really be something.
Such an agent may not have the limits of human hardware or software, but such an agent does require a similar set of restrictions and (from the agent’s point of view) irrational assumptions and desires, or, in my opinion, the agent will not do anything.
It could learn and practice body language, fashion, salesmanship, seduction, the laws of money, and domain-specific skills and win in every sphere of life without constant defeat by human hangups.
The human hangups are what allow us to practice body language, fashion, etc., and what give us the desire to do so. If we didn’t have such hangups then, from experience, understanding such things would be much harder, practicing them would be harder, and desiring them would require convincing. It is easier to win if one is endowed with the intuitions and heuristics that make practicing such things both desirable and natural.
a real agent with the power to reliably do things it believed would fulfill its desires
There is no accounting for preferences (or desires), meaning such things are not usually rationally chosen, and when they are, there is still a base of non-rational assumptions. Homo economicus is just as dependent on intuition and heuristics as anyone else. The only place it differs, at least as classically understood, is in its ability to access near-perfect information and to calculate its preferences and probabilities exactly.
edit: Also
You do not have much cognitive access to your motivations.
This is said as a bad thing when it is a necessary thing.
Such an agent may not have the limits of human hardware or software, but such an agent does require a similar set of restrictions and (from the agent’s point of view) irrational assumptions and desires, or, in my opinion, the agent will not do anything.
Desires/goals/utility functions are non-rational, but I don’t know what you mean by saying that an artificial agent needs restrictions and assumptions in order to do something. Are you just saying that it will need heuristics rather than (say) AIXI in order to be computationally tractable? If so, I agree. But that doesn’t mean it needs to operate under anything like the limits of human hardware and software, which is all I claimed.
The human hangups are what allow us to practice body language, fashion, etc., and what give us the desire to do so. If we didn’t have such hangups then, from experience, understanding such things would be much harder, practicing them would be harder, and desiring them would require convincing.
Sure, but I think a superintelligence could figure it out, the same way a superintelligence could figure out quantum computing or self-replicating probes.
There is no accounting for preferences (or desires), meaning such things are not usually rationally chosen, and when they are, there is still a base of non-rational assumptions.
Agreed. This is the Humean theory of motivation, which I agree with. I don’t see how anything I said disagrees with the Humean theory of motivation.
This is said as a bad thing when it is a necessary thing.
I didn’t say it as a bad thing, but a correcting thing. People think they have more access to their motivations than they really do. Also, it’s not a necessary thing that we don’t have much cognitive access to our motivations. In fact, as neuroscience progresses, I expect us to gain much more access to our motivations.
JohnH, I kept asking what you meant because the claims I interpreted from your posts were so obviously false that I kept assuming I was interpreting you incorrectly. I’m still mostly assuming that, actually.
Sure, but I think a superintelligence could figure it out, the same way a superintelligence could figure out quantum computing or self-replicating probes.
Again, the superintelligence would need to have some reasons to desire to figure out any such thing and to think that it can figure out such things.
In fact, as neuroscience progresses, I expect us to gain much more access to our motivations.
Even if this is true, any motivation to modify our motivations would itself be based on our motivations.
the claims I interpreted from your posts were so obviously false that I kept assuming I was interpreting you incorrectly.
I do not see how anything I said is obviously false. Please explain this.
Again, the superintelligence would need to have some reasons to desire to figure out any such thing and to think that it can figure out such things.
Sure. Like, its utility function. How does anything you’re saying contradict what I claimed in my original post?
Sorry, I still haven’t gotten any value out of this thread. We seem to be talking past each other. I must turn my attention to more productive tasks now...
Hang on: you are going to claim that my comments are obviously false, then argue over definitions, and then, once the definitions are agreed upon, walk away without stating what is obviously false?
I seriously feel that I have gotten the runaround from you rather than, at any point, a straight answer. My only possible conclusions are that you are being evasive or that you have inconsistent beliefs about the subject (or both).
You seem to have used the words ‘heuristic’ and ‘intuition’ to refer to terminal values (e.g. a utility function) and perhaps Occam priors, as opposed to the usually understood meaning of “a computationally tractable approximation to the correct decision-making process (full Bayesian updating or whatever)”. It looks like you and lukeprog actually agree on everything that is relevant, but without generating any feeling of agreement. As I see it, you said something like “but such an agent won’t do anything without an Occam prior and terminal values”, to which lukeprog responded “but clearly anything you can do with an approximation you can do with full Bayesian updating and decision theory”.
Basically, I suggest you Taboo “intuition” and “heuristic” (and/or read over your own posts with “computationally tractable approximation” substituted for “intuition” and “heuristic”, to see what lukeprog thinks is ‘obviously false’).
Luke isn’t arguing over definitions as far as I could see; he was checking to see if there was a possibility of communication.
A heuristic is a quick and dirty way of getting an approximation to what you want, when getting a more accurate estimate would not be worth the extra effort/energy/whatever it would cost. As I see it, the confusion here arises from the fact that you believe this has something to do with goals and utility functions. It doesn’t. These can be arbitrary for all we care. But any intelligence, no matter its goals or utility function, will want to achieve things; after all, that’s what it means to have goals. If it has sufficient computational power handy it will use an accurate estimator; if not, a heuristic.
Heuristics have nothing to do with goals; they are adaptations, not ends.
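As a rough illustration of this distinction, here is a minimal Python sketch (the states, payoff numbers, and function names are all invented for the example, not taken from anyone’s comment): the agent’s goal is fixed throughout, and the only question is whether it spends the compute on an exact expected-value calculation or falls back on a cheap heuristic that looks only at the most probable state.

```python
# Toy setup (invented for illustration): the agent must pick one of a few
# actions, and each action's payoff depends on which hidden state the world
# is in.  The goal (maximise expected payoff) is fixed; only the estimation
# procedure changes with the available compute.
STATES = ["s1", "s2", "s3", "s4"]
PRIOR = {"s1": 0.4, "s2": 0.3, "s3": 0.2, "s4": 0.1}
PAYOFF = {  # PAYOFF[action][state]
    "wait":    {"s1": 0, "s2": 0, "s3": 0, "s4": 0},
    "gather":  {"s1": 3, "s2": 2, "s3": 1, "s4": -1},
    "explore": {"s1": 4, "s2": 1, "s3": -3, "s4": 0},
}

def exact_value(action):
    """Accurate estimator: full expected value over every state."""
    return sum(PRIOR[s] * PAYOFF[action][s] for s in STATES)

def heuristic_value(action):
    """Quick and dirty: consider only the single most probable state."""
    most_likely = max(PRIOR, key=PRIOR.get)
    return PAYOFF[action][most_likely]

def choose_action(compute_budget):
    """Use the accurate estimator when it is affordable, else the heuristic."""
    estimator = exact_value if compute_budget >= len(STATES) else heuristic_value
    return max(PAYOFF, key=estimator)

print(choose_action(compute_budget=10))  # 'gather'  (best in expectation)
print(choose_action(compute_budget=1))   # 'explore' (best in the most likely state)
```

Either way the agent is pursuing the same goal; swapping the heuristic in or out changes how well it pursues that goal, not what it wants.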
The human hangups are what allow us to practice body language, fashion, etc., and what give us the desire to do so. If we didn’t have such hangups then, from experience, understanding such things would be much harder, practicing them would be harder, and desiring them would require convincing. It is easier to win if one is endowed with the intuitions and heuristics that make practicing such things both desirable and natural.
Yeah, you probably do want to let the elephant be in charge of fighting or mating with other elephants, once the rider has decided it’s a good idea to do so.
Intuitions are usually defined as being inexplicable. A priori claims are usually explicable in terms of axioms, although axioms may be chosen for their intuitive appeal.
This is what I am reacting to, especially when combined with what I previously quoted.
Oh. So… are you suggesting that a software agent can’t learn body language, fashion, seduction, networking, etc.? I’m not sure what you’re saying.
I am saying that, without heuristics or intuitions, what is the basis for any desires? If an agent is a software agent without built-in heuristics and intuitions, then what are its desires, what are its needs, and why would it desire to survive, to find out more about the world, or to do anything? Where do the axioms it uses to think that it can modify the world or conclude anything come from?
Our built-in heuristics and intuitions are what allow us to start building models of the world on which to reason in the first place, and removing any of them demonstrably makes it harder to function in normal society or to act normally. Things that appear reasonable to almost everyone are utter nonsense and seem pointless to those who are missing some of the basic heuristics and intuitions.
If all such heuristics are taken away (e.g. by removing the limits of human hardware or software), then what is left to build on?
I’ll jump in this conversation here, because I was going to respond with something very similar. (I thought about my response, and then was reading through the comments to see if it had already been said.)
And, imagine what an agent could do without the limits of human hardware or software.
I sometimes imagine this, and what I imagine is that without the limits (constraints) of our hardware and software, we wouldn’t have any goals or desires.
Here on Less Wrong, when I assimilated the idea that there is no objective value, I expected I would spiral into a depression in which I realized nothing mattered, since all my goals and desires were finally arbitrary with no currency behind them. But that’s not what happened—I continued to care about my immediate physical comfort, interacting with people, and the well-being of the people I loved. I consider that my built-in biological hardware and software came to the rescue. There is no reason to value the things I do, but they are built into my organism. Since I believe that it was being an organism that saved me (and by this I mean the product of evolution), I do not believe the organism (and her messy goals) can be separated from me.
I feel like this experiment helped me identify which goals are built in and which are abstract and more fully ‘chosen’. For example, I believe I did lose some of my values, I guess the ones that are most cerebral. (I only doubt this because, with a spiteful streak and some lingering anger about the nonexistence of objective values, I could be expressing this anger by rejecting values that seem least immediate.) I imagine that with a heightened ability to edit my own values, I would attenuate them all, especially wherever there were inconsistencies.
These thoughts apply to humans only (that is, me), but I also imagine (entirely baselessly) that any creature without hardware and software constraints would have a tough time valuing anything. For this, I am mainly drawing on an intuition I developed that if a species were truly immortal, it would be hard-pressed to think of anything to do, or any reason to do it. Maybe some values of artistry or curiosity could be left over from an evolutionary past.
Depends what kind of agent you have in mind. An advanced type of artificial agent has its goals encoded in a utility function. It desires to survive because surviving helps it achieve utility. Read chapter 2 of AIMA for an intro to artificial agents.
Precisely: that utility function is a heuristic or intuition. Further, survival can only be desired according to prior knowledge of the environment, so again a heuristic or intuition. It is also dependent on the actions that it is aware it can perform (intuition or heuristic). One can only be an agent when placed in an environment, given some set of desires (heuristic) (and ways to measure accomplishing those desires), given a basic understanding of what actions are possible (intuition), and given whatever basic understanding of the environment is needed to be able to reason about it (intuition).
I assume chapter 2 of the 2nd edition is sufficiently close to chapter 2 of the 3rd edition?
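For concreteness, here is a minimal Python sketch of the kind of utility-based agent under discussion (a toy of my own, not code from AIMA; the states, payoffs, and the one-step-plus-future horizon are invented). The prior over states, the available actions, the payoffs, and the value placed on the future are all handed to the agent as givens rather than derived by it, and ‘wanting to survive’ shows up only instrumentally, because a destroyed agent forgoes all future utility.

```python
# A toy utility-based agent (illustrative only; names and numbers are invented,
# not taken from AIMA).  Everything the agent reasons from -- its prior, its
# action set, its payoffs, and the value of the future -- is supplied from
# outside; the agent derives none of it.

PRIOR = {"benign": 0.7, "hazard": 0.3}          # belief about the environment

# OUTCOME[action][state] -> (utility collected now, probability of surviving)
OUTCOME = {
    "work":         {"benign": (5, 1.0), "hazard": (5, 0.2)},
    "take_shelter": {"benign": (1, 1.0), "hazard": (1, 0.9)},
}

FUTURE_UTILITY = 20   # expected utility still collectable if the agent survives

def expected_utility(action):
    total = 0.0
    for state, p_state in PRIOR.items():
        reward_now, p_survive = OUTCOME[action][state]
        # Survival matters only instrumentally: it is the precondition for
        # collecting any further utility.
        total += p_state * (reward_now + p_survive * FUTURE_UTILITY)
    return total

best = max(OUTCOME, key=expected_utility)
print(best, {a: round(expected_utility(a), 2) for a in OUTCOME})
# -> take_shelter {'work': 20.2, 'take_shelter': 20.4}
```

Set FUTURE_UTILITY to 0 and the same agent picks ‘work’ instead: nothing here values survival for its own sake, which seems to be roughly the point both sides are circling.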
I don’t understand you. We must be using the terms ‘heuristic’ and ‘intuition’ to mean different things.
A pre-programmed set of assumptions or desires that are not chosen rationally by the agent in question.
edit: perhaps you should look up 37 ways that words can be wrong
Also, you appear to be familiar with some philosophy, so one could say they are a priori models and desires in the sense of Plato or Kant.
If this is where you’re going, then I don’t understand the connection to my original post.
Which sentence(s) of my original post do you disagree with, and why?
I have already gone over this in my comments above.
You need to assume inductive priors. Otherwise you’re pretty much screwed.
wedrifid has explained the restriction part well.
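To make the inductive-priors point concrete, here is a small Python sketch (my own toy example, with made-up hypotheses): candidate rules for continuing a 0/1 sequence are weighted by simplicity before any data arrive, and without some such prior every rule that fits the data so far, however gerrymandered, would remain equally credible.

```python
# Toy Occam-style inductive prior (illustrative only).  Each hypothesis is a
# rule for predicting the next bit of a sequence; its prior weight falls off
# exponentially with its description length, standing in for "simpler is
# more probable".

HYPOTHESES = {
    # name: (description_length, function predicting the next bit)
    "all_ones":  (1, lambda history: 1),
    "alternate": (2, lambda history: 1 - history[-1]),
    "ones_until_step_10": (8, lambda history: 0 if len(history) >= 10 else 1),
}

def prior(length):
    return 2.0 ** -length          # longer description, lower prior weight

def posterior(history):
    """Weight each hypothesis by prior * likelihood of the observed history."""
    scores = {}
    for name, (length, predict) in HYPOTHESES.items():
        likelihood = 1.0
        for i in range(1, len(history)):
            if predict(history[:i]) != history[i]:
                likelihood = 0.0    # hypothesis contradicted by the data
                break
        scores[name] = prior(length) * likelihood
    total = sum(scores.values()) or 1.0
    return {name: score / total for name, score in scores.items()}

print(posterior([1, 1, 1, 1, 1]))
# -> 'all_ones' gets nearly all of the weight; the gerrymandered rule is
#    penalised by the prior but is never ruled out by the data alone.
```

Drop the prior (treat every hypothesis that fits the data as equally likely) and, once arbitrarily gerrymandered rules are admitted, the observed 1s no longer favour any particular continuation; that is roughly the sense in which an agent without inductive priors is ‘screwed’.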
Thank you for the Taboo suggestion; I will check it over.
Intuitions are usually defined as being inexplicable. A priori claims are usually explicable in terms of axioms, although axioms may be chosen for their intuitive appeal.
precisely.