And, imagine what an agent could do without the limits of human hardware or software. Now that would really be something.
Such an agent may not have the limits of human hardware or software, but such an agent does require a similar set of restrictions and (from the agent’s point of view) irrational assumptions and desires, or it is my opinion that the agent will not do anything.
It could learn and practice body language, fashion, salesmanship, seduction, the laws of money, and domain-specific skills and win in every sphere of life without constant defeat by human hangups.
The human hangups are what allow us to practice body language, fashion, etc., and what give us the desire to do so. If we didn’t have such hangups then, from experience, understanding such things would be much harder, practicing them would be harder, and desiring them would require convincing. It is easier to win if one is endowed with the required intuitions and heuristics to make practicing such things both desirable and natural.
a real agent with the power to reliably do things it believed would fulfill its desires
There is no accounting for preferences (or desires), meaning such things are not usually rationally chosen, and when they are, there is still a base of non-rational assumptions. Homo economicus is just as dependent on intuition and heuristics as anyone else. The only place it differs, at least as classically understood, is in its ability to access near-perfect information and to calculate its preferences and probabilities exactly.
edit: Also
You do not have much cognitive access to your motivations.
This is said as a bad thing when it is a necessary thing.
Such an agent may not have the limits of human hardware or software, but such an agent does require a similar set of restrictions and (from the agent’s point of view) irrational assumptions and desires, or it is my opinion that the agent will not do anything.
Desires/goals/utility functions are non-rational, but I don’t know what you mean by saying that an artificial agent needs restrictions and assumptions in order to do something. Are you just saying that it will need heuristics rather than (say) AIXI in order to be computationally tractable? If so, I agree. But that doesn’t mean it needs to operate under anything like the limits of human hardware and software, which is all I claimed.
The human hangups are what allow us to practice body language, fashion, etc., and what give us the desire to do so. If we didn’t have such hangups then, from experience, understanding such things would be much harder, practicing them would be harder, and desiring them would require convincing.
Sure, but I think a superintelligence could figure it out, the same way a superintelligence could figure out quantum computing or self-replicating probes.
There is no accounting for preferences (or desires), meaning such things are not usually rationally chosen, and when they are, there is still a base of non-rational assumptions
Agreed. This is the Humean theory of motivation, which I agree with. I don’t see how anything I said disagrees with the Humean theory of motivation.
This is said as a bad thing when it is a necessary thing.
I didn’t say it as a bad thing, but as a correcting thing. People think they have more access to their motivations than they really do. Also, it’s not a necessary thing that we don’t have much cognitive access to our motivations. In fact, as neuroscience progresses, I expect us to gain much more access to our motivations.
JohnH, I kept asking what you meant because the claims I interpreted from your posts were so obviously false that I kept assuming I was interpreting you incorrectly. I’m still mostly assuming that, actually.
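A rough sketch of the distinction being drawn above (a toy illustration only; the utility function, outcome model, and action set here are invented for the example, not anything proposed in the thread): the agent’s goals and world-model are simply given, and “heuristic vs. full expected-utility maximization” is a separate question about how much computation the decision procedure spends on the same goals.

```python
# Toy sketch: same (non-rational) utility function and prior, two decision procedures.
import random

# Given, not derived: what the agent wants (a made-up utility function)...
def utility(outcome):
    return -abs(outcome - 42)  # "prefer outcomes as close to 42 as possible"

# ...and what it assumes about the world (a made-up model of action outcomes).
def possible_outcomes(action):
    return [action + noise for noise in (-2, -1, 0, 1, 2)]  # treated as equally likely

ACTIONS = range(100)

def exact_choice():
    """Full expected-utility maximization: evaluate every action exactly."""
    def expected_utility(action):
        outcomes = possible_outcomes(action)
        return sum(utility(o) for o in outcomes) / len(outcomes)
    return max(ACTIONS, key=expected_utility)

def heuristic_choice(samples=5):
    """A cheap approximation: look at a few random actions and ignore the noise model."""
    candidates = random.sample(list(ACTIONS), samples)
    return max(candidates, key=utility)

print(exact_choice())      # 42 -- the "ideal" answer
print(heuristic_choice())  # whatever the shortcut found; same goals, less computation
```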
Sure, but I think a superintelligence could figure it out, the same way a superintelligence could figure out quantum computing or self-replicating probes.
Again, the superintelligence would need to have some reason to desire to figure out any such thing, and some reason to think that it can figure out such things.
In fact, as neuroscience progresses, I expect us to gain much more access to our motivations.
Even if this is true, any motivation to modify our motivations would itself be based on our motivations.
the claims I interpreted from your posts were so obviously false that I kept assuming I was interpreting you incorrectly.
I do not see how anything I said is obviously false. Please explain this.
Again, the superintelligence would need to have some reason to desire to figure out any such thing, and some reason to think that it can figure out such things.
Sure. Like, its utility function. How does anything you’re saying contradict what I claimed in my original post?
Sorry, I still haven’t gotten any value out of this thread. We seem to be talking past each other. I must turn my attention to more productive tasks now...
Hang on, you are going to claim that my comments are obviously false, then argue over definitions, and once the definitions are agreed upon, walk away without stating what is obviously false?
I seriously feel that I have gotten the runaround from you rather than, at any point, a straight answer. My only possible conclusions are that you are being evasive or that you have inconsistent beliefs about the subject (or both).
You seem to have used the words ‘heuristic’ and ‘intuition’ to refer to terminal values (e.g. a utility function) and perhaps Occam priors, as opposed to the usually understood meaning “a computationally tractable approximation to the correct decision-making process (full Bayesian updating or whatever)”. It looks like you and lukeprog actually agree on everything that is relevant, but without generating any feeling of agreement. As I see it, you said something like “but such an agent won’t do anything without an Occam prior and terminal values”, to which lukeprog responded “but clearly anything you can do with an approximation you can do with full Bayesian updating and decision theory”.
Basically, I suggest you Taboo “intuition” and “heuristic” (and/or read over your own posts with “computationally tractable approximation” substituted for “intuition” and “heuristic”, to see what lukeprog thinks is ‘obviously false’).
Luke wasn’t arguing over definitions as far as I could see; he was checking to see if there was a possibility of communication.
A heuristic is a quick and dirty way of getting an approximation to what you want, when getting a more accurate estimate would not be worth the extra effort/energy/whatever it would cost. As I see it, the confusion here arises from the fact that you believe this has something to do with goals and utility functions. It doesn’t. These can be arbitrary for all we care. But any intelligence, no matter its goals or utility function, will want to achieve things; after all, that’s what it means to have goals. If it has sufficient computational power handy it’ll use an accurate estimator; if not, a heuristic.
Heuristics have nothing to do with goals: they are adaptations, not ends.
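To make the “accurate estimator if compute allows, otherwise a heuristic” point concrete, here is a toy sketch (the packing problem, item values, and budget threshold below are all invented for illustration; this is not a claim about how any real agent is built). The goal stays fixed; only the estimator changes with the available computation.

```python
# Toy sketch: a fixed goal, pursued exactly or by a quick-and-dirty heuristic.
import itertools
import random

# A made-up packing problem standing in for "what the agent wants to achieve".
ITEMS = [(random.uniform(0.5, 2.0), random.random()) for _ in range(16)]  # (weight, value)
CAPACITY = 4.0

def achieved_value(subset):
    """The fixed goal: total value carried, zero if the weight limit is broken."""
    weight = sum(w for w, _ in subset)
    return sum(v for _, v in subset) if weight <= CAPACITY else 0.0

def exact_best(items):
    """Accurate but exponential: try every subset of items."""
    return max((subset for r in range(len(items) + 1)
                for subset in itertools.combinations(items, r)),
               key=achieved_value)

def greedy_heuristic(items):
    """Quick and dirty: grab items by value density until the bag is full."""
    chosen, weight = [], 0.0
    for w, v in sorted(items, key=lambda item: item[1] / item[0], reverse=True):
        if weight + w <= CAPACITY:
            chosen.append((w, v))
            weight += w
    return chosen

def choose_plan(items, compute_budget):
    # The goal is the same either way; the budget decides which estimator to use.
    return exact_best(items) if compute_budget >= 2 ** len(items) else greedy_heuristic(items)

print(achieved_value(choose_plan(ITEMS, compute_budget=1_000)))      # small budget -> heuristic
print(achieved_value(choose_plan(ITEMS, compute_budget=1_000_000)))  # big budget -> exact answer
```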
The human hangups are what allow us to practice body language, fashion, etc., and what give us the desire to do so. If we didn’t have such hangups then, from experience, understanding such things would be much harder, practicing them would be harder, and desiring them would require convincing. It is easier to win if one is endowed with the required intuitions and heuristics to make practicing such things both desirable and natural.
Yeah, you probably do want to let the elephant be in charge of fighting or mating with other elephants, once the rider has decided it’s a good idea to do so.
If this is where you’re going, then I don’t understand the connection to my original post.
Which sentence(s) of my original post do you disagree with, and why?
I have already gone over this.
I don’t know what you mean by saying that an artificial agent needs restrictions and assumptions in order to do something.
You need to assume inductive priors. Otherwise you’re pretty much screwed.
wedrifid has explained the restriction part well.
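As for what “assume inductive priors” can cash out to, here is a small toy sketch (the bit-sequence task, the repeating-pattern hypothesis class, and the 2**-length weighting are all invented for illustration; a crude stand-in for an Occam prior, not anyone’s actual proposal). With no bias over hypotheses, the data seen so far says nothing about the next observation; with a simplicity bias, it does.

```python
# Toy sketch: prediction with a flat prior vs. an Occam-style simplicity prior.
from itertools import product

observed = [0, 1, 0, 1, 0, 1]  # the data seen so far; predict the next bit

def flat_prediction(data):
    """No inductive assumption: every consistent continuation is equally credible."""
    continuations = [seq for seq in product([0, 1], repeat=len(data) + 1)
                     if list(seq[:len(data)]) == data]
    return sum(seq[-1] == 0 for seq in continuations) / len(continuations)

def occam_prediction(data, max_pattern=4):
    """Occam prior: hypotheses are repeating patterns, weighted by 2**-len(pattern)."""
    weight_zero, weight_total = 0.0, 0.0
    for n in range(1, max_pattern + 1):
        for pattern in product([0, 1], repeat=n):
            generated = [pattern[i % n] for i in range(len(data) + 1)]
            if generated[:len(data)] == data:   # keep only hypotheses consistent with the data
                w = 2.0 ** -n                   # simpler (shorter) patterns weigh more
                weight_total += w
                weight_zero += w if generated[-1] == 0 else 0.0
    return weight_zero / weight_total

print(flat_prediction(observed))   # 0.5 -- "pretty much screwed": the past tells you nothing
print(occam_prediction(observed))  # 1.0 -- the alternating pattern carries all the weight
```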
Basically, I suggest you Taboo “intuition” and “heuristic”.
Thank you for that, I will check over it.