Yudkowsky’s definition of ‘intelligence’ is about the ability to achieve goals in general, not about the ability to achieve the system’s goals. That’s why you can’t increase a system’s intelligence by lowering its standards, i.e., making its preferences easier to satisfy.
Actually I do define intelligence as ability to hit a narrow outcome target relative to your own goals, but if your goals are very relaxed then the volume of outcome space with equal or greater utility will be very large. However, one would expect that many of the processes involved in hitting a narrow target in outcome space (such that few other outcomes are rated equal or greater in the agent’s preference ordering), such as building a good epistemic model or running on a fast computer, would generalize across many utility functions; this is why we can speak of properties apt to intelligence apart from particular utility functions.
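One rough way to formalize “hitting a narrow outcome target” (an illustrative sketch; the symbols X, U, x*, and μ below are not fixed by anything said here) is to score an achieved outcome by how small a fraction of the outcome space is rated at least as highly:

$$
\mathrm{OP}(x^{*}) \;=\; -\log_{2} \frac{\mu\!\left(\{\, x \in X : U(x) \ge U(x^{*}) \,\}\right)}{\mu(X)}
$$

where X is the outcome space, U the agent’s utility function, x* the outcome actually achieved, and μ a measure over outcomes. Relaxed goals make the weakly preferred region large, so the same achieved outcome registers as a less narrow hit; and since the quantity depends only on the preference ordering induced by U, the processes that shrink it (better epistemic models, faster computation) can be discussed apart from any particular utility function.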
Hmm. But this just sounds like optimization power to me. You’ve defined intelligence in the past as “efficient cross-domain optimization”. The “cross-domain” part I’ve taken to mean that you’re able to hit narrow targets in general, not just ones you happen to like. So you can become more intelligent by being better at hitting targets you hate, or by being better at hitting targets you like.
The former are harder to test, but something you’d hate doing now could become instrumentally useful to know how to do later. And your intelligence level doesn’t change when circumstances shift which parts of your skillset are instrumentally useful. For that matter, I’m missing why it’s useful to think that your intelligence level could drastically shift if your abilities remained constant but your terminal values were shifted. (E.g., if you became pickier.)
No, “cross-domain” means that I can optimize across instrumental domains. Like, I can figure out how to go through water, air, or space if that’s the fastest way to my destination; I am not limited to land like a ground sloth.
Measured intelligence shouldn’t shift if you become pickier. If you could previously hit a point such that only 1/1000th of the space was preferred to it, we’d still expect you to hit around that narrow a volume of the space given your intelligence, even if you claimed afterward that such a point corresponded to only 0.25 utility on your 0-1 scale instead of 0.75, because you’d become pickier ([expected] utilities sloping more sharply downward with increasing distance from the optimum).
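A minimal sketch of that invariance, assuming a one-dimensional outcome space, a simple peaked utility function, and a cubed “pickier” rescaling (all illustrative choices, not taken from the exchange): the fraction of outcomes rated at least as highly as the achieved point is the same under either utility scale, because it depends only on the ordering.

```python
import numpy as np

# "Measured intelligence" here: the fraction of outcome space rated at least
# as highly as the outcome the agent actually hit. A monotone ("pickier")
# rescaling of utility preserves the ordering, so it leaves that fraction
# unchanged. Outcome space, utilities, and the achieved point are all
# illustrative assumptions.

rng = np.random.default_rng(0)
outcomes = rng.uniform(-1.0, 1.0, size=100_000)  # stand-in outcome space

def utility(x):
    return 1.0 - np.abs(x)        # peaked at x = 0

def pickier_utility(x):
    return utility(x) ** 3        # same optimum and ordering, but utility
                                  # slopes down much more sharply from the peak

achieved = 0.02                   # the point the agent actually hit

frac_original = np.mean(utility(outcomes) >= utility(achieved))
frac_pickier = np.mean(pickier_utility(outcomes) >= pickier_utility(achieved))

print(frac_original, frac_pickier)  # both ~0.02: about 1/50th of the space is
                                    # weakly preferred, under either scale
```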