Interesting discussion. What do you want to do with this measure once you have it? I can see how it would be elegant to have it, but it seems useful to think about how you would use this measure to inform your decisions.
Those building optimization algorithms need measures of their performance on the target problem. They generally measure the space-time resources needed to find a solution. AFAIK, few bother with measuring solution quality Eliezer-style. Counting the better solutions not found is often extremely expensive, and dividing by the size of the solution space is usually pointless.
Well, it’s always good to have a measure of intelligence, if we’re worrying about highly intelligent beings. Also, I was hoping that it might give a way of formulating “reduced impact AI”. Alas, it seems to be insufficient.