An AI that comes up with a solution that takes ten thousand more bits of optimization to find, but that is only a tiny bit better than the human solution, is not one to fear.
Wouldn’t AI effectiveness be different from optimization power? I mean, if a solution takes ten thousand extra bits to find, and is only a tiny bit better, that just means the universe doesn’t allow much optimization in that direction. I think noticing that is a feature of the “optimization power” criterion, not a bug.
That’s exactly it. Specifying the problem to be solved already fixes the relationship between the value of a solution and its percentile ranking, although you may not know this relationship without fully solving the problem. If all solutions have value between 0 and 1 (e.g. you’re trying to maximize your chance of succeeding at something) and half of all solutions have value at least 0.99 (so it takes just 1 bit of optimization power to get one of these), then an extra 100 bits of optimization power won’t do much. It’s not that the AI you ask to solve the problem isn’t good enough. It’s that the problem is inherently easy to find approximately optimal solutions to.