The objective function that measures the value of the solution should be a part of the problem: it’s part of figuring out what counts as a solution in the first place. Maybe in some cases the solution is binary: if you’re solving an equation, you either get a root or you don’t.
In some cases, the value of a solution is complicated to assess: how do you trade off a cure for cancer that fails in 10% of all cases against a cure for cancer that gives the patient a permanent headache? Either way, you need an objective function to tell the AI (or the human) what a “cure for cancer” is; possibly your intuitive understanding of this is incomplete, but that’s another problem.
Edit: Your objection does have some merit, though, because you could have two different utility functions that yield the same optimization problem. For instance, you could be playing the stock market to optimize $$ or log($$), and the ranking of solutions would be the same (expected values would be thrown off, but that’s another issue); however, the concept of a solution that’s “a tiny bit better” is different.
Only pure solutions, though: they would rank lotteries differently, since the expected value of log($$) isn’t a monotone function of the expected value of $$.
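To make that concrete, here's a minimal sketch in Python (the specific payoffs are made up for illustration) showing that $$ and log($$) agree on the ranking of sure outcomes but disagree on a lottery:

```python
import math

# Two utility functions related by a monotone transformation.
def u_linear(x):
    return x

def u_log(x):
    return math.log(x)

# Both utilities rank sure (deterministic) outcomes identically.
sure_outcomes = [100, 150, 200]
print(sorted(sure_outcomes, key=u_linear))  # [100, 150, 200]
print(sorted(sure_outcomes, key=u_log))     # [100, 150, 200]

# A lottery: 50% chance of 100, 50% chance of 200, vs. a guaranteed 150.
lottery = [(0.5, 100), (0.5, 200)]
sure_thing = 150

def expected_utility(lottery, u):
    return sum(p * u(x) for p, x in lottery)

# Linear utility is indifferent: EU(lottery) = 150 = u(150).
print(expected_utility(lottery, u_linear), u_linear(sure_thing))
# Log utility prefers the sure thing:
# 0.5*log(100) + 0.5*log(200) ≈ 4.95 < log(150) ≈ 5.01.
print(expected_utility(lottery, u_log), u_log(sure_thing))
```

This is just risk aversion via Jensen's inequality: the concave log utility penalizes the spread of the lottery, so the two objectives stop agreeing as soon as chance is involved.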