Is anyone at all working on classes of “unfair” problems, such as ones that give different utilities based on the amount of time spent computing? Or ones that take into account any type of resource used to make the decision (energy or memory)? This class seems important to me and less arbitrary than “unfair” problems that punish specific algorithms.
Wei Dai has a tentative decision theory that covers some of those cases. I didn’t find it very convincing, but it’s likely that I overlooked something. Any work on such problems would be very welcome, of course.
I’ll have a think. An optimal decision maker for all scenarios seems impossible if your utility is reduced by an amount proportional to the time taken to make the decision (“solving death” has this structure: fewer people die if you solve it earlier). The best general solution I can think of is an infinite table mapping scenarios to the decision computed by something like your UDT + oracle for that scenario. And this can be beaten in each individual scenario by a specialised algorithm for that scenario, which needs no lookup.
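Here is a minimal sketch of the structure I mean, assuming a linear time penalty; the cost constant and all the numbers are made up purely for illustration:

```python
# Sketch: time-penalized utility with a linear cost c per unit of
# deliberation time. All values are hypothetical placeholders.

def effective_utility(decision_value: float, time_spent: float, c: float = 1.0) -> float:
    """U(a, t) = V(a) - c * t: raw value of the decision minus the time penalty."""
    return decision_value - c * time_spent

# A general decision maker (e.g. the lookup table, or UDT + oracle) needs
# some positive lookup/deliberation time on every scenario...
t_general = 0.5

# ...so on any single scenario it is beaten by a specialised algorithm that
# hard-codes the same decision and spends (almost) no time.
t_specialised = 0.0

best_decision_value = 10.0  # both output the optimal decision here
print(effective_utility(best_decision_value, t_general))      # 9.5
print(effective_utility(best_decision_value, t_specialised))  # 10.0
```

The same decision scores strictly less through the general machinery, which is why no single decision maker can be optimal across all such scenarios.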
And it still contains an infinite quantity, which I don’t like in theories I might want to connect to the real world one day (and it requires an infinite amount of precomputation).
I wonder if there is a quality apart from strict optimality that we should look for. Making the optimal decision in most problems (and what is the correct weighting of scenarios)? Making the right decision eventually?
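One way to make the weighting question concrete, as a sketch: score an algorithm by its average utility under some distribution over scenarios, rather than demanding it win every one. The scenario names, probabilities, and utilities below are all hypothetical:

```python
# Sketch: rank decision algorithms by p-weighted average utility over
# scenarios, instead of strict per-scenario optimality. All inputs are
# made-up placeholders.

def weighted_score(utilities: dict[str, float], p: dict[str, float]) -> float:
    """score(A) = sum over scenarios s of p(s) * U(A, s)."""
    return sum(p[s] * u for s, u in utilities.items())

p = {"newcomb": 0.2, "solving_death": 0.5, "parfit": 0.3}
general = {"newcomb": 9.5, "solving_death": 9.5, "parfit": 9.5}
specialised_for_newcomb = {"newcomb": 10.0, "solving_death": 0.0, "parfit": 0.0}

print(weighted_score(general, p))                  # 9.5
print(weighted_score(specialised_for_newcomb, p))  # 2.0
```

Under any such weighting the general algorithm can come out ahead overall even though the specialised one beats it on its home scenario; the open question is which weighting, if any, is the principled one.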
Anyway, I’ll think some more. This class is definitely thornier and nastier than “fair” problems.
I recently made some progress on your question. Section 4 seems to be the most relevant.