I think clarifying what you mean by “minimal”, “optimization”, “hard”, and “different conditions” would help.
I’ll take your problem “definition” as given: a distribution D, a reward function R, some circuit C, and the expectation E of R(x, C(x)) for x drawn from D.
Do we want the minimal C that maximizes E? Or do we want the minimal C that satisfies E > 0? These are not necessarily equivalent, because max(E) might be non-computable while deciding E > 0 is not. A simple example: let R(x, C(x)) be the number of 1s that the Turing machine with Gödel number C(x) writes before halting, provided that machine has at most x states and halts (with R = −1 for machines that diverge or have too many states). E > 0 just means outputting any TM that halts and writes at least one 1; the smallest circuit for that should be easy to find. Maximizing E, on the other hand, amounts to computing the Busy Beaver number, which is notoriously non-computable.
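Spelled out (my notation, not from the post), writing $T_m$ for the Turing machine with Gödel number $m$:

```latex
R(x, m) =
\begin{cases}
\#\{\text{1s that } T_m \text{ writes before halting}\} & \text{if } T_m \text{ has at most } x \text{ states and halts,} \\
-1 & \text{otherwise.}
\end{cases}
```

Any C whose outputs index halting, 1-writing machines already gives E > 0, while $\sup_C E$ forces C(x) to name a Busy Beaver champion among the x-state machines, so the optimal value encodes BB(x).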
Should R be computable, semi-decidable, or arbitrary? The R in the example above is non-computable (evaluating it means solving the halting problem), yet finding a C with E > 0 is computable.
What does “different conditions” mean? From your problem definition it could mean changing D or changing R (otherwise you couldn’t really reuse the same circuit). I’m especially unclear about what a daemon would be in this scenario. “Slightly changing D results in E < 0” seems like a candidate definition. But then the “minimal C that satisfies E > 0” is probably a bad object to study: the chance that a benign, minimal C flips from E > 0 to E < 0 under a slight change of D seems pretty high. Maybe C should instead be required to be (locally) continuous-ish with respect to D (no idea how to formulate that precisely for boolean circuits): small changes in D should not trigger large changes in E.
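To make that worry concrete, here is a toy sketch (all names and numbers are mine, purely illustrative). Since E is linear in D, a bounded R already gives |E_D − E_D′| ≤ max|R| · ‖D − D′‖₁, so E is automatically “continuous” in D; the problem is that a benign C with a small margin over 0 can still flip sign under a slight shift of D:

```python
def expected_reward(D, R, C):
    """E[R(x, C(x))] for a finitely supported distribution D (dict: input -> probability)."""
    return sum(p * R(x, C(x)) for x, p in D.items())

# Hypothetical toy instance: 2-bit inputs, reward +1 if the circuit's
# output matches the first input bit, -1 otherwise.
R = lambda x, y: 1 if y == x[0] else -1
C = lambda x: x[1]  # a "benign" circuit that merely correlates with x[0] under D1

D1 = {(0, 0): 0.3, (0, 1): 0.2, (1, 0): 0.2, (1, 1): 0.3}  # original distribution
D2 = {(0, 0): 0.2, (0, 1): 0.3, (1, 0): 0.3, (1, 1): 0.2}  # slightly shifted

e1 = expected_reward(D1, R, C)            # positive under D1
e2 = expected_reward(D2, R, C)            # negative under D2: the sign flipped
l1 = sum(abs(D1[x] - D2[x]) for x in D1)  # L1 distance between D1 and D2

# E moves by at most max|R| * ||D1 - D2||_1 (here max|R| = 1),
# but that is enough to cross zero when the margin is small.
assert abs(e1 - e2) <= 1 * l1 + 1e-9
```

So “E > 0 survives small perturbations of D” is really a condition on the margin of E, not just on the circuit.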
I skimmed through your linked paper on obfuscation, and those circuits are only obfuscated with respect to some (bounded) complexity class. Classical boolean circuit minimization is in PSPACE and EXPTIME (if I’m not mistaken), and even in your problem statement you can easily (from a computability standpoint) check whether two circuits compute the same function: just check C(x) == C’(x) for every input x, of which there are only finitely many. It’s just not efficient.
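The exhaustive check is essentially a one-liner (a sketch with my own names, representing circuits as plain functions on bit tuples):

```python
from itertools import product

def same_circuit(c1, c2, n):
    """Extensional equality of two n-input boolean circuits, decided by
    brute force over all 2^n inputs -- computable, but exponential in n."""
    return all(c1(x) == c2(x) for x in product((0, 1), repeat=n))

and_gate = lambda x: x[0] & x[1]
demorgan = lambda x: 1 - ((1 - x[0]) | (1 - x[1]))  # NOT(NOT a OR NOT b)
or_gate  = lambda x: x[0] | x[1]
```

Here `same_circuit(and_gate, demorgan, 2)` is True and `same_circuit(and_gate, or_gate, 2)` is False; the point is only that nothing here is uncomputable, it is just 2^n work.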
My intuition tells me that your problems mainly arise because we want to impose some reasonable complexity constraints somewhere, but I’m not sure where in the problem formulation would be a good place to put them. A lot of optimization problems have well-defined global maxima that are utterly intractable to compute or even approximate, probably most of the interesting ones. Even if you can show that minimal circuits are not daemons (however you’ll end up defining them), that will not actually help you: given some circuit C, if you cannot compute a corresponding minimal circuit, you cannot check whether C could be a daemon. And even if you were handed the minimal circuit C’, you could not check in polynomial time whether C == C’ (circuit equivalence is coNP-complete, by reduction from SAT).
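For intuition on why computing the minimal circuit is the expensive part, here is a brute-force minimizer over the AND/OR/NOT basis (a sketch with invented names; it works on explicit truth tables, so the search is doubly exponential in the input width and only usable for toy n):

```python
from itertools import product

def minimal_formula_size(target, n):
    """Smallest number of AND/OR/NOT gates needed to compute the truth
    table `target` (a tuple of 2^n output bits) on n inputs, found by
    dynamic programming over truth tables: a minimal formula of size s
    is a gate applied to minimal subformulas of total size s - 1."""
    inputs = list(product((0, 1), repeat=n))
    # bare input variables cost 0 gates
    best = {tuple(x[i] for x in inputs): 0 for i in range(n)}
    size = 0
    while target not in best:
        size += 1
        found = {}
        for a, ca in best.items():
            if ca + 1 == size:  # NOT gate on a minimal subformula
                t = tuple(1 - v for v in a)
                if t not in best:
                    found[t] = size
        for a, ca in best.items():
            for b, cb in best.items():
                if ca + cb + 1 == size:  # AND / OR on minimal subformulas
                    for t in (tuple(x & y for x, y in zip(a, b)),
                              tuple(x | y for x, y in zip(a, b))):
                        if t not in best:
                            found[t] = size
        best.update(found)
        if size > 2 ** (2 ** n):  # safety bound; the basis is complete, so unreachable
            return None
    return best[target]
```

For n = 2 (tables ordered over inputs (0,0), (0,1), (1,0), (1,1)) it reports 1 gate for AND and 4 gates for XOR, e.g. (a OR b) AND NOT(a AND b). Already the table has 2^n entries and there are 2^(2^n) n-input functions to search through, which is the intractability point above in miniature.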
(First time posting here, hope to contribute)