Well, cooperating in the Prisoner’s Dilemma against an identical agent is a simple special case of UDT reasoning, and people are writing math papers about it as you can see. Besides, if UDT is trivial, how come it has unsolved problems? Off the top of my head: spurious proofs, agent simulates predictor, logical uncertainty… The fact that already solved problems look trivial isn’t strong evidence to me, because I know how much effort it takes to get something like Counterfactual Mugging from the “impossible” pile into the “trivial” pile. Actually I’m quite flattered when people on LW now consider UDT solutions trivial, e.g. in the discussion of Psy-Kosh’s problem.
One simple heuristic formula I've heard for estimating problem difficulty is this:
D = log(T)*N
where D is difficulty, T is the length of time the problem has stayed open after being posed, and N is the average number of people working on it.
Rationale:
Since we expect P != NP, finding proofs is an exponential search problem. That means that in time T we can only cover problems whose shortest proofs are about log(T) long. Assuming linear scaling as more people are added, we get a factor of N (in reality the scaling is sublinear, but this is a heuristic...).
The point is that just because a field has open problems doesn't mean those problems are necessarily very hard. It could be that they either haven't been open very long or that not enough people care.
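As a toy sketch of the heuristic (Python; the figures and the choice of days as the time unit are made up purely for illustration):

```python
import math

def difficulty(days_open, avg_people):
    """Heuristic difficulty estimate: D = log(T) * N.

    Uses the natural log; the base only rescales the heuristic.
    """
    return math.log(days_open) * avg_people

# Hypothetical comparison: a problem open for ~10 years with ~5 people
# on it, versus one open for a year with ~100 people on it.
print(difficulty(3650, 5))    # ~41
print(difficulty(365, 100))   # ~590
```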
If D is the proof length, surely your reasoning should lead to something like log(T*N) rather than log(T)*N? It seems to me that N people working for T days is T*N person-days, so the two variables should be mostly exchangeable; it's weird to see only one of them under the logarithm...
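To see the difference concretely (made-up numbers): swapping T and N leaves the total person-days T*N, and hence log(T*N), unchanged, but changes log(T)*N by a large factor.

```python
import math

# Same total effort (10,000 person-days) split two different ways.
for T, N in [(1000, 10), (10, 1000)]:
    print(f"T={T}, N={N}: log(T)*N = {math.log(T) * N:.1f}, "
          f"log(T*N) = {math.log(T * N):.1f}")
# log(1000)*10 ≈ 69.1 and log(10)*1000 ≈ 2302.6, while log(10000) ≈ 9.2 both times.
```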
Since we expect P != NP, finding proofs is an exponential search problem.
P != NP does not imply that one can't do better than an exponential search for finding proofs. For instance, an O(n^log(n)) algorithm for finding proofs would not contradict P != NP.
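To spell out why (taking logs base 2): n^(log n) sits strictly between the polynomials and the exponentials, so such an algorithm would neither place proof search in P nor amount to an exponential search.

```latex
n^{\log n} = 2^{(\log n)^2},
\qquad
2^{k \log n} = n^{k} \;\ll\; n^{\log n} \;\ll\; 2^{\varepsilon n}
\quad\text{for any fixed } k, \varepsilon > 0 \text{ and all sufficiently large } n.
```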