Seek Mistakes in the Space Between Math and Reality
Epistemic status: Cranky grumbling about the kids these days and all of their math and how they are using it wrong
Note: I’ll be using a few examples as foils below. This isn’t intended as criticism of those foils; rather, they provide good examples of what I think is a broader issue to be aware of.
Girl, I knew you were heaven-sent
When I’d explored the mate-space thirty-seven percent
Call off the search, come home with me
You’re the highest-rated option after one-over-e
The last plane trip of Keltham’s first life starts out uneventful. He boards the aircraft, strolls a third of the way down the aisle with his eyes assessing all he passes, and then sits next to the first person who looks like a more promising seat-partner than all of the previous people he passed.
(mad investor chaos and the woman of Asmodeus, first lines).
The Secretary Problem is a famous result in mathematics. If you want to select the best option from a set, and you are presented with these options one at a time and must immediately choose either to accept or to reject each one, your optimal strategy is to auto-reject the first 1/e (about 37%) of options, and then accept the first option you see that is better than all previous options.
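For concreteness, here’s a minimal simulation sketch of that strategy. (The uniform random ‘quality scores’ and the specific numbers are my own illustration, not part of the classical statement.)

```python
import random

def secretary_trial(n, cutoff):
    """Run the classic strategy once: reject the first `cutoff` candidates,
    then take the first one better than everything seen so far."""
    candidates = [random.random() for _ in range(n)]
    best_seen = max(candidates[:cutoff])
    for c in candidates[cutoff:]:
        if c > best_seen:
            return c == max(candidates)  # did we land on the single best option?
    return candidates[-1] == max(candidates)  # never found a better one; stuck with the last

n, trials = 100, 100_000
cutoff = round(n / 2.718281828)  # n/e, i.e. about 37%
print(sum(secretary_trial(n, cutoff) for _ in range(trials)) / trials)
# prints roughly 0.37: the strategy lands on the very best candidate about 37% of the time
```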
This is obviously a valuable real-world strategy in various fields, such as dating. As you can see, it has been mathematically proven that the correct strategy is to spend 37% of your dating life rejecting anyone you see, no matter how desirable they may appear, and then accept people you meet thereafter only if they are more desirable than anyone you saw before. If you are currently in a relationship, and are young enough to have less than a third of your adult life spent, you should immediately break up with your partner.
What? What do you mean, that doesn’t sound right? It’s been mathematically proven! Surely you don’t think mathematicians have gotten this famous result wrong? Show me the flaw in the proof, then!
You can’t find a flaw? Well, in that case, I reiterate, it has been proven mathematically that marrying your college sweetheart is a foolish idea, that there exists a superior strategy. Sadly many foolish people, perhaps as a result of insufficient math education, implement such mathematically suboptimal strategies, claiming to find ‘true love’ even though they objectively reduce their odds of success. Perhaps some sort of intervention is required in order to correctly optimize their behavior.
This position is obviously stupid. However, the precise way in which it is stupid will be illuminating. It’s not that it’s wrong about the math. The math is correct. Instead, it’s wrong about the linkage between the math and the reality. The math uses certain assumptions that will turn out not to be true in reality, and without those assumptions the math doesn’t hold.
Assumptions made in the Secretary Problem, and ways in which they fail to align with reality:
The solution assumes that the only thing you care about is maximizing the probability that you end up with the best possible candidate. In reality this is not true:
You are not indifferent between the second-best possible candidate and the worst. Given a choice between ‘you are guaranteed the second-best candidate’ and ‘you have a 1% chance of getting the best candidate and a 99% chance of getting the worst’, you would pick the first. The math assumes you would prefer the second, and optimizes for the second. (See the sketch after this list for how much this changes the answer.)
There are costs to the process of searching (both costs of the search itself, and costs of the delay in selection). The math assumes you’re indifferent between finding your partner at the very beginning of the search process, or at the very end.
The solution assumes that you have no information regarding the distribution of candidates, and the only way you can evaluate candidates is by comparison to other candidates you have seen. In reality this is not true:
You have a reasonable idea of the distribution of candidates you may encounter. This lets you say ‘this is a strong candidate’ or ‘this is a weak candidate’ by reference not to other candidates but to your known distribution.
You may well be able to see more problems along these lines. I haven’t bothered to put much effort into exhaustively listing them—but I don’t need to, because one problem like this is sufficient to sink the whole argument.
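To put a rough number on that first failure mode (the payoff being ‘best or nothing’): if you instead score an outcome by how good your chosen candidate actually is, a much shorter rejection phase beats the 37% rule. A quick sketch, again with made-up uniform quality scores:

```python
import random

def chosen_value(n, cutoff):
    """Quality of the candidate you end up with under the cutoff rule."""
    xs = [random.random() for _ in range(n)]
    best_seen = max(xs[:cutoff])
    # Accept the first candidate better than everyone in the rejection phase,
    # or get stuck with the last candidate if no such candidate turns up.
    return next((x for x in xs[cutoff:] if x > best_seen), xs[-1])

n, trials = 100, 50_000
for cutoff in (37, 5):  # the classical ~n/e cutoff vs. a much shorter look-around
    avg = sum(chosen_value(n, cutoff) for _ in range(trials)) / trials
    print(cutoff, round(avg, 3))
# On a typical run the short cutoff wins (roughly 0.89 vs 0.80): once you care how good
# your partner is, rather than only whether they are strictly the best, the 37% rule
# stops being optimal.
```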
Mathematical arguments are fragile. If the math is entirely valid but even one assumption fails to match reality, your conclusion is not ‘nearly right’, it is entirely wrong. The Secretary Problem proof is an interesting piece of mathematics, and may be useful in some contexts, but it is completely worthless as a guideline for how to conduct yourself in real-world situations.
This is a useful illustration of a more general rule:
When someone says ‘Mathematical Result X proves Counter-Intuitive Thing Y’, do not look for problems in Mathematical Result X. First, there are unlikely to be problems with it; and second, it’s likely to be a very complicated proof, and may in fact be beyond your abilities to evaluate on any level beyond “Yup, Wikipedia sure says that that’s a theorem”.
Instead, look for problems in the linkage between that result and the real world.
An incomplete subset of things I’m thinking of here:
Claims that Arrow’s Impossibility Theorem proves that ‘the only stable form of government is a dictatorship’.
Every single real-world usage of the phrase ‘Modigliani-Miller’.
And a bonus complaint for mad investor chaos readers:
Consider a Prisoner’s Dilemma agent that Cooperates if it can prove its opponent will Cooperate with it, and Defects otherwise. Call it ‘Suspicious-FairBot.’
I assert that this agent will Cooperate with itself. When you ask me why, I mutter something about the ‘Assumable Provability Theorem’, and declare that this means ‘if something is provable within a proofsystem, starting from the premise that the quoted statement is provable inside the quoted proofsystem, then it’s thereby provable within the system’, and therefore that Suspicious-FairBot will Cooperate with itself.
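(Readers of the relevant decision-theory literature will recognize this as Löb’s theorem: if ⊢ (□P → P), then ⊢ P.)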
Now consider another agent that Defects if it can prove its opponent will Defect against it, and Cooperates otherwise. Call it ‘Naive-FairBot’.
Naive-FairBot is strictly more Cooperative than Suspicious-FairBot—it will Cooperate unless it is proven that its opponent Defects, while Suspicious-FairBot will Defect unless it is proven that its opponent Cooperates. Both behave the same if their opponent’s move can be proven, but if the opponent’s move cannot be proven Naive-FairBot will Cooperate while Suspicious-FairBot will Defect.
It is therefore very strange that the Assumable Provability Theorem argument declares that Suspicious-FairBot will Cooperate with itself, while Naive-FairBot will Defect against itself.
Do not get me started on Neutral-FairBot, which Cooperates if it can prove its opponent Cooperates, Defects if it can prove its opponent Defects, and therefore must both Cooperate and Defect against itself.
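In rough pseudocode (my sketch, not anything from the story; `provable` is a stub standing in for proof search over the agents’ quoted source code):

```python
def provable(claim: str) -> bool:
    """Stub standing in for proof search in the underlying proof system.
    A real version would enumerate proofs of the quoted claim."""
    raise NotImplementedError

def suspicious_fairbot(opponent) -> str:
    # Cooperate only if the opponent's Cooperation can be proven; otherwise Defect.
    if provable("opponent Cooperates with Suspicious-FairBot"):
        return "Cooperate"
    return "Defect"

def naive_fairbot(opponent) -> str:
    # Defect only if the opponent's Defection can be proven; otherwise Cooperate.
    if provable("opponent Defects against Naive-FairBot"):
        return "Defect"
    return "Cooperate"

def neutral_fairbot(opponent) -> str:
    # Match whichever of the opponent's moves can be proven.
    if provable("opponent Cooperates with Neutral-FairBot"):
        return "Cooperate"
    if provable("opponent Defects against Neutral-FairBot"):
        return "Defect"
    # What it does when neither can be proven is left unspecified.
```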
What’s going on here? Is the math wrong? Does the math uncover a secret power by which we can guarantee Cooperation through Suspicion?
Or is there at least one difference between a proof system and a prisoner’s dilemma agent?
This is too strong a claim, I think. Following the algorithm strictly to the letter seems silly—if you see a fantastic option early on you should probably take it. But the algorithm still reflects some more general insights, as is often the case.
(I agree with your general point, though. The Arrow’s Impossibility Theorem example at least is pretty good.)
Applied Divinity Studies wrote a related post that might be of interest: How Long Should you Take to Decide on a Career? They consider a modified version of the secretary problem that accounts for the 2 problematic assumptions you noted (binary payoff and ignorance of opportunity cost); you can play with the Colab notebook if you’re curious. Interestingly, varying the parameters tends to pull the optimal starting point earlier (contra my initial intuition), sometimes by a lot. The optimal solution is so parameter-dependent that it made me instinctively want to retreat to intuition, but of course ADS anticipates this and argues against it: “formal models may be bad, but at least they can disabuse us of even worse intuitive mental models”.
I suspect that many people have already met 37% of all potential partners they will ever meet by the end of college. Which could actually make this the mathematically best strategy.
There is also the consideration that the earlier you pick a partner, the longer you get to enjoy the benefits of having one.
1. The math isn’t ‘correct’, it’s about something else: you are a company making a decision in a hypothetical situation which is rather unlikely.
2. The argument in 1 is what makes the approach overall seem untenable.
This problem arises multiple times in rationalism. For instance, Aumann’s agreement theorem is true of idealised agents communicating using an idealised notion of “information”… and if those assumptions hold, then it’s true. But it is not true of realistic agents who can’t even agree on what “evidence” is. It is therefore a useful rationality skill to be able to spot the anti-pattern, rather than just naively assuming that “mathematically correct” is the same thing as “real world correct”.
(There’s a fuller list of examples here: https://www.lesswrong.com/posts/suxvE2ddnYMPJN9HD/realism-about-rationality
The idea that there is a simple yet powerful theoretical framework which describes human intelligence and/or intelligence in general. (I don’t count brute force approaches like AIXI for the same reason I don’t consider physics a simple yet powerful description of biology).
The idea that there is an “ideal” decision theory.
The idea that AGI will very likely be an “agent”.
The idea that Turing machines and Kolmogorov complexity are foundational for epistemology.
The idea that, given certain evidence for a proposition, there’s an “objective” level of subjective credence which you should assign to it, even under computational constraints.
The idea that Aumann’s agreement theorem is relevant to humans.
The idea that morality is quite like mathematics, in that there are certain types of moral reasoning that are just correct.)
The idea that decision time does not affect possible outcomes.
It’s not really an idea, since it’s not explicit (that X does not affect Y). It’s the absence of recognition (that X does affect Y).
You’re misunderstanding the math.
Your Suspicious-FairBot is isomorphic to the standard FairBot. Your Naive-FairBot is not strictly more cooperative than Suspicious-FairBot, as it will defect against FairBot, while Suspicious-FairBot will not. I’m pretty sure Neutral-FairBot is just inconsistent.
Going back to your larger point, I think that Löbian cooperation is pointing to a deep underlying principle, in a way which your other examples aren’t doing as much.
I’m not sure to what extent we actually disagree vs to what extent I’ve communicated poorly.
I’m not saying the math is wrong. The math seems counter-intuitive, but (as the whole point of the post says), I don’t disagree with the math. I’m not an expert in this kind of math, whenever I think about it it makes my head hurt, I’m too lazy to put much effort into it, and I don’t really expect people to lie about it.
I disagree with the claim that the math can be linked to real-world questions.
If you are considering playing a Prisoner’s Dilemma, and given the choice between these two opponents:
Alice says ‘I will Cooperate if I know you’ll Cooperate, otherwise I’ll Defect’
Bob says ‘I will Defect if I know you’ll Defect, otherwise I’ll Cooperate’
Then as an empirical matter I think it’s pretty clear you should choose Bob as your opponent.
And if two people like Bob play a Prisoner’s Dilemma, I think they’re much more likely to Cooperate than two people like Alice.
I don’t think the math is wrong, but I think that the attempt to extend from the math to “two people who both say ‘I’ll cooperate if I know you’ll cooperate’ will therefore cooperate” is invalid.
Not entirely my field, but I think what’s going on is that it could be defined in two different ways, with if-statements in different orders—either this (#1):
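(Rough sketches only; in both snippets below, `provable` is a placeholder for proof search over the relevant proof system.)

```python
# Variant #1: look for a proof of the opponent's Cooperation first.
def fairbot_1(opponent):
    if provable("opponent(fairbot_1) == Cooperate"):
        return "Cooperate"
    return "Defect"
```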
or this (#2):
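```python
# Variant #2: look for a proof of the opponent's Defection first.
def fairbot_2(opponent):
    if provable("opponent(fairbot_2) == Defect"):
        return "Defect"
    return "Cooperate"
```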
and I think (though open to correction) the mathematical answer is that #1 will cooperate with itself and #2 will defect against itself (which, again, is counterintuitive, and even if the math works out that way I deny that the math has any useful application to the real-world problem of what a person who says “I’ll Defect if I know you’ll Defect and Cooperate if I know you’ll Cooperate” will do).
Third option:
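(Something along these lines, with `proofs_in_order()` as a placeholder for enumerating all proofs, shortest first:)

```python
# Variant #3: act on whichever relevant proof turns up first.
def fairbot_3(opponent):
    for proof in proofs_in_order():  # placeholder: enumerate proofs, shortest first
        if proof.concludes("opponent(fairbot_3) == Cooperate"):
            return "Cooperate"
        if proof.concludes("opponent(fairbot_3) == Defect"):
            return "Defect"
    # If neither statement is provable, the search never terminates.
```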
(Kindly ignore the terrible pseudocode.)
It’s been ages since I studied provability logic, but those bots look suspicious to me. Has anybody actually formalized them? Like, the definition of FairBot involves itself, so it is not actually a definition. Is it a statement that we consider true? Is it just a statement that we consider provable? Why wouldn’t adding something like this to GL result in a contradiction?
Ah, you are part of today’s Lucky 10,000. Quines are a thing.
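For instance, here is a minimal Python quine, a program whose output is exactly its own source code:

```python
s = 's = %r\nprint(s %% s)'
print(s % s)
```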
It is possible to write a program that does arbitrary computation on its own source code, by doing variants of the above. (That being said, you have to be careful to avoid running into Halting-problem variants.)
Thank you for treating it as a “today’s lucky 10,000” event. I am aware of quines (though not much more than just ‘aware’), and what I am worried about is whether the people who created FairBot were careful enough.
I haven’t spotted any inherent contradictions in the FairBot variants thus far.
That being said, now that I look at it there’s no guarantee that said FairBot variants will ever halt… in which case they aren’t proper agents, in much the same way that while (true) {} isn’t a proper agent.
I’ve seen ‘they run for some large amount of time/number of possible proofs, then they switch to their codepath for no proof found’. Not in code, but in a paper that proved, or tried to prove, something like this:
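Löb’s theorem, roughly: if ⊢ (□P → P), then ⊢ P.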
except with the ‘searching for a long time/lots of proofs’ condition, rather than searching forever.
It might have been this one:
http://intelligence.org/files/ParametricBoundedLobsTheorem.pdf
Sure… in which case there’s a third option, where there is no proof that they cooperate/don’t cooperate with themselves, precisely because they have limited runtime and so stop looking at proofs ‘prematurely’.
(So, for instance, there might be a proof that an agent that searched for 2^2000 clock cycles would cooperate with itself, and so it satisfies Löb’s Theorem… but no easier-to-find proof.)
Your quoted bit about Löb’s Theorem says nothing about how complex said proof will be.
Yes, you can play with them yourself: https://github.com/machine-intelligence/provability
Impredicative definitions are quite common in mathematics.
“Definition” was probably the wrong word to use. Since we are talking in the context of provability, I meant “a short string of text that replaces a longer string of text for ease of the human reader, but is parsed as the longer string of text when you actually work with it”. Impredicative definitions are indeed quite common, but they go hand in hand with proofs of their consistency, like a proof that a functional equation has a solution, or an example of a group to prove that the group axioms are consistent, or, more generally, a model of some axiom system.
Sadly I am not familiar with Haskell, so your link is of limited use to me. But it seems to contain a lot of code and no proofs, so it is probably not what I am looking for anyway.
What I am looking for probably looks like a proof of “GL⊢(∃f ∀g (f(g)=□g(f)))”. I am in many ways uncertain about whether this is the right formula (is GL the right system to use here (does it even support quantifiers over functional symbols? if not, there should be an extension that does); is “does f exist” the right question to be asking here; does “f(g)=□g(f)” correctly describe what we want from FairBot). But some proof of that kind should exist, otherwise why should we think that such a FairBot exists/is consistent?
Is that the 5 and 10 problem where you can prove both that you’d take $5 and that you’d take $10?
Looks like it’s different:
It responds with the behavior matching that of the first proof it finds. The question of what it does given its own source code is a special case. There isn’t a divergence between proof and behavior because it does whatever the proof says.
There’s no guarantee that any of these FairBot variants will ever halt.
There’s also discussion here that seems to build elaborate schemes around null sets[1].
(By which I mean, building interesting theories that e.g. require agents to solve the Halting problem.)
This is another case where the math is correct, it just doesn’t quite say what you thought it did. A implies B only says something about B if A is true, after all.
(I think this forum would benefit from studying impossibility results more, and computability in general.)
[1] “Assume you have some X satisfying property P”, where it turns out that there is no X satisfying property P.