Don’t expect your life partner to be better than your exes in more than one way: a mathematical model
In this post, I analyze a multidimensional version of the dating problem (more commonly called the secretary problem). If you are familiar with this problem you can search for “For this post, I want to make a slight change to the model” and skip to that. Otherwise, here is an intro to the usual dating problem:
Suppose you are a serially monogamous dater and your goal is to eventually find someone to marry. Then you will have to make decisions of the type: should I settle with my current partner or reject them in the hope of finding someone better?
This is the setup for the dating problem. In this model, we assume there is some maximal number, n, of partners you can meet. Here "meet" can mean different things. You could, for example, interpret it to mean "going on a single date" or "having dated for a year". Depending on the interpretation, n will have very different sizes. The model does not take into account that information is gradually revealed over time.
As soon as you have “met” someone, you know how good they are compared to previous people you have met, but you have no further information about how good they are compared to people you have not met yet. You will meet the partners in completely random order. You have to reject one partner before you can meet another one, and you can never go back to a person you have rejected. How do you maximize the probability of settling with the best partner?
The optimal strategy is to reject the first k people, where k is around n/e, and e is the base of the natural logarithm. After this exploration phase, you settle down with the next person you meet who is better than all previous partners.
Two types of failures can happen when following this algorithm:
Your best partner is among the first k people you meet, so you have rejected them.
You never meet your best partner. Instead, you settle down with someone else who is better than all of the first k candidates, but whom you met before you had explored enough to reach your best partner.
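To illustrate, here is a minimal simulation sketch of the classic strategy (the function name and n = 100 are my own choices, not from the original analysis). The success probability should peak around k ≈ n/e ≈ 37 and come out close to 1/e ≈ 0.368:

```python
import random

def secretary_success(n, k, trials=100_000):
    """Estimate the probability of settling with the single best of n
    candidates when we reject the first k and then take the first
    candidate who beats everyone seen so far."""
    wins = 0
    for _ in range(trials):
        ranks = list(range(n))          # n - 1 is the best candidate
        random.shuffle(ranks)
        best_rejected = max(ranks[:k])  # best among the rejected first k
        chosen = next((r for r in ranks[k:] if r > best_rejected), None)
        wins += chosen == n - 1         # did we settle with the overall best?
    return wins / trials

n = 100
for k in (20, 37, 50):                  # n/e is about 37 for n = 100
    print(k, secretary_success(n, k))
```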
For this post, I want to make a slight change to the model: we have an infinite sequence of potential partners (yeah, I know, Tinder has messed up my world model) with identically distributed random attractiveness. You have dated the first k and plan to settle down with the next person you meet who is better than all previous partners. You know there are better potential partners out there, but all you care about is settling down with someone better than all the previous ones. Now there is only one type of failure:
a. You run out of time before you meet someone better than the first k.
Here, running out of time can mean dying or getting too old to have kids (in case you want kids). If you only have time to meet n people, this is just a new version of failure 1 above, since the best partner among the n you met will have been among the first k. Again, the probability of this failure is the same: k/n. Although the probability of eventually settling is 1, the expected time $T$ before settling is given by

$$E[T] = \sum_{n=0}^{\infty} P(T > n) \;\ge\; \sum_{n=k}^{\infty} \frac{k}{n},$$

where $P(T > n) = \frac{k}{n}$ because you are still unsettled after $n$ meetings exactly when the best of those $n$ people is among the first $k$; so it is infinite, since the harmonic series diverges.
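As a quick empirical check of the k/n survival probability in this infinite version (the cap, k, and trial counts below are arbitrary choices of mine):

```python
import random

def settle_time(k, cap):
    """Meet iid uniform partners. After rejecting the first k, return
    the index of the first partner better than everyone before them,
    or None if this does not happen within `cap` meetings."""
    best = max(random.random() for _ in range(k))
    for n in range(k + 1, cap + 1):
        x = random.random()
        if x > best:
            return n
        # x was not a record, so `best` is still the maximum seen so far
    return None

k, cap, trials = 5, 1_000, 50_000
times = [settle_time(k, cap) for _ in range(trials)]
never = sum(t is None for t in times) / trials
print(f"P(not settled within {cap}) ~ {never:.4f}  vs  k/cap = {k / cap}")
# Raising `cap` makes the sample mean of the settling time keep growing,
# reflecting the infinite expectation derived above.
```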
Some comments on the assumptions
This is clearly an unrealistic model:
We assume that the attractiveness of the partners is identically distributed. In reality, it would change as you go through life. Even your way of judging the attractiveness of partners will change.
We assume that you can always choose to keep a partner. In real life, the partner might decide to leave you. In that case, you can still use the model, we just redefine “met” to only include people who have not rejected you. If someone rejects you, then for the purpose of this model, you simply pretend you have never met them. Easier said than done, I know.
We assume that the “attractiveness” of a partner is something that is revealed in an instant rather than slowly revealed.
In this post, I want to look at a different assumption of the model.
The multidimensional problem
Previously, we assumed that there is a single number (or at least an element in a totally ordered set) that defines the attractiveness of a partner. In reality, different people have different strengths and weaknesses. Let's assume that there are two parameters you care the most about, X and Y. These parameters can be anything, concrete or abstract, objective or subjective: for example, how good a partner you think they would be overall, how good a parent you think they would be, how much you enjoy spending time with them, how good the sex with them is, how well-aligned your values are, or how good-looking they are.
An interesting case is when X and Y are different estimates of the same variable, for example, X = how good you feel about this partner on average when you wake up together in the morning, Y= how good you feel about this partner on average when spending time with their family.
Suppose your settling strategy is to reject the first person and afterward settle down with the first person you meet who is better at both X and Y than any previous partner you have met. What will happen?
We still have the same failure type as before:
a. You run out of time before meeting the first person who satisfies the stopping criterion.
But now there is also a new type of failure:
b. Even if you had time to meet infinitely many partners, you would never settle.
How likely is this? It depends on whether X and Y are correlated. First, let us assume that they are independent.
We have $P(X_2 > X_1) = \frac{1}{2}$ and by independence $P(X_2 > X_1 \text{ and } Y_2 > Y_1) = \frac{1}{2} \cdot \frac{1}{2} = \frac{1}{4}$. So the probability that you will not settle with person 2 is $\frac{3}{4}$. Similarly, for person 3, the probability that you won't settle (given that you haven't settled before meeting this person) is $\frac{8}{9}$, and for person $n$ it is $\frac{n^2 - 1}{n^2} = 1 - \frac{1}{n^2}$. We can now compute the probability that you would never settle even given infinite time:

$$\prod_{n=2}^{\infty}\left(1 - \frac{1}{n^2}\right) = \frac{1}{2}.$$
This is the probability if you consider settling already after $k = 1$. If you only consider settling after having rejected $k$ partners, the probability that you will never settle with this strategy becomes $\frac{k}{k+1}$.
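To see where $\frac{k}{k+1}$ comes from, note that the partial products telescope:

$$\prod_{n=k+1}^{N}\left(1 - \frac{1}{n^2}\right) = \prod_{n=k+1}^{N}\frac{(n-1)(n+1)}{n^2} = \frac{k}{k+1} \cdot \frac{N+1}{N} \;\longrightarrow\; \frac{k}{k+1} \quad \text{as } N \to \infty.$$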
If you consider a higher number of parameters than two, your odds become a lot worse. Here is a table of the probability that you eventually settle, for a given number of parameters and a given number of exes after which you start considering settling. The rows are the number of parameters (p) and the columns are the number of exes you have already rejected (k).
| p\k | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10 |
|-----|---|---|---|---|---|---|---|---|---|----|
| 1 | 100% | 100% | 100% | 100% | 100% | 100% | 100% | 100% | 100% | 100% |
| 2 | 50.0% | 33.3% | 25.0% | 20.0% | 16.7% | 14.3% | 12.5% | 11.1% | 10.0% | 9.1% |
| 3 | 19.1% | 7.5% | 3.9% | 2.4% | 1.6% | 1.2% | 0.9% | 0.7% | 0.6% | 0.5% |
| 4 | 8.1% | 2.0% | 0.7% | 0.4% | 0.2% | 0.1% | 0.1% | 0.1% | 0.0% | 0.0% |
| 5 | 3.7% | 0.6% | 0.2% | 0.1% | 0.0% | 0.0% | 0.0% | 0.0% | 0.0% | 0.0% |
| 6 | 1.7% | 0.2% | 0.0% | 0.0% | 0.0% | 0.0% | 0.0% | 0.0% | 0.0% | 0.0% |
The math also gets messier: I did not find a simple formula for the values with more than two parameters, so I simply truncated the computation after 999 meetings.
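Here is a minimal sketch of that computation (the function name and loop bounds are my own choices; person $n$ is a simultaneous record in all $p$ independent parameters with probability $1/n^p$, independently across $n$). Note that truncating at 999 meetings slightly underestimates the exact 100% in the p = 1 row:

```python
# Probability of eventually settling with p independent parameters,
# after rejecting the first k partners:
#   P(settle) = 1 - prod_{n > k} (1 - 1/n^p),
# with the product truncated at n_max meetings.

def settle_probability(p, k, n_max=999):
    never = 1.0
    for n in range(k + 1, n_max + 1):
        never *= 1.0 - 1.0 / n**p
    return 1.0 - never

for p in range(1, 7):
    row = [settle_probability(p, k) for k in range(1, 11)]
    print(p, " ".join(f"{v:6.1%}" for v in row))
```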
For correlated parameters, I haven't tried to find a closed formula. Instead, I have simulated 20,000 sequences, each of up to 200,000 meetings. I assumed all parameters were normally distributed with variance 1 and covariance of either 0.2, 0.5, or 0.8. Here are the tables.
0.2:
| p\k | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10 |
|-----|---|---|---|---|---|---|---|---|---|----|
| 1 | 100% | 100% | 100% | 100% | 100% | 100% | 100% | 100% | 100% | 100% |
| 2 | 62.5% | 47.5% | 39.3% | 33.6% | 29.9% | 27.0% | 24.7% | 22.8% | 21.2% | 19.9% |
| 3 | 31.1% | 16.6% | 10.9% | 7.6% | 5.8% | 4.7% | 4.0% | 3.4% | 3.0% | 2.6% |
| 4 | 17.5% | 6.8% | 3.7% | 2.3% | 1.5% | 1.1% | 0.8% | 0.7% | 0.6% | 0.5% |
| 5 | 10.8% | 3.3% | 1.5% | 0.7% | 0.4% | 0.3% | 0.2% | 0.2% | 0.1% | 0.1% |
0.5:
| p\k | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10 |
|-----|---|---|---|---|---|---|---|---|---|----|
| 1 | 100% | 100% | 100% | 100% | 100% | 100% | 100% | 100% | 100% | 100% |
| 2 | 84.3% | 75.5% | 69.2% | 64.8% | 61.4% | 58.5% | 56.1% | 54.0% | 52.2% | 50.7% |
| 3 | 58.0% | 42.8% | 34.5% | 29.3% | 25.7% | 23.0% | 20.8% | 19.1% | 17.7% | 16.6% |
| 4 | 42.3% | 26.9% | 19.8% | 15.7% | 13.2% | 11.3% | 10.0% | 8.9% | 8.0% | 7.3% |
| 5 | 32.6% | 18.6% | 12.9% | 9.7% | 7.9% | 6.5% | 5.6% | 4.8% | 4.3% | 3.9% |
0.8:
| p\k | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10 |
|-----|---|---|---|---|---|---|---|---|---|----|
| 1 | 100% | 100% | 100% | 100% | 100% | 100% | 100% | 100% | 100% | 100% |
| 2 | 99.1% | 98.2% | 97.4% | 96.9% | 96.2% | 95.7% | 95.3% | 94.8% | 94.3% | 93.8% |
| 3 | 93.4% | 88.9% | 85.6% | 83.1% | 80.9% | 78.8% | 77.1% | 75.7% | 74.4% | 73.2% |
| 4 | 86.3% | 78.4% | 73.1% | 69.5% | 66.3% | 63.5% | 61.4% | 59.5% | 57.8% | 56.3% |
| 5 | 79.7% | 69.6% | 63.2% | 58.8% | 55.1% | 52.1% | 49.9% | 48.0% | 46.2% | 44.7% |
The table for 0.8 is probably not relevant for most pairs of traits people care about, but it could be relevant for the case where the parameters are estimates of the same variable.
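For reference, here is a sketch of the kind of simulation just described, assuming the parameters are jointly normal with unit variance and constant pairwise covariance. The function name and the (much smaller) sequence counts are my own choices; with more and longer sequences it should approach the tables above:

```python
import numpy as np

def settle_rate(p, k, cov, n_meet=20_000, n_seq=2_000, seed=0):
    """Fraction of simulated dating sequences in which, after rejecting
    the first k partners (k >= 1), we eventually meet someone better
    than all previous partners in every one of the p parameters at once."""
    rng = np.random.default_rng(seed)
    # Covariance matrix: unit variance, constant off-diagonal covariance.
    sigma = np.full((p, p), cov) + (1.0 - cov) * np.eye(p)
    settled = 0
    for _ in range(n_seq):
        draws = rng.multivariate_normal(np.zeros(p), sigma, size=n_meet)
        running_max = np.maximum.accumulate(draws, axis=0)
        # Person i (0-indexed, i >= 1) is a simultaneous record if they
        # exceed the running maximum over everyone met before them.
        records = np.all(draws[1:] > running_max[:-1], axis=1)
        # records[j] refers to person j + 1; settling requires a
        # simultaneous record at person index >= k.
        if records[k - 1:].any():
            settled += 1
    return settled / n_seq

print(settle_rate(p=2, k=1, cov=0.5))  # roughly 0.84 per the table above
```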
More about assumptions
These probabilities are under the assumption that you first choose the parameters X and Y that you care about and then start dating.
If instead you first find a life partner and only afterward choose which parameters to look at, you will of course be likely to find many parameters where they are the best you have ever met, since people vary in thousands of ways.
If instead you already have been dating, and then pick parameters that you have been missing in your partners, and then start looking for a partner who is record high on all those parameters, your odds may be better than suggested above. Conversely, if you pick some parameters that one of your exes was good at, your odds would be lower.
We assumed that the values of a partner's parameters were all revealed at the same time. Instead, we could consider a model where they are revealed one at a time: if a partner's X is not record high, you reject them before learning their Y. In this model, you have probability 1 of eventually meeting someone who has a better X than anyone else you have met and a higher Y than anyone whose Y has been revealed to you. However, the expected number of such record (X, Y)s among the first n people you meet now grows much more slowly: as ln(ln(n)), instead of ln(n) as in the one-parameter model. Similarly, for 3 parameters, the expected number of records grows as ln(ln(ln(n))), and so on. Even if you are only looking for one record, this could take a long time to find.
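A small sketch of this sequential-revelation model (the function name and trial counts are mine; the iterated-log growth shows up as a very slowly increasing average):

```python
import numpy as np

rng = np.random.default_rng(0)

def double_records(n):
    """Reveal X for each of n people in order; Y is revealed only when
    X sets a record. Count people whose X is a record among all X's
    seen so far AND whose Y beats every Y revealed so far."""
    x = rng.random(n)
    x_is_record = x == np.maximum.accumulate(x)  # True where X sets a record
    y = rng.random(int(x_is_record.sum()))       # the Y's that get revealed
    return int((y == np.maximum.accumulate(y)).sum())

for n in (10**3, 10**5, 10**6):
    avg = np.mean([double_records(n) for _ in range(300)])
    print(n, round(float(avg), 2), "ln(ln(n)) =", round(float(np.log(np.log(n))), 2))
# The average count increases very slowly with n, consistent with the
# ln(ln(n)) growth rate stated above.
```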
So what to do?
What is the takeaway life advice from this?
Don’t go for records on several parameters
This is the obvious one. I didn’t create this model because I consciously endorse this settling strategy, but because it might capture how I make decisions to reject when I’m too critical.
Try not to rank people
Despite having written a long post assuming that we can rank people according to various parameters, I think it is reasonably good advice to not do that. Or at least, that is a direction to aspire to. Meet people as individuals and try not to compare too much to your past. This post is not intended to say “this is how you should think about the people you are dating”. On the contrary, it is saying “there is a part of me that thinks like this. I should try to quiet that part a bit because if it gets too much say, I’m going to end up lonely”.
Change the distribution of partners you meet
If you want to meet someone who is an outlier in several ways compared to people you have met in the past, you need to change the distribution of people you are meeting. This could be either by trying to improve yourself or by looking for partners in a different place than you usually do.
There is also a winner's-curse risk: if a person seems too good, they could have some hidden disadvantage, or they may leave me quickly since they will have many better options than me. This puts a cap on how far above the median I should aim. The first few attempts therefore have to establish the median level of the partners available to me.
Another problem is that any trial run has a cost, such as years and money spent. If I search for too long, I will spend less time with my final partner.