A Personal Rationality Wishlist

At one point I compiled a list of conundrums relating to rationality that come up in my life. Instead of solving them, I thought I’d write up a selection of them, since that’s easier and maybe other people will solve them.
Punishing honesty vs no punishment
In some cases, you might want people to comply with some rule that they might otherwise wish to break, but the only way to check if they have complied is to ask them and hope that they’re honest (or perhaps there’s another, much more expensive, way to check). Examples:
A sperm bank might only want donors without congenital abnormalities that they might not be able to easily observe or test for.
I might not want my housemates to go into my room and look at all my stuff when I’m not there.
There’s a dilemma: how should one enforce such a rule? If you just ask people, and punish them if they say that they didn’t comply, then you’re incentivising people to lie to you. But if you don’t ask, the rule doesn’t get enforced. Abstractly, it seems like you just can’t enforce such a rule at all; yet in practice people often do manage to be honest even in the face of punishment, so not all hope is lost. How should I think about these situations? In practice, how should I decide on the enforcement mechanism?
According to David Friedman’s recent book on legal systems, in saga-period Iceland there was a much larger penalty for killing somebody if you failed to confess as soon as was practical. This suggests one solution: estimate the likelihood that a violation of the rule is discovered conditional on the violator being dishonest, and set the punishment for concealed violations high enough that it’s worth it for rule violators to confess. But this leaves open the questions of how in practice to estimate that probability, how to calculate the appropriate punishment level, and how much effort to put into detecting rule violations when nobody has confessed to one.
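For concreteness, here is a minimal sketch of that incentive calculation (the numbers and the function name are placeholders I’ve made up, not anything from Friedman’s book): a risk-neutral violator prefers confessing whenever the certain penalty for confessing is smaller than the expected penalty for staying silent.

```python
# Illustrative sketch: when does a rule-violator prefer to confess?
# All numbers here are hypothetical placeholders, not figures from Friedman's book.

def min_concealment_penalty(confession_penalty: float, p_detect_if_silent: float) -> float:
    """Smallest penalty for a concealed-but-discovered violation that makes
    confessing the (weakly) better option for a risk-neutral violator.

    Confess:     pay confession_penalty for certain.
    Stay silent: pay concealment_penalty with probability p_detect_if_silent.
    Confessing is preferred whenever
        confession_penalty <= p_detect_if_silent * concealment_penalty.
    """
    return confession_penalty / p_detect_if_silent


# Hypothetical: silent violators are caught 20% of the time, and a confessed
# violation costs 10 units; concealment must then cost at least 50 units.
print(min_concealment_penalty(confession_penalty=10, p_detect_if_silent=0.2))  # 50.0
```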
‘The anime thing’
Once, a friend of mine observed that he couldn’t talk about how he didn’t like anime without a bunch of people rushing in to tell him that anime was actually good and recommending anime for him to watch, even when he explicitly asked them not to. Similarly, another friend of mine went to a coding bootcamp, only to discover that she intensely disliked coding and would basically be unable to do it as a career, causing her to decide to switch back to her previous, worse-paying job. When she talked about this, other people would often suggest coding jobs for her to take, or remind her that coding pays much better than her other options.
I think that the responses that my friends received are instances of the same phenomenon, which I’ll call ‘the anime thing’ (since I came across the anime example first and don’t have a better name). Why does the anime thing happen? In what other situations might it happen? If one wanted it not to happen, how would one go about that?
When and how to increase neuroticism
Many people have advice on how to become more relaxed, calm, and happy. But presumably it’s possible to be too relaxed, calm, and/or happy, in which case one should instead be more anxious, angry, and/or sad. How can I tell when this is the case, and what should I do to increase my neuroticism in the moment? Or could it really be true that humans are universally biased towards feeling too many unpleasant emotions?
Virtue of bicycles
It seems to me that bicycles are unusually wonderful devices.
You can just look at them with your eyes, think a little, and then you’ll know basically how they work.
They are very efficient at converting energy into forward motion.
By making transportation easier, they make people more free in one of the most concrete ways possible.
They let you go very fast, while still being in full contact with the air and ground.
I want more of that in my life. How should I get it? Should I be deriving any deep lessons from how great bicycles are?
Does my sleepy self know whether I should be sleeping?
When I’ve just woken up from sleeping, often I’ll have a strong impression that it would be a good idea to go back to sleep, or at least stay in bed and daydream. It seems plausible that this is a bad idea—as Marcus Aurelius reminded himself in his journal:
At dawn, when you have trouble getting out of bed, tell yourself: “I have to go to work—as a human being. What do I have to complain of, if I’m going to do what I was born for—the things I was brought into the world to do? Or is this what I was created for? To huddle under the blankets and stay warm?”
So you were born to feel “nice”? Instead of doing things and experiencing them? Don’t you see the plants, the birds, the ants and spiders and bees going about their individual tasks, putting the world in order, as best they can? And you’re not willing to do your job as a human being? Why aren’t you running to do what your nature demands?
You don’t love yourself enough. Or you’d love your nature too, and what it demands of you.
On the other hand, I gather that sleep is in fact important for us biological humans. And probably the way my body lets me know that is by making me sleepy.
On the third hand, I just woke up of my own accord (I rarely perceive my waking up as being due to light or sound), which you’d think would be a sign that now is a good time to be awake. I know my waking self can be wrong about whether or not I should be awake, so why should my sleeping self be any different? Also, when I’ve just woken up, I am in some important senses less intelligent than at literally any other waking moment.
Unfortunately, thinking hard about this problem in the moment makes it harder to get back to sleep, meaning that a policy-level solution is necessary. The solution is likely ‘try both ways for a week each, and see how you do on a cognitive battery’, but it would be nice to reason out the answer from first principles.
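If I did run that experiment, the analysis could be as simple as the following sketch (the scores below are invented, and the cognitive test itself is left unspecified): record a daily score under each policy and compare the two samples.

```python
# Illustrative sketch of analysing the 'get up vs. go back to sleep' experiment.
# The scores are invented; in practice they would come from a daily cognitive test.
from statistics import mean, stdev

get_up_scores = [71, 68, 74, 70, 69, 73, 72]         # days I got up immediately
back_to_sleep_scores = [66, 75, 70, 72, 68, 71, 74]  # days I went back to sleep

def summarise(name: str, scores: list[int]) -> None:
    print(f"{name}: mean={mean(scores):.1f}, sd={stdev(scores):.1f}, n={len(scores)}")

summarise("get up immediately", get_up_scores)
summarise("go back to sleep", back_to_sleep_scores)
# With only a week per condition, the difference would have to be large to stand
# out from day-to-day noise, so this is at best a rough tie-breaker.
```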