On circularity and what wins, the crux to me in spots like this is whether you do better by genuinely fooling yourself and actually assuming you can solve the problem, or by adopting an attitude like ‘I am going to attack this problem as if it is solvable’ while not forgetting, in case it matters elsewhere, that you don’t actually know that. In some cases, which I think include this one, that distinction matters a lot; in others, not so much. I think we agree that you want to at least do that second one in many situations, given typical human limitations.
My current belief is that fooling oneself for real is at most second-best as a solution, unless you are being punished/rewarded via interpretability.
On circularity and what wins, the crux to me in spots like this is whether you do better by genuinely fooling yourself and actually assuming you can solve the problem
As per my comment, I think “fooling yourself” is the wrong ontology here; it’s more like “devote x% of your time to thinking about what happens if you fail,” where x is very small. (Analogously, someone with a strong growth mindset might only rarely consider what happens if they can’t ever do better than they’re currently doing, but wouldn’t necessarily deny that it’s a possibility.)
Or another analogy: what percentage of their time should a startup founder spend thinking about whether or not to shut down their company? At the beginning, almost zero. (They should plausibly spend a lot of time figuring out whether to pivot or not, but I expect Ilya also does that.)
That is such an interesting example, because if I had to name my biggest mistake (and there were many) when founding MetaMed, it was failing to think enough about whether to shut down the company, and instead doing what I could to keep it going rather than letting things fail gracefully (or, if possible, taking what I could get). We did think a bunch about various pivots.
Your proposed ontology is strange to me, but I suppose one can hold positions like ‘I don’t know and don’t have a guess’ so long as they need not impact one’s behavior.
Whether or not it makes sense for Ilya to think about what happens if he fails is a good question. In some ways it seems very important for him to be aware he might fail and to ensure that such failure is graceful if it happens. In others, it’s fine to leave that to the future or someone else. I do want him aware enough to check for the difference.
With growth mindset, I try to cultivate it a bunch, but it is also important to recognize where growth is too expensive to be worth it or actually impossible. For me, for example, that meant giving up on trying to learn foreign languages.