Coming back to this, I realize that though I summarized this post in Alignment Newsletter #33, I didn’t post my opinion here, so doing that now:
The model in Racing to the precipice had the unintuitive conclusion that if teams have more information (i.e., they know their own or others' capabilities), then we become less safe, which puzzled me for a while. Their explanation is that with maximal information, the top team takes as much risk as necessary to guarantee that it beats the second team, which can be quite a lot of risk if the two teams are close. While this is true, the explanation from this post is more satisfying—since the model has a discontinuity that rewards taking on risk, anything that removes the discontinuity and makes the payoff more continuous will likely improve the prospects for safety, such as not having full information. I claim that in reality these discontinuities mostly don't exist, since (1) we're uncertain about who will win and (2) we will probably have a multipolar scenario where even if you aren't first-to-market you can still capture a lot of value. This suggests that it likely isn't a problem for teams to have more information about each other on the margin.
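To make the discontinuity point concrete, here is a minimal toy model of my own (not the paper's actual setup—every function and parameter below is a hypothetical illustration): a team picks a risk level that boosts its effective capability but destroys all value with that probability, and we compare the risk level that maximizes expected payoff under a discontinuous winner-take-all prize versus a continuous, multipolar market share.

```python
import numpy as np

def best_risk(payoff, gap):
    """Grid-search the risk level in [0, 1] that maximizes expected payoff.

    `gap` is how far ahead the rival is; taking risk r closes the gap.
    """
    rs = np.linspace(0, 1, 1001)
    return rs[np.argmax([payoff(r, gap) for r in rs])]

def winner_take_all(r, gap):
    # Discontinuous payoff: you capture value 1 only if you out-race the
    # rival, so the team takes exactly as much risk as needed to win.
    wins = r > gap                 # need enough risk to close the gap
    return wins * (1 - r)          # expected value = win * survival prob

def multipolar(r, gap):
    # Continuous payoff: even a laggard captures some value, varying
    # smoothly with relative performance (a logistic "market share").
    share = 1 / (1 + np.exp(-2 * (r - gap)))
    return share * (1 - r)

gap = 0.6  # the rival is well ahead
print(best_risk(winner_take_all, gap))  # pushed up to just past the gap
print(best_risk(multipolar, gap))       # noticeably lower
```

With the discontinuous prize, the optimal risk is pinned at just above the capability gap, however large; with the continuous share, the marginal value of extra risk falls off smoothly and the optimum sits well below the gap. That is the qualitative mechanism: smoothing the payoff removes the incentive to take exactly as much risk as winning requires.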
That said, these models are still very simplistic, and I mainly try to derive qualitative conclusions from them that my intuition agrees with in hindsight.
I think this is a great summary (EDIT: this should read “I think the summary in the newsletter was great”).
That said, these models are still very simplistic, and I mainly try to derive qualitative conclusions from them that my intuition agrees with in hindsight.
Yes, I agree. The best indicator I had of having made a mathematical mistake was whether my intuition agreed in hindsight.
(FYI, this was only my opinion; the summary is in the newsletter. I usually don't post the summary on the post itself, since it typically repeats the post in a manner that doesn't generate new insights.)