Due to updates about simulation shutdown risk and the difficulty of FAI philosophy (I think it’s easier than I used to believe, though still very hard), I think an FAI team is a better idea than I thought four months ago.
Can you elaborate on this? Specifically, what did you learn about simulation shutdown risk, what do you mean by FAI team, and what does one have to do with the other?
Kawoomba bumped your comment to my attention, but unfortunately I don’t now recall the details of the updates you’re asking for more info about. (I don’t recall the “three major ways” I was positively surprised by CFAR, either.)