Six months ago I thought CFAR was probably a bad idea. Now I think it’s worth the investment, and in the last two months I’ve been surprised in three major ways by the positive effects of CFAR work already done.
Three months ago I thought Amy + some professionals could organize the Summit without much work from the rest of SingInst; I no longer think that’s true.
Due to updates about simulation shutdown risk and the difficulty of FAI philosophy (I think it’s easier than I used to believe, though still very hard), I think an FAI team is a better idea than I thought four months ago.
I’ve downgraded my estimation of my own rationality several times in the past three months.
In the past two weeks I switched from thinking CFAR should be a “standard-template” non-profit to thinking it should be a non-profit that acts almost entirely like a for-profit company (it’s very complicated, but basically: it will scale better and faster that way).
I’ve updated several times in favor of thinking I can grow into a pretty good long-term executive.
Due to updates about simulation shutdown risk and the difficulty of FAI philosophy (I think it’s easier than I used to believe, though still very hard), I think an FAI team is a better idea than I thought four months ago.
Can you elaborate on this? Specifically, what did you learn about simulation shutdown risk, what do you mean by FAI team, and what does one have to do with the other?
Kawoomba bumped your comment to my attention, but unfortunately I don’t now recall the details of the updates you’re asking for more info about. (I don’t recall the “three major ways” I was positively surprised by CFAR, either.)
Six months ago I thought CFAR was probably a bad idea. Now I think it’s worth the investment, and in the last two months I’ve been surprised in three major ways by the positive effects of CFAR work already done.
I just updated in favor of CFAR being a misleading acronym. It took me a while to work out that it means Center For Applied Rationality, not this. That may become less significant once Google actually knows about it.
Bumping this comment.