I also think it’s fair to have low expectations here. Although I generally like SEP, I’ve had enough Gell-Mann amnesia moments with it that I now think of it differently.
It’s not really like a regular encyclopedia written by anonymous authors with a “view from nowhere”. Each article has a named author, and those authors are allowed to have clear authorial bias. There’s also a lot of selection bias: articles only exist because someone was willing to write them, and they count as publications. They’re also strangely both influential and not: lots of people read them, but (anecdotally) most of those readers avoid citing the SEP article itself and instead cite the work it references. As a result you get things like articles written by opponents of a position rather than proponents, simply because no proponent ever wrote the article, and those articles often fail to pass the ideological Turing test (ITT).
Combine this with the already low expectations for writing about AI safety topics in general, and it’s a pleasant surprise that this one turned out pretty well.