I think this is true and good advice in general, but recently I’ve been thinking that there is a class of value-like claims which are more reliable. I will call them error claims.
When an optimized system does something bad (e.g. a computer program crashes when trying to use one of its features), one can infer that this badness is an error (e.g. caused by a bug). We could perhaps formalize this as saying that it is a difference from how the system would ideally act (though I think this formalization is intractable in various ways, so I suspect a better formalization would be something along the lines of “there is a small, sparse change to the system which can massively improve this outcome”—either way, it’s clearly value-laden).
The main way of reasoning about error claims is that an error must itself always be caused by an error. Staying with the example of the bug: you typically first reproduce it, and then backchain through the code until you find a place to fix it.
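This reproduce-then-backchain loop can be sketched in a toy example. The function names and the bug here are hypothetical, chosen only to illustrate the pattern: the observed error (a crash) is traced backward to the erroneous line that caused it, where a small, sparse fix suffices.

```python
# Hypothetical example of "reproduce, then backchain".
# The downstream error we observe: total() crashes on some inputs.

def parse_price(text):
    # Backchaining from the crash led here: int() raised ValueError on
    # inputs like "$5". The sparse fix is stripping the currency symbol.
    return int(text.strip("$"))

def total(prices):
    return sum(parse_price(p) for p in prices)

# Step 1: reproduce -- this call raised ValueError before the fix.
# Step 2: backchain -- the traceback from total() pointed through
# parse_price() to the int() call, which is where the error lived.
print(total(["$5", "3"]))  # 8
```

Note that the error claim ("this crash is a bug") was objective and verifiable well before the cause was found; the backchaining is just how one locates the upstream error that must exist.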
For an intentionally designed system that’s well-documented, error claims are often directly verifiable and objective, based on how the system is supposed to work. Error claims are also less subject to the memetic driver, since often it’s less relevant to tell non-experts about them (though error claims can degenerate into less-specific value claims and become memetic parasites that way).
(I think there’s a dual to error claims that could be called “opportunity claims”, where one says that there is a sparse good thing which could be exploited using dense actions? But opportunity claims don’t seem as robust as error claims are.)