ah yeah, so the claim is something like 'if we think other humans have "bad values", maybe in fact our values are the same and one of us is mistaken, and we'll get less mistaken over time'?
I tend to want to split “value drift” into “change in the mapping from (possible beliefs about logical and empirical questions) to (implied values)” and “change in beliefs about logical and empirical questions”, instead of lumping both into “change in values”.
I guess I was kind of subsuming this into ‘benevolent values have become more common’