A couple of quick points about “reflective equilibrium”:
I just recently noticed that when philosophers (and at least some LWers including Yvain) talk about “reflective equilibrium”, they’re (usually?) talking about a temporary state of coherence among one’s considered judgements or intuitions (“There need be no assurance the reflective equilibrium is stable—we may modify it as new elements arise in our thinking”), whereas many other LWers (such as Eliezer) use it to refer to an eventual and stable state of coherence, for example after one has considered all possible moral arguments. I’ve personally always assumed the latter meaning, and as a result have misinterpreted a number of posts and comments that meant to refer to the former. This seems worth pointing out in case anyone else has been similarly confused without realizing it.
I often wonder and ask others what non-trivial properties we can state about moral reasoning (that is, beyond the fact that it must theoretically be some sort of algorithm). One thing I don’t think we know yet is whether, for any given human, their moral judgments/intuitions are guaranteed to converge to some stable and coherent set as time goes to infinity. It may well be the case that there are multiple eventual equilibria that depend on the order in which one considers arguments, or none at all if, for example, their conclusions keep wandering chaotically among several basins of attraction as they review previously considered arguments. So I think the singular term “reflective equilibrium” is currently unjustified when talking about someone’s eventual conclusions, and we should instead use “the possibly null set of eventual reflective equilibria”. (Unless someone can come up with a pithier term that has similar connotations and denotations.)
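To make the order-dependence worry concrete, here is a deliberately crude toy sketch (in Python; every number and update rule in it is invented for illustration, and nothing about it is meant as a model of actual moral cognition). The “judgment” is a single number that each argument nudges, after which it relaxes toward the nearer of two attractors of a double-well potential. The same set of arguments, considered in reverse order, can leave it settled in the other basin:

```python
# Toy sketch only: a one-dimensional "judgment" nudged by each argument,
# then relaxed toward the nearer attractor (-1 or +1) of the double well
# V(x) = (x^2 - 1)^2 / 4. All numbers and dynamics are made up.

def settle(judgment, steps=100, lr=0.1):
    """Relax toward the nearer attractor via gradient steps on V."""
    for _ in range(steps):
        judgment -= lr * (judgment ** 3 - judgment)
    return judgment

def consider(judgment, argument):
    """Nudge the judgment by one argument, then let it settle."""
    return settle(judgment + argument)

def final_view(arguments, start=0.05):
    j = start
    for a in arguments:
        j = consider(j, a)
    return round(j, 2)

args = [+0.8, -0.8, +0.3, -0.3]          # the same arguments...
print(final_view(args))                   # ...in this order end up near +1
print(final_view(list(reversed(args))))   # ...reversed, end up near -1
```

The point is just that nothing in a setup like this guarantees a unique limit: the same ingredients support two different stable end states, selected by the history of consideration.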
It may well be the case that there are multiple eventual equilibria that depend on the order in which one considers arguments
Another way to get several equilibria would be moral judgements whose “correctness” depends on whether other people share them. I find it likely that some judgements are like that, since you get such cases in social norms and laws (like which side of the road you drive on, or whether you should address strangers by their first or last name), and there’s a fuzzy continuum between laws, social norms, and morality.
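To spell that out with an equally toy example (again Python, again with everything invented purely for illustration): if the “correct” choice is simply whatever the majority does, best-response dynamics settle into one of two equally stable conventions, and which one you get depends only on where the population starts:

```python
# Toy sketch only: "correct" = "matches the majority convention".
# Two stable equilibria exist, and the starting distribution picks one.

def best_response_round(choices):
    majority = "LEFT" if choices.count("LEFT") >= len(choices) / 2 else "RIGHT"
    return [majority] * len(choices)   # everyone switches to the current majority

def converge(choices, rounds=10):
    for _ in range(rounds):
        choices = best_response_round(choices)
    return choices

pop_a = ["LEFT"] * 60 + ["RIGHT"] * 40
pop_b = ["LEFT"] * 40 + ["RIGHT"] * 60
print(set(converge(pop_a)))   # {'LEFT'}  -- one stable convention
print(set(converge(pop_b)))   # {'RIGHT'} -- the other, equally "correct"
```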