I’m wondering how much flexibility any of us have in really changing our internal satisfaction points.
Probably not much.
For me, reasoning “this is really for this purpose, so I can bypass it” …
This is what I was having trouble with. Knowing a better way to accomplish a bias's goals, and why the bias works the way it does, seems like a convincing argument against following it, but the argument breaks down for other things that are closer to values.
I’ve solved the problem for myself by dissolving the qualitative distinction between bias and value. Put them all on a single bias-value space, arranged by how much we like each one and how much it interferes with satisfying the other biases/values. If something interferes a lot (like a cognitive error), we call it a bias, because following it lowers total value; if something interferes little and seems really important (like love or beauty), we call it a value. These labels are fuzzy and transient; the desire for beauty may become a bias when designing a system that may be harmed by beauty.
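To make the spectrum concrete, here is a minimal sketch of the scoring idea above, assuming we can put rough numbers on "how much we like" a disposition and "how much it interferes" with the others. The Disposition class, the example numbers, and the zero cutoff are all illustrative assumptions, not anything from the original comments.

```python
# Illustrative sketch of a bias-value spectrum: score each disposition
# by liking minus interference ("value minus sabotage"), then let the
# bias/value label fall out of the score. All values are made up.

from dataclasses import dataclass


@dataclass
class Disposition:
    name: str
    liking: float        # how much we like it, 0..1
    interference: float  # how much it sabotages other dispositions, 0..1

    def score(self) -> float:
        # Positive scores lean value-like, negative scores lean bias-like.
        return self.liking - self.interference

    def label(self) -> str:
        # Fuzzy, context-dependent label; the zero cutoff is arbitrary.
        return "value" if self.score() > 0 else "bias"


dispositions = [
    Disposition("anchoring", liking=0.1, interference=0.8),
    Disposition("love of beauty", liking=0.9, interference=0.2),
]

for d in dispositions:
    print(f"{d.name}: score={d.score():+.1f} -> {d.label()}")
```

The point of the sketch is only that the label is derived from the score, so the same disposition can flip from value to bias as its interference changes with context.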
One approach is to make this the definition of the difference between bias and value.
See the new conclusion on the OP.
This is a good idea, but I’m leaning towards dissolving the difference entirely in favor of a bias-value spectrum scored by something like value minus sabotage.