I’m not sure I actually understand any of your comments. Can you clarify?
Why should you care about the imputed purpose, instead of directly optimizing your satisfaction, serving those purposes only as necessary?
This is generally my philosophy about these things. But then I find myself deciding that justice is a stupid heuristic for maintaining social order, and that line of thought carries through to imply that things I actually value are stupid heuristics too. So the point here is that something must be missing from my understanding of why I’m convinced that some things are undesirable biases, because my current understanding implicates values as well.
Really it just comes down to what I value, but I think I ought to be able to understand it.
Sorry I couldn’t be clearer. I try to have something definite in mind whenever I write, but I don’t do a good job communicating the complete context.
I’m wondering how much flexibility any of us have in really changing our internal satisfaction points. For me, reasoning “this is really for this purpose, so I can bypass it” seems only about as plausible as any other placebo belief; thus my emphasis on actually trying to live that way for a while, rather than forming elaborate beliefs about how we should work.
It’s true that there’s lots of variation between individuals in what self-concepts they mark as important. And some people seem to be genuinely plastic—amenable to introspective self-therapy. Those few who are can end up in interesting places if they’re intelligent and striving toward improving the world, or even just their understanding of it. But as with hypnosis, I always wonder: is the explanation merely in convincing people that they’re changed, or are they really changed?
I’m wondering how much flexibility any of us have in really changing our internal satisfaction points.
Probably not much.
For me, reasoning “this is really for this purpose, so I can bypass it” …
This is what I was having trouble with. Knowing a better way to accomplish a bias’s goals, and why the bias works the way it does, seems like a convincing argument against it; but the same reasoning breaks down on other things that are closer to values.
I’ve solved the problem for myself by dissolving the qualitative distinction between bias and value. Put them all in a bias-value space, arranged by how much we like each one and how much it interferes with achieving the other biases/values. If something interferes a lot (like a cognitive error), we call it a bias, because following it lowers total value; if something doesn’t interfere with much and seems really important (like love or beauty), we call it a value. These labels are fuzzy and transient; a desire for beauty may become a bias when designing a system that may be harmed by beauty.
For me, reasoning “this is really for this purpose, so I can bypass it” …
This is what I was having trouble with. Knowing a better way to accomplish a bias’s goals, and why the bias works the way it does, seems like a convincing argument against it; but the same reasoning breaks down on other things that are closer to values.
One approach is to make this the definition of the difference between bias and value.
See the new conclusion on the OP.
One approach is to make this the definition of the difference between bias and value.
This is a good idea, but I’m leaning towards dissolving the difference in favor of a bias-value spectrum based on value minus sabotage or something.