It was not my intention to imply any hostility or resentment; I thought ‘anthropomorphic’ was valid terminology in such a discussion. I was also not agreeing with you. If you are an expert and were offended by my suggestion that what you said might be due to an anthropomorphic bias, then please accept my apology. I was merely trying to communicate my perception of the subject matter.
Just yesterday, wedrifid told me much the same: that my tone wasn’t appropriate when I wrote about his superior and rational use of the reputation system here, when I was actually just being honest. I’m not good at social signaling; sorry.
An optimizing system, given a path that leads it to bypass a constraint, will not necessarily discard that path. Why would it?
I think we are talking past each other. The way I see it, a constraint is part of the design specification of whatever is being optimized. Disregarding parts of that specification will not let the system optimize with maximal efficiency.
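To make that concrete, here is a minimal toy sketch (all numbers and names invented for illustration, not anyone’s proposed design) of the difference between checking a constraint outside the optimization and folding it into the objective:

```python
# Toy optimizer: brute-force search over candidate actions.
# The intended constraint is "stay in the safe region", modeled as |x| <= 5.

def reward(x):
    # The optimization target: more is better, unboundedly.
    return x * x

candidates = range(-10, 11)

# Model 1: constraint as an external failsafe, checked after optimization.
# The search never "sees" the constraint, so the optimum it finds is free
# to bypass it.
best = max(candidates, key=reward)
if abs(best) > 5:
    print(f"failsafe tripped: optimizer proposed x={best}")

# Model 2: constraint as part of the design specification, folded into
# the objective. Violations now score worse, so the optimizer has no
# incentive to pursue them.
def constrained_reward(x):
    penalty = 1000 if abs(x) > 5 else 0
    return reward(x) - penalty

best = max(candidates, key=constrained_reward)
print(f"optimum under the embedded constraint: x={best}")  # -5 or 5 is optimal
```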
Not an expert, and not offended.

What puzzled me was that I had said in the first place that I was reasoning by analogy to humans, and that this was a tricky thing to do; so when you classified this as anthropomorphic, my reaction was “well, yes, that’s what I said.”
Since it seemed to me you were repeating something I’d said, I assumed your intention was to agree with me, though it didn’t sound like it (and as it turned out, you weren’t).
And, yes, I’ve noticed that tone is a problem in a lot of your exchanges, which is why I’m basically disregarding tone in this one, as I said before.
The way I see it, a constraint is part of the design specification of whatever is being optimized.
Ah! In that case, I think we agree.
Yes, embedding everything we care about into the optimization target, rather than depending on something outside the optimization process to do important work, is the way to go.
You seemed to be defending the “failsafes” model, which I understand to be importantly different from this; I think that’s where the divergence came from. Apparently I (and, I suspect, some others) misunderstood what you were defending.
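For concreteness, a hedged toy sketch (everything in it, names and numbers alike, is hypothetical) of why a failsafe sitting outside the optimization does less work than embedding the same concern in the target: the failsafe only checks what it checks, so a wide enough search can satisfy the letter of the check while scoring high on the unconstrained target.

```python
# The search space: pairs of settings the optimizer may choose.
actions = [(x, y) for x in range(-10, 11) for y in range(-10, 11)]

def target(action):
    """What the system actually optimizes: bigger is better."""
    x, y = action
    return x * x + y * y

def failsafe(action):
    """External check: it only inspects x, leaving y unexamined."""
    x, _ = action
    return abs(x) <= 5

# "Optimize, then filter": every surviving action passes the failsafe,
# but the best of them maxes out the unexamined coordinate instead.
survivors = [a for a in actions if failsafe(a)]
print(max(survivors, key=target))  # e.g. (-5, -10): check passed, intent bypassed
```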
Sorry! Glad we worked that out, though.