perhaps she has social anxiety or paranoia she’s trying to overcome
That’s not the case where she shouldn’t trust her hardware—that’s the case where her software has a known bug.
In order to confirm or disconfirm that belief, I need to alter my behavior; if I don’t greet Mallory, then I don’t get any evidence!
Sure, so you have to trade off your need to discover more evidence against the cost of doing so. Sometimes it’s worth it, sometimes not.
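To make that trade-off concrete, here is a toy value-of-information sketch; every number in it is invented purely for illustration (the cost of the greeting, the chance the belief is wrong, the payoff of correcting it), not a claim about real values:

```python
# Toy value-of-information sketch with invented numbers: is it worth
# altering my behavior (greeting Mallory) just to gather evidence?
cost_of_test = 1.0           # effort/awkwardness of the greeting (assumed)
p_belief_wrong = 0.2         # chance my current belief is mistaken (assumed)
value_if_corrected = 10.0    # payoff of fixing a wrong belief (assumed)

# Expected gain from testing = chance of learning something * value of learning it
expected_gain = p_belief_wrong * value_if_corrected  # 2.0 in these units
print(expected_gain > cost_of_test)  # True here; raise the cost and it flips
```

With these numbers the test is worth running; scale the cost up or the payoff down and it flips, which is all "sometimes it's worth it, sometimes not" amounts to.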
where the PoC argues that you’ve probably overestimated the probability that you’ll waste effort.
Really? For a randomly sampled person, my prior already is that talking to them will be wasted effort. And if in addition to that they offer evidence of stupidity, well… I think you underappreciate opportunity costs: there are a LOT of people around and most of them aren’t very interesting.
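Concretely, the kind of update I have in mind looks something like this; the prior and the likelihoods are invented for illustration, not measurements:

```python
# Toy Bayes update with invented numbers: prior that a conversation with a
# randomly sampled person pays off, updated on one piece of negative evidence.
prior = 0.05                 # P(worth talking to) for a random person (assumed)
p_ev_if_worth = 0.10         # P(says something stupid | worth it) (assumed)
p_ev_if_not = 0.60           # P(says something stupid | not worth it) (assumed)

# Bayes' rule: P(worth it | said something stupid)
posterior = (p_ev_if_worth * prior) / (
    p_ev_if_worth * prior + p_ev_if_not * (1 - prior)
)
print(f"posterior = {posterior:.3f}")  # ~0.009: the low prior drops further
```

Negative evidence pushes an already-unpromising prior down even further, which is why I don’t feel obliged to keep gathering more of it.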
I think that “don’t be an idiot” is far too terse a package.
Yes, but properly unpacking it will take between one and several books at best :-/
That’s not the case where she shouldn’t trust her hardware—that’s the case where her software has a known bug.
For people, is there a meaningful difference between the two? The primary difference between “your software is buggy” and “your hardware is untrustworthy” that I see is that the first suggests the solution is easier: just patch the bug! It is rarely enough just to know that the problem exists, or what steps you should take to overcome it; generally one must train oneself into being someone who copes effectively with the problem (or, rarely, into someone who does not have the problem).
I think you underappreciate opportunity costs: there are a LOT of people around and most of them aren’t very interesting.
I agree there are opportunity costs; I see value in walled gardens. But just because there is value doesn’t mean you’re not overestimating that value, and we’re back to my root issue: your response to “your judgment of other people might be flawed” seems to be “but I’ve judged them already, why should I do it twice?”
Yes, but properly unpacking it will take between one and several books at best :-/
Indeed; I have at least a shelf (and growing) devoted to decision-making and ameliorative psychology.
For people, is there a meaningful difference between the two?
Of course. A stroke, for example, is a purely hardware problem. In more general terms, hardware = brain and software = mind.
“but I’ve judged them already, why should I do it twice?”
I said I will update on the evidence. The difference seems to be that you consider that insufficient—you want me to actively seek new evidence and I think it’s rarely worthwhile.
A stroke, for example, is a purely hardware problem. In more general terms, hardware = brain and software = mind.
I don’t think this is a meaningful distinction for people. People can (and often do) have personality changes (and other changes of ‘mind’) after a stroke.
I don’t think this is a meaningful distinction for people.
You don’t think it’s meaningful to model people as having a hardware layer and a software layer? Why?
People can (and often do) have personality changes (and other changes of ‘mind’) after a stroke.
Why are you surprised that changes (e.g. failures) in hardware affect the software? That seems to be the way these things work, both in biological brains and in digital devices. In fact, humans are unusual in that for them the causality goes both ways: software can and does affect the hardware, too. But hardware affects the software in pretty much every situation where it makes sense to speak of hardware and software.
In more general terms, hardware = brain and software = mind.
Echoing the others, this is more dualistic than I’m comfortable with. It seems to me that in people you just have ‘wetware’ that is both hardware and software simultaneously, rather than the crisp distinction between the two that exists in silicon.
you want me to actively seek new evidence and I think it’s rarely worthwhile.
Correct. I do hope that you noticed that this still relies on a potentially biased judgment (“I think it’s rarely worthwhile” is a counterfactual prediction about what would happen if you did apply the PoC), but beyond that I think we’re at mutual understanding.
Echoing the others, this is more dualistic than I’m comfortable with
To quote myself, we’re talking about “model[ing] people as having a hardware layer and a software layer”. And to quote Monty Python, it’s only a model. It is appropriate for some uses and inappropriate for others. For example, I think it’s quite appropriate for a neurosurgeon. But it’s probably not as useful for thinking about biofeedback, to give another example.
I do hope that you noticed that this still relies on a potentially biased judgment
Of course, but potentially biased judgments are all I have. They would still be all I have even if I were to diligently apply the PoC everywhere.
Huh, I don’t think I’ve ever understood that metaphor before. Thanks. It’s oddly dualist.