For people, is there a meaningful difference between the two?
Of course. A stroke, for example, is a purely hardware problem. In more general terms, hardware = brain and software = mind.
“but I’ve judged them already, why should I do it twice?”
I said I will update on the evidence. The difference seems to be that you consider that insufficient—you want me to actively seek new evidence and I think it’s rarely worthwhile.
A stroke, for example, is a purely hardware problem. In more general terms, hardware = brain and software = mind.
I don’t think this is a meaningful distinction for people. People can (and often do) have personality changes (and other changes of ‘mind’) after a stroke.
I don’t think this is a meaningful distinction for people.
You don’t think it’s meaningful to model people as having a hardware layer and a software layer? Why?
People can (and often do) have personality changes (and other changes of ‘mind’) after a stroke.
Why are you surprised that changes (e.g. failures) in hardware affect the software? That seems to be the way these things work, both in biological brains and in digital devices. In fact, humans are unusual in that for them the causality goes both ways: software can and does affect the hardware, too. But hardware affects the software in pretty much every situation where it makes sense to speak of hardware and software.
In more general terms, hardware = brain and software = mind.
Echoing the others, this is more dualistic than I’m comfortable with. It looks to me that in people, you just have ‘wetware’ that is both hardware and software simultaneously, rather than the crisp distinction that exists between them in silicon.
you want me to actively seek new evidence and I think it’s rarely worthwhile.
Correct. I do hope that you noticed that this still relies on a potentially biased judgment (“I think it’s rarely worthwhile” is a counterfactual prediction about what would happen if you did apply the PoC), but beyond that I think we’re at mutual understanding.
Echoing the others, this is more dualistic than I’m comfortable with
To quote myself, we’re talking about “model[ing] people as having a hardware layer and a software layer”. And to quote Monty Python, it’s only a model. It is appropriate for some uses and inappropriate for others. For example, I think it’s quite appropriate for a neurosurgeon. But it’s probably not as useful for thinking about biofeedback, to give another example.
I do hope that you noticed that this still relies on a potentially biased judgment
Of course, but potentially biased judgments are all I have. They would still be all I have even if I diligently applied the PoC everywhere.
Huh, I don’t think I’ve ever understood that metaphor before. Thanks. It’s oddly dualist.