(Sorry for delay, and thanks for the formatting note.)
My knowledge of machine medicine is not very up to date, but I did get to play with some of the commercially available systems, and I wasn’t hugely impressed. There may be much more impressive results yet to be released commercially, but (appealing back to my priors) I think I would have heard of them, as they would be a game-changer for global health. Also, if fairly advanced knowledge work of primary care can be done by computer, I’d expect a lot of jobs without the protective features of medicine to be automated.
I agree that machine medicine along the lines you suggest will be superior to human performance, and I anticipate this will be achieved fairly soon (even if I am right and it hasn’t already happened). I think medicine will survive less by the cognitive skill required than through technical facility and social interaction, where machines comparably lag (of course, I anticipate they will steadily get better at this too).
I grant a Hansonian account can accommodate the sort of ‘guided by efficacy’ data I suggest via ‘pretending to actually try’ considerations, but I would suggest this almost becomes an epicycle: any data which supports medicine being about healing can be explained away by the claim that practitioners are only pretending to be about healing as a circuitous route to signalling. I would say the general ethos of medicine (EBM, the proliferation of trials) looks like a pro tanto reason in favour of medicine being about healing, and that divergence from this (e.g. what happened to Semmelweis, other lags) is better explained by doctors being imperfect and selfish, and patients irrational, rather than by both parties adeptly following a signalling account.
But I struggle to see what evidence could neatly distinguish between these cases. If you have an idea, I’d be keen to hear it. :)
I agree with the selection worry re. Metamed’s customers: they were presumably selected from people whom modern medicine didn’t help, which may have effects of its own (not to mention making Metamed’s task harder, as their pool will be harder to treat than the unselected-for-failure cases who see a doctor ‘first line’). I’d also (with all respect meant to the staff of Metamed) suggest that Metamed’s staff may not be the most objective sources on why it failed: I’d guess people would prefer to say their startup failed because of the market or poor product-market fit rather than ‘actually, our product was straight worse than our competitors’’.
I’m not sure there’s much of a difference between the “doctors care about healing, but run into imperfection and selfishness” interpretation and the “doctors optimize for signalling, but that requires some healing as a side effect” interpretation, besides which piece goes before the ‘but’ and which goes after.
The main difference I do see is that if ‘selfishness’ means ‘status’, then we might see different defection than if ‘selfishness’ means ‘greed’. I’m not sure there’s enough difference between them for a clear comparison to be made, though. Greedy doctors will push patients toward costly but unnecessary procedures, but status-seeking doctors will also push patients toward costly but unnecessary procedures, because doing so makes them seem more important and necessary.