I appreciate your feedback and take it in the spirit it is intended. You are in no danger of shitting on my idea because it’s not my idea. It’s happening with or without me.
My idea is to cast a broad net looking for strategies for harm reduction and risk mitigation within these constraints.
I’m with you that machines practising medicine autonomously is a bad idea, and doctors agree: idealistically, because they got into this work to help people, and cynically, because they don’t want to be rendered redundant.
The primary focus looks like workflow management, not diagnoses. E.g. how to reduce the amount of time various requests sit in a queue by figuring out which humans are most likely the ones who should be reading them.
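To make that kind of queue triage concrete, here's a minimal sketch, not anything from a real system: free-text requests get routed to whichever role historically handled similar ones. The training data, role names, and model choice are all invented for the example.

```python
# Hypothetical sketch: route inbox items to the likeliest reviewer role so
# they spend less time sitting in a shared queue. Toy data, invented labels.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy historical data: request summaries and who ended up handling them.
requests = [
    "patient asking for refill of metformin",
    "abnormal potassium on this morning's labs",
    "insurance form needs physician signature",
    "question about wound care after discharge",
]
handled_by = ["pharmacist", "physician", "clerk", "nurse"]

# TF-IDF + logistic regression is a deliberately simple baseline;
# the point is routing work to people, not making diagnoses.
router = make_pipeline(TfidfVectorizer(), LogisticRegression(max_iter=1000))
router.fit(requests, handled_by)

print(router.predict(["refill request for lisinopril"]))  # likely 'pharmacist'
```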
There’s also predictive modelling, e.g. which patients are at elevated risk of bad outcomes, or how many nurses to schedule for a particular shift. Though these don’t necessarily need AI/ML, and long predate it.
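To underline that the “elevated risk” use-case doesn’t need anything fancy, here’s a toy points-style risk score of the kind that predates ML. The weights, features, and threshold are all made up for illustration.

```python
# Hypothetical sketch: a hand-weighted logistic scoring rule that flags
# patients for a human to review. Nothing here is a validated clinical score.
import math

def readmission_risk(age: int, prior_admissions: int, lives_alone: bool) -> float:
    """Return a 0-1 risk estimate from a simple logistic scoring rule."""
    score = -3.0 + 0.03 * age + 0.8 * prior_admissions + 0.5 * lives_alone
    return 1.0 / (1.0 + math.exp(-score))

# Flag patients above an arbitrary threshold; the output is a prompt for
# human attention, not a decision.
patients = [("A", 82, 3, True), ("B", 45, 0, False)]
for name, age, prior, alone in patients:
    risk = readmission_risk(age, prior, alone)
    print(name, round(risk, 2), "flag for review" if risk > 0.5 else "routine")
```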
Then there are auto-suggestor/auto-reminder use-cases: “You coded this patient as having diabetes without complications, but the text notes suggest diabetes with nephropathy, are you sure you didn’t mean to use that more specific code?”
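A naive sketch of that kind of suggestion might look like the following. The keyword list and code mappings are illustrative (a real system would use proper clinical NLP and a curated terminology), and crucially it only asks the question; it never changes the code itself.

```python
# Hypothetical sketch of the auto-suggestor: compare the assigned diagnosis
# code against terms in the free-text note and, when they disagree, prompt
# the human. Mappings are illustrative, not a complete or authoritative list.
from typing import Optional

MORE_SPECIFIC = {
    # assigned code -> [(trigger keyword in note, suggested more-specific code)]
    "E11.9": [("nephropathy", "E11.21")],
}

def suggest_code(assigned: str, note_text: str) -> Optional[str]:
    """Return a suggested more-specific code, or None if nothing to flag."""
    note = note_text.lower()
    for keyword, candidate in MORE_SPECIFIC.get(assigned, []):
        if keyword in note:
            return candidate
    return None

note = "Longstanding T2DM, now with worsening diabetic nephropathy."
suggestion = suggest_code("E11.9", note)
if suggestion:
    print(f"Coded E11.9; the note mentions a complication. Did you mean {suggestion}?")
```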
So, at least in the short term, AI apps will not have the opportunity to screw up in the immediately obvious ways like incorrect diagnoses or incorrect orders. It’s the more subtle screw-ups that I’m worried about at the moment.