I genuinely think it could be one of the most harmful and dangerous ideas known to man. I consider it to be a second head on the hydra of AI/LLMs.
Consider that we already have multiple scandals of fake research coming out of prestigious universities (papers cited by other papers, and so on). This builds an entire tree of fake knowledge which, if left unaudited, would be taken as a legitimate epistemological foundation on which to teach future students, scholars and practitioners.
Now imagine applying this to something like healthcare. Instead of human eyes that scan over the information, absorb it and adapt accordingly (and while humans do make mistakes, they are usually for reasons other than pre-programmed generalisations), we have an AI/LLM. Such a system may be correct 80% of the time in analysing whatever cancer growth or disease it has been trained on over millions of generations. What about the other 20%?
What implications does this have for insurance claims, where an AI makes a presumption about the degree of risk in a person’s health built on flawed data? What impact does this have on triage? Who takes responsibility when the AI makes a mistake? (And I know of no well-regarded legal practitioner who has yet substantively tackled this question in law.)
It’s also pretty clear that AI companies don’t give a damn about privacy. They may claim to, but they don’t. At the end of the day, these companies are fortified behind oppressive terms & conditions, layers of technicalities, and huge all-star legal teams that take hundreds of thousands of dollars to defeat, at minimum. Accountability is an ideal put beyond reach by strong-arm litigation against the ‘little guy’, i.e. the average citizen.
I’m not shitting on your idea. I’m merely outlining the reality of things at the moment.
When it comes to AI: what can be used for evil will be used for evil.
I appreciate your feedback and take it in the spirit in which it was intended. You are in no danger of shitting on my idea, because it’s not my idea. It’s happening with or without me.
My idea is to cast a broad net looking for strategies for harm reduction and risk mitigation within these constraints.
I’m with you that machines practising medicine autonomously is a bad idea, and doctors agree: idealistically, because they got into this work to help people, and cynically, because they don’t want to be rendered redundant.
The primary focus looks like workflow management, not diagnoses. E.g. how to reduce the amount of time various requests sit in a queue by figuring out which humans are most likely the ones who should be reading them.
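To make that concrete, here is a minimal sketch of what that kind of queue routing could look like. Everything in it is assumed for illustration only: the `Clinician` fields, the specialty/queue-length scoring, and the staff list are made up, not taken from any real system.

```python
# Minimal sketch of queue routing: score each clinician for an incoming
# request and pick the best match, so requests spend less time waiting
# on the wrong desk. Field names and the scoring rule are illustrative.
from dataclasses import dataclass

@dataclass
class Clinician:
    name: str
    specialties: set
    queue_length: int  # items already waiting for this person

def route_request(request_specialty: str, clinicians: list) -> Clinician:
    """Pick the clinician with the best (specialty match, shortest queue) score."""
    def score(c: Clinician) -> tuple:
        specialty_match = request_specialty in c.specialties
        # Prefer a specialty match first, then the shortest queue.
        return (not specialty_match, c.queue_length)
    return min(clinicians, key=score)

staff = [
    Clinician("Dr. A", {"cardiology"}, queue_length=12),
    Clinician("Dr. B", {"endocrinology"}, queue_length=3),
    Clinician("Dr. C", {"cardiology", "internal medicine"}, queue_length=5),
]
print(route_request("cardiology", staff).name)  # -> Dr. C
```

The point of the sketch is that the model only reorders a queue; a human still reads every request.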
There’s also predictive modelling, e.g. which patients are at elevated risk of bad outcomes, or how many nurses to schedule for a particular shift, though these don’t strictly need AI/ML and long predate it.
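As a toy illustration of the “long predates AI/ML” point, here is a hand-weighted, early-warning-style risk score. The thresholds and weights are invented for the example and are not clinical values.

```python
# Toy example of the kind of risk scoring that long predates modern ML:
# a hand-weighted score from a few vitals. Thresholds and weights are
# made up for illustration, not clinical values.
def deterioration_risk_score(heart_rate: int, resp_rate: int, spo2: int) -> int:
    score = 0
    if heart_rate > 110:
        score += 2
    elif heart_rate > 90:
        score += 1
    if resp_rate > 24:
        score += 2
    elif resp_rate > 20:
        score += 1
    if spo2 < 92:
        score += 2
    elif spo2 < 95:
        score += 1
    return score

# A higher score flags the patient for earlier human review; it does not
# make any decision on its own.
print(deterioration_risk_score(heart_rate=115, resp_rate=22, spo2=93))  # -> 4
```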
Then there are auto-suggestor/auto-reminder use-cases: “You coded this patient as having diabetes without complications, but the text notes suggest diabetes with nephropathy, are you sure you didn’t mean to use that more specific code?”
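A minimal sketch of that auto-suggestor idea, assuming a simple keyword-to-code lookup: the codes and keywords shown are illustrative, not a real terminology mapping, and the output is only a prompt for the human coder, never an automatic change.

```python
# Sketch of the auto-suggestor: if the note text mentions a complication
# but the assigned code is the unspecified one, surface a prompt for the
# human coder. The lookup table here is illustrative only.
SUGGESTIONS = {
    # (assigned code, keyword found in notes) -> more specific code to suggest
    ("E11.9", "nephropathy"): "E11.21",
}

def suggest_more_specific_code(assigned_code: str, note_text: str):
    text = note_text.lower()
    for (code, keyword), specific in SUGGESTIONS.items():
        if assigned_code == code and keyword in text:
            return (f"You coded this as {code}, but the notes mention "
                    f"'{keyword}'. Did you mean {specific}?")
    return None  # nothing to suggest; the coder's choice stands

print(suggest_more_specific_code("E11.9", "Longstanding T2DM with nephropathy."))
```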
So, at least in the short term, AI apps will not have the opportunity to screw up in the immediately obvious ways like incorrect diagnoses or incorrect orders. It’s the more subtle screw-ups that I’m worried about at the moment.