>Safety is limited to refusals, notably including refusals for medical or legal advice. Have they deliberately restricted those abilities to avoid lawsuits or to limit public perceptions of expertise being overtaken rapidly by AI?
I think it’s been well over a year since I’ve had an issue getting an LLM to give me medical advice, including GPT-4o and other SOTA models like Claude 3.5/3.7, Grok 3 and Gemini 2.0 Pro. I seem to recall that the original GPT-4 would occasionally refuse, but could be coaxed into it.
I am a doctor, and I tend to include that information either in model memory or in a prompt (mostly to encourage the LLM to assume background knowledge and ability to interpret facts). Even without it, my impression is that most models simply append a “consult a human doctor” boilerplate disclaimer instead of refusing.
I would be rather annoyed if GPT-4.5 were a regression in that regard, as I find LLMs immensely useful for quick checks on topics I’m personally unfamiliar with (and while hallucinations happen, they’re quite rare now, especially with search, reasoning and grounding). I don’t think OAI or other AI companies have faced any significant amount of litigation from either people who received bad advice or doctors afraid of losing their jobs.
I’m curious about whether anyone has had any issues in that regard, though I’d expect not.