Thanks, that’s a really useful summing up!
I would add that 1 is also useful, as long as the prompter understands what they're doing. Being able to rewrite the same information in many styles for different uses, contexts, and audiences with minimal effort is genuinely valuable.
Hallucinated facts are bad, but if they're the kind of thing you can easily identify and fact-check, hallucination may not be too costly in practice. And the possibility of GPT inserting true facts is also useful, again as long as they're things you can identify and check. Where we get into trouble (at current and near-future capability levels) is when people and companies stop editing and checking output before using it.