As Zvi points out, what counts as “new information” depends on the person reading the message
Taking all the knowledge and writing and tendencies of the entire human race and all properties of the physical universe as a given, sure, this is correct. The response corresponds to the prompt; all the uniqueness has to be there.
Presumably, when you look something up in an encyclopedia, it is because there is something that other people (the authors of the encyclopedia) know that you don’t know. When you write an essay, the situation should be exactly the opposite: there is something you know that the reader of the essay doesn’t know. In this case the possibilities are:
1. The information I am trying to convey in the essay is strictly contained in the prompt (in which case GPT is just adding extra unnecessary verbiage).
2. GPT is adding information known to me but unknown to the reader of the essay (say I am writing a recipe for baking cookies and steps 1-9 are the normal way of making cookies but step 10 is something novel like “add cinnamon”).
3. GPT is hallucinating facts unknown to the writer of the essay (theoretically these could still be true facts, but the writer would have to verify this somehow).
Nassim Taleb’s point is that cases 1 and 3 are bad. Zvi’s point is that case 2 is frequently quite useful.
Thanks, that’s a really useful summing up!
I would add that case 1 is also useful, as long as the prompter understands what they’re doing. Rewriting the same information in many styles for different uses, contexts, and audiences with minimal work is valuable.
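To make that concrete, here is a minimal sketch of the restyling use case, assuming the OpenAI Python client; the model name, the restyle helper, and the example recipe text and audience list are all placeholders of mine, not anything from the post:

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# One fixed piece of information, restyled for several audiences (case 1:
# same content, different wording, no new facts).
FACTS = "Steps 1-9 are the usual cookie recipe; step 10 adds a teaspoon of cinnamon."
AUDIENCES = [
    "a ten-year-old baking for the first time",
    "a professional pastry chef",
    "a busy parent skimming on a phone",
]

def restyle(facts: str, audience: str) -> str:
    """Ask the model to rewrite the same facts for a given audience."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[
            {"role": "system",
             "content": "Rewrite the user's text for the stated audience. Do not add new facts."},
            {"role": "user", "content": f"Audience: {audience}\n\nText: {facts}"},
        ],
    )
    return response.choices[0].message.content

for audience in AUDIENCES:
    print(f"--- {audience} ---")
    print(restyle(FACTS, audience))
```

The facts stay fixed and only the framing changes; whether the output has quietly drifted from those facts is exactly the checking problem below.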
Hallucinating facts is bad, but if they’re the kind of thing you can easily identify and fact-check, it may not be too bad in practice. And the possibility of GPT inserting true facts is actually also useful, again as long as they’re things you can identify and check. Where we get into trouble (at current and near-future capability levels) is when people and companies stop editing and checking output before using it.