ChatGPT isn’t a substitute for an NYT subscription. It wouldn’t work at all without browsing, and with browsing enabled it would probably get blocked anyway, both by NYT via its user agent and by OpenAI’s “alignment.” Even if it weren’t blocked, it would be slower than skimming the article yourself, and its output couldn’t be trusted.
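For context, blocking by user agent is the boring, standard mechanism. Rules like the following in robots.txt (illustrative, not a quote of NYT’s actual file) are all it takes to tell OpenAI’s crawlers to stay out:

    # Illustrative robots.txt rules refusing OpenAI's crawlers by user agent
    # (GPTBot crawls for training; ChatGPT-User fetches pages for browsing)
    User-agent: GPTBot
    Disallow: /

    User-agent: ChatGPT-User
    Disallow: /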
OTOH, NYT could spend pennies to put an AI TLDR at the top of every article. They could even use their own models, as Semantic Scholar does. And anybody frugal enough to prefer the much worse ChatGPT experience would never have paid NYT in the first place; the paywall is already trivial to bypass.
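To make the “pennies” point concrete, here is a minimal sketch of that kind of TLDR step, assuming an off-the-shelf open summarization model; the model choice and the tldr helper are illustrative, not anyone’s production pipeline:

    # Minimal sketch: generate a TLDR for an article with an off-the-shelf model.
    # The model and function here are illustrative, not NYT's actual stack.
    from transformers import pipeline

    summarizer = pipeline("summarization", model="facebook/bart-large-cnn")

    def tldr(article_text: str) -> str:
        # Truncate the input to the model's limit and return a short summary.
        result = summarizer(article_text, truncation=True,
                            max_length=80, min_length=25, do_sample=False)
        return result[0]["summary_text"]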
In fact, why don’t NYT authors write a TLDR themselves? Most of their articles aren’t worth reading in full. Isn’t the lack of a summary an anti-user choice that artificially inflates the apparent size of their offering?
NYT would, if anything, benefit from LLMs degrading the average quality of the free alternatives it competes with.
The counterfactual GPT-4 trained without NYT content is extremely unlikely to have been a worse model. It’s like removing a handful of sand from a mountain.
The whole case is an example of rent-seeking post-capitalism.