Take an extreme case; Sam Altman turns around tomorrow and says “We’re racing to AGI, I’m not going to worry about Safety at all.”
Would that stop you from throwing him $20 a month?
(I currently pay for Gemini)
Nothing would stop me from paying for an LLM, since I already do not pay for any LLM. All the organizations offering paid LLM access are engaging in highly unsafe race dynamics, regardless of what they say they are doing, and I will not contribute to that nor incentivize it. I can accept the minor reduction in my local utility this brings.
I also am not paying for any LLM. Between Microsoft’s Copilot (formerly Bing Chat), LMSYS Chatbot Arena, and Codeium, I have plenty of free access to SOTA chatbots/assistants. (Slightly worried that I’m contributing to race dynamics or AI risk in general even by using these systems for free, but not enough to stop, unless someone wants to argue for this.)
Worth noting that many of the free options might train on your data.
So, if this is a serious issue for the work you’re doing, take this into account.
(I don’t really worry about the acceleration effects of paying for stuff like this (or of contributing training data), as the effect seems minor compared to other things, and abstaining doesn’t seem like a good signal to myself or others. I am vegan, but I think that is a somewhat better signal, etc.)
Yeah, it is a private purchase, unlike eating, so abstaining is less likely to create some social effect (as with veganism). I will say, though, that I’ve been vegan for about 7 years and I don’t think I’ve nudged anyone :|
Do labs actually make any money on these subscriptions? It seems like the average user is using far more than $20 of requests (going by the prices for API requests, which surely can’t have a massive margin?).
Obviously they must gain something or they wouldn’t do it, but it seems likely the benefits are more intangible: gaining market share, generating hype, attracting API users, etc. These benefits seem like they could arise from free usage as well.
It seems like the average user is using far more than $20 of requests
I’m skeptical. I bet the average user is actually using far less than $20 per month.
(Both the median user’s usage and the average usage are probably <$20 per month, IMO.)
Keep in mind that, as with all products, the typical user is pretty different from the typical power user.
This might change somewhat with more long-context usage, which burns far more money per second.
(Also, I think API might have a massive margin, I’m unsure.)
[no actual direct knowledge]
Yeah, all things like this follow a power-law distribution: a few users use it A TON, and both the median and the average are far smaller than you’d think. Assuming they throttle the biggest users in some way, they’re making some money on paid use. But it’s probably irrelevant to their overall strategy and profitability.
I’d guess the biggest value for them is in showing growth and “willingness to pay” for this, more than the actual money collected.
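To make the power-law point concrete, here is a rough back-of-the-envelope simulation. Every constant in it (user count, token scale, tail shape, blended price per million tokens) is an assumption made up for illustration, not a real usage or pricing figure:

```python
import numpy as np

# Sketch: simulate heavy-tailed per-user monthly token usage and compare
# the implied cost at API prices to a $20/month subscription.
# All constants below are illustrative assumptions, not real figures.

rng = np.random.default_rng(0)

N_USERS = 100_000       # assumed number of subscribers
TOKEN_SCALE = 500_000   # assumed scale of monthly token usage
TAIL_SHAPE = 1.5        # Pareto shape: heavy tail, finite mean
PRICE_PER_MTOK = 10.0   # assumed blended API price, $/million tokens

# numpy's pareto() draws from a Lomax (Pareto II) tail starting at 0:
# most draws are small, a few are enormous.
monthly_tokens = TOKEN_SCALE * rng.pareto(TAIL_SHAPE, size=N_USERS)
monthly_cost = monthly_tokens / 1e6 * PRICE_PER_MTOK

print(f"mean cost per user:      ${monthly_cost.mean():.2f}")
print(f"median cost per user:    ${np.median(monthly_cost):.2f}")
print(f"99th-percentile cost:    ${np.quantile(monthly_cost, 0.99):.2f}")
print(f"users costing under $20: {(monthly_cost < 20).mean():.1%}")
```

Under these made-up numbers, the mean comes out around $10 and the median around $3, roughly 90% of users cost less than their $20 subscription, and the top 1% cost over $100 each, which is consistent with both “the median and the average are far smaller than you’d think” and a thin slice of power users dominating total cost.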
I stopped paying for ChatGPT earlier this week, while thinking about the departures of Jan and Daniel.
Whereas before they left I was able to say to myself, “well, there are smarter people than me, with worldviews similar to mine, who have far more information about OpenAI than I do, and they think it is not a horrible place, so 20 bucks a month is probably fine,” I am no longer able to do that.
They have explicitly sounded the alarm as loudly as they reasonably know how to right now. I should listen!
I cancelled my subscription to ChatGPT yesterday, because of the superalignment team’s dissolution, because I probably won’t use it much anymore, and because Claude has become available in Europe.
Obviously this is a tradeoff that depends on how useful LLMs are to you.
As for me, I haven’t found current LLMs to be useful for my work or interests at all. They’re usually right when something is easily searched for, but when something is hard to search for, they’re almost always wrong. So, in my experience, they’re only really useful if one of the following is true:
you’re bad at searching the internet
you’re bad at writing and need to reword something
correctness doesn’t matter (e.g., essays for college classes)
I am confused by takes like this; it just seems so blatantly wrong to me.
For example, yesterday I showed GPT-4o this image.
I asked it to show why (10) is the solution to (9). It wrote out the derivation in perfect LaTeX.
I guess this is in some sense a “trivial” problem, but I couldn’t immediately think of the solution. It is googleable, but only indirectly, because you have to translate the problem into a more general form first. So to claim that LLMs are not useful, I think you have to have incredibly high standards for what counts as easy or googleable, and place no value on the convenience of asking the exact question, with the opportunity to ask follow-ups.
I haven’t used LLMs for math problems. Maybe they’re better at that, or maybe it’s calling WolframAlpha to get that result, or maybe the answer it gave you is wrong and you just don’t realize it. What I can say is that for any kind of non-obvious chemistry, biology, mechanical engineering, or electrical engineering question, or something about the legal meaning of wording in a patent, they’re wrong >90% of the time.
If you’re going to post something like the above, I think you should also include the response you got.
Unfortunately the sharing function is broken for me.
Screenshot?