The thing with NVIDIA, though, is that the IV is so high, and so are the premiums. I spent a few hours looking for a better trade than that one, though I think it's pretty solid.
I think SPY calls could be much better than NVIDIA calls. The market doesn't expect stocks to go up significantly in the next few years, but I think there's a chance they will, assuming timelines are short. Here's SPY's YoY growth during the internet boom in the 90s:
– 2000: −9.7% ($86.54)
– 1999: +20.4% ($95.88)
– 1998: +28.7% ($79.65)
– 1997: +33.5% ($61.89)
– 1996: +22.5% ($46.37)
– 1995: +38.0% ($37.85)
– 1994: +0.4% ($27.42)
From these figures, over any two-year window from 1995 to 1999 the stock market went up roughly 55% to 72%.
Thus, I don't think it's unreasonable to think SPY has a good chance of going up 50%–70% by Jan 15, 2027 (to be fair, each of the past two years already had YoY growth of ~25%).
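A quick sanity check of the two-year compounding implied by the yearly figures listed above (a minimal Python sketch; it just multiplies the rounded YoY returns quoted):

```python
# Two-year compounded SPY returns implied by the yearly figures above.
yoy = {1995: 0.380, 1996: 0.225, 1997: 0.335, 1998: 0.287, 1999: 0.204}

for start in range(1995, 1999):
    two_year = (1 + yoy[start]) * (1 + yoy[start + 1]) - 1
    print(f"{start}-{start + 1}: {two_year:+.1%}")
# 1995-1996: +69.1%, 1996-1997: +63.5%, 1997-1998: +71.8%, 1998-1999: +55.0%
```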
If you buy an $855-strike call expiring that date and SPY increases 50% by then, you get a 12x return. If SPY increases 70%, you get a 62x return.
If you instead buy the highest-strike call for that date, at $910, and SPY increases 70%, you get an 83x return.
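Roughly, the return multiple is just the call's payoff at expiry divided by the premium paid. Here's a minimal sketch of that arithmetic; the spot price and premium below are illustrative assumptions (not actual quotes), chosen to be roughly consistent with the multiples above:

```python
# Sketch of long-call return-multiple arithmetic (ignores fees and taxes).
# The spot price and premium are hypothetical placeholders, not real quotes.
def call_return_multiple(spot_now: float, strike: float, premium: float, index_gain: float) -> float:
    """Payoff-to-premium multiple for a call held to expiry."""
    spot_at_expiry = spot_now * (1 + index_gain)
    payoff = max(0.0, spot_at_expiry - strike)
    return payoff / premium

print(call_return_multiple(589, 855, 2.35, 0.50))  # ~12x if SPY rises 50%
print(call_return_multiple(589, 855, 2.35, 0.70))  # ~62x if SPY rises 70%
```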
Something to think about at least. At this time I'm going to buy long-dated SPY calls 2-3 years out in the ~$800 strike range. NVIDIA calls still look good, but the premiums are just so expensive because of the company's recent massive growth and volatility, so I think SPY calls are the better option. I'm still thinking about how to hedge in case the upcoming chaos turns the market sour (perhaps a Taiwanese blockade, or NVIDIA's profits being hurt by increasing government interference).
This is looking like a February 2020 moment.
Thanks for hosting this competition!
Fermi Estimate: How many lives would be saved if every person in the West donated 10% of their income to EA-related, highly effective charities?
Model
Donation Pool:
– Assume “the West” produces roughly $40 trillion in GDP per year.
– At a 10% donation rate, that yields about $4 trillion available annually.
Rethinking Cost‐Effectiveness:
– While past benchmarks often cite figures around $3,000 per life saved for top interventions, current estimates vary widely (from roughly $3,000 up to $20,000 per life) and only a limited pool of opportunities exists at the very low end.
– In effect, the best interventions can only absorb a relatively small fraction of the enormous $4 trillion pool.
Diminishing Returns and Saturation:
To capture the idea that effective charity has a finite “absorption” capacity, we model the lives saved $L$ as:
$$L = L_{\text{max}} \times \left[1 - \exp\left(-\frac{D}{D_{\text{scale}}}\right)\right],$$
where:
• $D$ is the donation pool ($4 trillion),
• $D_{\text{scale}}$ represents the funding scale over which cost-effectiveness declines, and
• $L_{\text{max}}$ is the maximum number of lives that can be effectively saved given current intervention opportunities.
– Based on global health data and the limited number of highly cost-effective interventions, we set $L_{\text{max}}$ in the range of about 10–15 million lives per year.
– To reflect that the very best interventions are relatively small in total funding size, we take $D_{\text{scale}}$ to be around $100 billion.
Calculating the ratio:
$$\frac{D}{D_{\text{scale}}} = \frac{4\ \text{trillion}}{100\ \text{billion}} = 40.$$
Since $\exp(-40)$ is negligibly small, we get:
$$L \approx L_{\text{max}}.$$
Revised Estimate:
Given the uncertainties, choosing a mid-range $L_{\text{max}}$ of about 12 million yields a revised Fermi estimate of roughly 12 million lives saved per year, under the assumption that everyone in the West donates 10% of their yearly income to EA-related charities.
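As a quick numeric check of the model with those mid-range parameters (just plugging the numbers above into the formula):

```python
import math

# Numeric check of the saturation model: L = L_max * (1 - exp(-D / D_scale)).
D = 4e12        # annual donation pool: $4 trillion
D_scale = 1e11  # funding scale over which cost-effectiveness declines: $100 billion
L_max = 12e6    # mid-range estimate of lives that can be effectively saved per year

lives_saved = L_max * (1 - math.exp(-D / D_scale))
print(f"{lives_saved:,.0f} lives per year")  # exp(-40) is negligible, so L ≈ L_max ≈ 12,000,000
```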
Summary
This Fermi estimate suggests that if everyone in the West donated 10% of their yearly income to highly effective charities, we could save around 12 million lives per year. While you might think throwing $4 trillion at the problem would save way more people, the reality is that we’d quickly run into practical limits. Even the best charities can only scale up so much before they hit barriers like logistical challenges, administrative bottlenecks, and running out of the most cost-effective interventions. Still, saving 12 million lives every year is pretty mind-blowing and shows just how powerful coordinated, effective giving could be if we actually did it.
Technique
I brainstormed with Claude Sonnet for about 20 minutes, asking it to generate potential Fermi questions in batches of 20. I did this a few times, rejecting most questions for being too boring or not tractable enough, until it generated the one I used. I ran the question by o3-mini and had to correct its reasoning here and there until it produced a good line of reasoning. Then I fed that output into a different instance of o3-mini and asked it to review the Fermi estimate above and point out flaws. I put that output back into the original o3-mini, and it gave me the model output above.
-
I think a high-quality reasoning model (such as o3), combined with other LLMs that act as “critics”, could generate very high-quality Fermi estimates. Also, LLMs can generate ideas far faster than any human can, but humans can evaluate the quality of those ideas in a fraction of a second. An underexplored idea is to generate dozens or hundreds of ideas with an LLM about how to solve a particular problem, and have a human do the filtering and select the best ones. I can see authors using this, telling their LLM “give me 100 interesting ways I could end this story” and picking the best one.