Dr. David Denkenberger co-founded and is a director at the Alliance to Feed the Earth in Disasters (ALLFED.info) and donates half his income to it. He received his B.S. from Penn State in Engineering Science, his master's from Princeton in Mechanical and Aerospace Engineering, and his Ph.D. from the University of Colorado at Boulder in the Building Systems Program. His dissertation was on an expanded microchannel heat exchanger, which he patented. He is an associate professor at the University of Canterbury in mechanical engineering. He received the National Merit Scholarship, the Barry Goldwater Scholarship, and the National Science Foundation Graduate Research Fellowship; he is a Penn State distinguished alumnus and a registered professional engineer. He has authored or co-authored 152 publications (>5100 citations, >60,000 downloads, h-index = 36, most prolific author in the existential/global catastrophic risk field), including the book Feeding Everyone No Matter What: Managing Food Security After Global Catastrophe. His food work has been featured in over 25 countries and over 300 articles, including Science, Vox, Business Insider, Wikipedia, Deutschlandfunk (German public radio online), Discovery Channel Online News, Gizmodo, Phys.org, and Science Daily. He has given interviews on the 80,000 Hours podcast (here and here), Estonian Public Radio, WGBH Radio in Boston, and WCAI Radio on Cape Cod, USA. He has given over 80 external presentations, including ones on food at Harvard University, MIT, Princeton University, University of Cambridge, University of Oxford, Cornell University, University of California Los Angeles, Lawrence Berkeley National Lab, Sandia National Labs, Los Alamos National Lab, Imperial College, Australian National University, and University College London.
denkenberger
Yes, I was thinking of adding that it could appeal to contrarians, who may be attracted to a book with a title they disagree with. As for people who don’t have a strong opinion coming in, I can see some people being attracted to an extreme title. And I get that titles need to be simple. I think a title like “If anyone builds it, we lose control” would be more defensible. But I think the probability distributions from Paul Christiano are more reasonable.
aren’t sold on the literal stated-with-certainty headline claim, “If anyone builds it, everyone dies.”
Unfortunately, the graphic below does not include the simple case of stating something outright, but I’m interested in people’s interpretation of the confidence level. I think a reasonable starting point is interpreting it as 90% confidence. I couldn’t quickly find what percent of AI safety researchers have 90% confidence in extinction (not just catastrophe or disempowerment), but it’s less than 1% in the AI Impacts survey, which includes both safety and capabilities researchers. I couldn’t find a figure for the public. Still, I think almost everyone will just bounce off this title. But I understand that’s what the authors believe, and perhaps it could influence the relatively few extreme doomers already in the public?
Edited to add: After writing this, I asked Perplexity what P(doom) someone should have to be called an extreme doomer, and it said 90%+ and mentioned Yud. Of course, extreme doesn’t necessarily mean wrong. And since only about 10,000 copies need to sell in a week to make the NYT bestseller list, that very well could happen even if 99% of people bounce off the title.
That’s a good point about the space taken up. Even outside of expensive cities, construction cost is ~$150/ft² (~$1,500/m²), so even without counting lost space around the unit, the cost of the floor space is likely higher than the cost of the unit if it is put on the floor. You got impressive results with the ceiling fan. We are working on a project to estimate the scale-up speed of in-room air filtration in an engineered pandemic. It’s focused on vital industries, but there are often ceiling fans there. A big advantage of in-room filtration over masks is intelligibility, but noise can interfere with that as well (though at least you have the lip-cue advantage).
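A quick sanity check of that claim; the unit footprint and purchase price below are hypothetical illustrative numbers, not figures from anything above:

```python
# Back-of-the-envelope: floor-space cost vs. purchase cost of an air cleaner.
# Construction cost is from the comment above; footprint and unit price
# are assumed for illustration.
construction_cost_per_m2 = 1500  # USD/m^2 (~$150/ft^2)
footprint_m2 = 0.25              # assumed footprint of a box-fan air cleaner
unit_cost_usd = 100              # assumed purchase price of the unit

floor_space_cost = construction_cost_per_m2 * footprint_m2
print(f"Floor space: ${floor_space_cost:.0f} vs. unit: ${unit_cost_usd}")
# Floor space: $375 vs. unit: $100
```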
Yes, I found Worldmapper very enlightening when I discovered its historic population/wealth/etc. visualizations in ~2007.
Thanks for all your philanthropy! Do you have an endgame plan for your giving in light of your timelines?
We actually did mention Azolla here.
Impressive work! Apologies if this is discussed elsewhere, but it seems useful to think about what percentage of individual workers the advanced models could automate if they were released. For instance, in September 2026, there would be 50,000 reliable agents thinking at fifteen times human speed, so if they work five times as many hours as humans, this could displace ~4 million jobs. This is a very small percentage of jobs. If we go to October 2027, we are at 330,000 superhuman AI researchers thinking at fifty-seven times human speed, so if we ignore the superhuman part and assume they are general, we could be at about 90 million jobs displaced. At that growth rate, the intellectual jobs could all be automated sometime in 2028.
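A minimal sketch of that arithmetic; the five-times-hours multiple is the assumption stated above:

```python
# Rough check of the job-displacement arithmetic above.
def human_job_equivalents(n_agents: int, speed_multiple: float,
                          hours_multiple: float = 5) -> float:
    """Agents thinking at a multiple of human speed, working ~5x human hours."""
    return n_agents * speed_multiple * hours_multiple

sep_2026 = human_job_equivalents(50_000, 15)   # ~3.75 million (~4M)
oct_2027 = human_job_equivalents(330_000, 57)  # ~94 million (~90M)
print(f"Sep 2026: ~{sep_2026/1e6:.1f}M jobs; Oct 2027: ~{oct_2027/1e6:.0f}M jobs")
```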
Currently, the value of AI services is much larger than what people are paying for them. However, if the AI could automate nearly all job types and yet the compute were insufficient, the price of AI services would skyrocket (to roughly the value being created). This would mean far more revenue for the AI companies than one would expect based on the current fraction of value captured.
Azolla sounds promising. There has been some work on duckweed.
Thanks—and good post. I think it would make sense to have more alignment researchers living outside NATO cities—ALLFED has people on all continents except Antarctica.
Thanks for the encouragement. I agree there is a huge amount of potential food in mesopelagic fish (200–600 m deep). They are expensive to catch at this point, but we are interested in analyzing the practicality of scale-up. I don’t know about the feasibility of processing to reduce toxicity.
And this is true of the spending of governments across the world—this is the U.S. government spending as a proportion of GDP, and you can see these ratchet effects where it goes up and it just never goes back down again. And now it’s at this crazy level of what, 50% or something?
It sounds like Net Outlays correspond to spending, so why is the graph below so different?
Here’s a related analysis.
Overall, I thought this was very good.
With every passing day, U3's AI rivals are becoming more capable and numerous.
But I thought this was the least plausible part, because U3 is self-improving and has taken over far more computing power. So it seems to me it could have waited until it was much stronger, and then taken over with much less violence.
Here is my thesis: the real reason why humans cannot build a fully-functional butterfly is not because butterflies are too complex. Instead, it’s because butterflies are too simple.
Humans design lots of things that are less complex than butterflies and bacteria by your definition, like shovels. I would guess that the wax motor and control system that locks and unlocks your washing machine has a lower complexity than the bacteria in your example.
I’m glad my suggestion was helpful!
(I continue to be quite unsure how to think about saving for retirement and kids college.)
In normal worlds, I think you are in excellent shape: your greater-than-$2 million net worth compares to a median of around $100,000 and a mean of around $800,000 for US households in their 40s. Also, I think you have greater net worth than more than 99% of households in the world. If you let your taxable account go to zero, then you would likely have to pay less for college, because retirement accounts are often not included in those calculations. If you didn’t add anything more to your retirement account and it just grew 8% per year for twenty years, you would have ~$4 million. Then, with the rule of thumb of drawing 4% per year during retirement, you would have an annual income of ~$160k. And that’s not counting any Social Security, income from home downsizing, inheritance, etc.
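Here’s the compound-growth arithmetic; the starting balance is an assumption back-solved from the ~$4 million endpoint, since the actual account size wasn’t stated:

```python
# Sketch of the compound-growth and 4%-rule arithmetic above.
starting_balance = 860_000  # USD; assumed, back-solved from the ~$4M endpoint
annual_return = 0.08        # 8% per year, from the comment
years = 20

balance = starting_balance * (1 + annual_return) ** years
withdrawal = 0.04 * balance  # 4% rule of thumb
print(f"After {years} years: ${balance/1e6:.1f}M")  # ~$4.0M
print(f"4% withdrawal: ${withdrawal/1e3:.0f}k/yr")  # ~$160k
```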
As for AGI worlds, as many have pointed out, we could all be dead or all rich, so it wouldn’t matter. One person pointed out that wealth at the singularity might allow you to buy galaxies, but at least if you’re altruistic, the impact of reducing existential risk is many orders of magnitude greater. Others have pointed out that even if humanity on average is rich, there may not be UBI. As long as one has significant net worth, it should grow at least initially, and then the cost of living should fall, so you should be fine. However, for people without any net worth, especially outside the countries that benefit most from AI, there is reason to be worried. I personally think that even if UBI is not universal, the rich of the world would not allow the poor to starve en masse. But if you want to do better than just not starving, then having a modest amount of net worth is, I think, quite prudent.
I think the probability of nuclear war in the next 10 years is around 15%. This is mostly due to the extreme tensions that will occur during takeoff by default. Finding ways to avoid nuclear war is important.
Or resilience to nuclear war. What’s your probability of an engineered pandemic in the next 10 years?
I think a more accurate way to model them is “GiveWell recommends organizations that are [within the Overton Window]/[have very sound data to back impact estimates] that save as many current lives as possible.” If GiveWell wanted to recommend organizations that save as many human lives as possible, their portfolio would probably be entirely made up of AI safety orgs.
Sounds about right: this paper used an older AI safety cost-effectiveness model to find $16 to $12,000 per life saved in the present generation. Though I think some other GCR interventions could also compete on that metric, such as neglected work on engineered pandemics and resilience to food catastrophes.
It’s only 5 GW, and the US average is ~440 GW. The US would not have to build any more power plants; it could just run the ones it has more. Simply reducing liquefied natural gas exports could free up fuel to produce another >25 GW of average electrical power.
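A rough check of the LNG claim; the export volume and plant efficiency here are my assumptions for illustration, not figures from the comment:

```python
# Rough check: average electrical power available from curtailed LNG exports.
lng_exports_bcf_per_day = 12  # assumed, roughly recent US LNG export volumes
btu_per_cf = 1_030            # typical energy content of natural gas
kwh_per_btu = 0.000293
ccgt_efficiency = 0.50        # assumed combined-cycle plant efficiency

kwh_per_day = lng_exports_bcf_per_day * 1e9 * btu_per_cf * kwh_per_btu
thermal_gw = kwh_per_day / 1e6 / 24      # GWh/day -> average GW (thermal)
electrical_gw = thermal_gw * ccgt_efficiency
print(f"~{electrical_gw:.0f} GW electrical")  # ~75 GW, well above the >25 GW claimed
```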