Some altruistically-motivated projects would be valid investments for a Checkbook IRA. I guess if you wanted to donate 401k/IRA earnings to charity you’d still have to pay the 10% penalty (though not the tax if the donation was deductible) but that seems the same whether it’s pretax or a heavily-appreciated Roth.
The math in the comment I linked works the same whether the chance of money ceasing to matter in five years’ time is for happy or unhappy reasons.
My impression is that the “Substantially Equal Periodic Payments” option is rarely a good idea in practice because it’s so inflexible in not letting you stop withdrawals later, potentially even hitting you with severe penalties if you somehow miss a single payment. I agree that most people are better off saving into a pretax 401k when possible and then rolling the money over to Roth during low-income years or when necessary. I don’t think this particularly undermines jefftk’s high-level point that tax-advantaged retirement savings can be worthwhile even conditional on relatively short expected AI timelines.
I prefer pre-tax contributions over Roth ones now because of my expectation that there will probably be an AI capabilities explosion well before I reach 59.5. If I had all or most of my assets in Roth accounts it would be terrible.
Why would money in Roth accounts be so much worse than having it in pretax accounts in the AI explosion case? If you wanted the money (which would then be almost entirely earnings) immediately you could get it by paying tax+10% either way. But your accounts would be up so much that you’d only need a tiny fraction of them to fund your immediate consumption; the rest you could keep investing inside the 401k/IRA structure.
You have to be really confidently optimistic or pessimistic about AI to justify a major change in consumption rates; if you assign a significant probability to “present rate no singularity”/AI winter futures then the benefits of consumption smoothing dominate and you should save almost as much (or as little) as you would if you didn’t know about AI.
Note that it is entirely possible to invest in almost all “non-traditional” things within a retirement account; “checkbook IRA” is a common term for a structure that enables this (though the fees can be significant and most people should definitely stick with index funds). Somewhat infamously, Peter Thiel did much of his early angel investing inside his Roth IRA, winding up with billions of dollars in tax-free gains.
In particular it seems very plausible that I would respond by actively seeking out a predictable dark room if I were confronted with wildly out-of-distribution visual inputs, even if I’d never displayed anything like a preference for predictability of my visual inputs up until then.
It seems like a major issue here is that people often have limited introspective access to what their “true values” are. And it’s not enough to know some of your true values; in the example you give the fact that you missed one or two causes problems even if most of what you’re doing is pretty closely related to other things you truly value. (And “just introspect harder” increases the risk of getting answers that are the results of confabulation and confirmation bias rather than true values, which can cause other problems.)
Here’s an attempt to formalize the “is partying hard worth so much” aspect of your example:
It’s common (with some empirical support) to approximate utility as proportional to log(consumption). Suppose Alice has $5M of savings and expected-future-income that she intends to consume at a rate of $100k/year over the next 50 years, and that her zero utility point is at $100/year of consumption (since it’s hard to survive at all on less than that). Then she’s getting log(100000/100) = 3 units of utility per year, or 150 over the 50 years.
Now she finds out that there’s a 50% chance that the world will be destroyed in 5 years. If she maintains her old spending patterns her expected utility is .5*log(1000)*50 + .5*log(1000)*5 = 82.5. Alternately, if interest rates were 0%, she might instead change her plan to spend $550k/year over the next 5 years and then $50k/year subsequently (if she survives). Then her expected utility is log(5500)*5+.5*log(500)*45 = 79.4, which is worse. In fact her expected utility is maximized by spending $182k over the next five years and $91k after that, yielding an expected utility of about 82.9, only a tiny increase in EV. If she has to pay extra interest to time-shift consumption (either via borrowing or forgoing investment returns) she probably just won’t bother. So it seems like you need very high confidence of very short timelines before it’s worth giving up the benefits of consumption-smoothing.
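Here’s a minimal sketch of that arithmetic in Python, for anyone who wants to check it; the only inputs are the assumptions above (log-base-10 utility, a $100/year zero point, and a fixed $5M lifetime budget), and the `utility` helper is just for illustration:

```python
import math

def utility(consumption_per_year, years, p_alive=1.0):
    # log10 of consumption relative to the $100/year subsistence floor,
    # weighted by the probability of being alive for those years
    return p_alive * years * math.log10(consumption_per_year / 100)

# Baseline: $100k/year for 50 years, no extinction risk
print(utility(100_000, 50))                              # 150.0

# 50% chance the world ends in 5 years, same spending plan
print(utility(100_000, 5) + utility(100_000, 45, 0.5))   # 82.5

# Front-loaded plan: $550k/year for 5 years, then $50k/year if she survives
print(utility(550_000, 5) + utility(50_000, 45, 0.5))    # ~79.4

# Roughly optimal plan: ~$182k/year for 5 years, then ~$91k/year
print(utility(182_000, 5) + utility(91_000, 45, 0.5))    # ~82.9
```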
Why would you expect her to be able to diminish the probability of doom by spending her million dollars? Situations where someone can have a detectable impact on global-scale problems by spending only a million dollars are extraordinarily rare. It seems doubtful that there are even ways to spend a million dollars on decreasing AI xrisk now when timelines are measured in years (as the projects working on it do not seem to be meaningfully funding-constrained), much less if you expected the xrisk to materialize with 50% probability tomorrow (less time than it takes to e.g. get a team of researchers together).
I think it generally makes sense to try to smooth personal consumption, but that for most people I know this still implies a high savings rate at their first high-paying job.
As you note, many of them would like to eventually shift to a lower-paying job, reduce work hours, or retire early.
Even if this isn’t their current plan, burnout is a major risk in many high-paying career paths and might oblige them to do so, and so there’s a significant probability of worlds where the value of having saved up money during their first high-paying job is large.
If they’re software engineers in the US they face the risk that US software engineer salaries will revert to the mean of other countries and other professional occupations. https://www.jefftk.com/p/programmers-should-plan-for-lower-pay
If they want but don’t currently have children, then even if their income is higher later in their career, it’s likely that their income-per-household-member won’t be. Childcare and college costs mean they should probably be prepared to spend more per child in at least some years than they currently do on their own consumption.
Yeah that’s essentially the example I mentioned that seems weirder to me, but I’m not sure, and at any rate it seems much further from the sorts of decisions I actually expect humanity to have to make than the need to avoid Malthusian futures.
I’m happy to accept the sadistic conclusion as normally stated, and in general I find “what would I prefer if I were behind the Rawlsian Veil and going to be assigned at random to one of the lives ever actually lived” an extremely compelling intuition pump. (Though there are other edge cases that I feel weirder about, e.g. is a universe where everyone has very negative utility really improved by adding lots of new people of only somewhat negative utility?)
As a practical matter though I’m most concerned that total utilitarianism could (not just theoretically but actually, with decisions that might be locked-in in our lifetimes) turn a “good” post-singularity future into Malthusian near-hell where everyone is significantly worse off than I am now, whereas the sadistic conclusion and other contrived counterintuitive edge cases are unlikely to resemble decisions humanity or an AGI we create will actually face. Preventing the lock-in of total utilitarian values therefore seems only a little less important to me than preventing extinction.
I think:
- Humans are bad at informal reasoning about small probabilities since they don’t have much experience to calibrate on, and will tend to overestimate the ones brought to their attention, so informal estimates of the probability of very unlikely events should usually be adjusted even lower.
- Humans are bad at reasoning about large utilities, due to lack of experience as well as issues with population ethics and the mathematical issues with unbounded utility, so estimates of large utilities of outcomes should usually be adjusted lower.
- Throwing away most of the value in the typical case for the sake of an unlikely case seems like a dubious idea to me even if your probabilities and utility estimates are entirely correct; the lifespan dilemma and similar results are potential intuition pumps about the issues with this, and go through even with only single-exponential utilities at each stage. Accordingly I lean towards overweighting the typical range of outcomes in my decision theory relative to extreme outcomes, though there are certainly issues with this approach as well.

As far as where the penalty starts kicking in quantitatively, for personal decisionmaking I’d say somewhere around “unlikely enough that you expect to see events at least this extreme less than once per lifetime”, and for altruistic decisionmaking “unlikely enough that you expect to see events at least this extreme less than once in the history of humanity”. For something on the scale of AI alignment I think that’s around 1/1000? If you think the chances of success are still over 1% then I withdraw my objection.
The Pascalian concern aside I note that the probability of AI alignment succeeding doesn’t have to be *that* low before its worthwhileness becomes sensitive to controversial population ethics questions. If you don’t consider lives averted to be a harm then spending $10B to decrease the chance of 10 billion deaths by 1/10000 is worse value than AMF. If you’re optimizing for the average utility of all lives eventually lived then increasing the chance of a flourishing future civilization to pull up the average is likely worth more but plausibly only ~100x more (how many people would accept a 1% chance of postsingularity life for a 99% chance of immediate death?) so it’d still be a bad bet below 1/1000000. (Also if decreasing xrisk increases srisk, or if the future ends up run by total utilitarians, it might actually pull the average down.)
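To spell out the arithmetic behind the AMF comparison (taking roughly $5,000 per life saved as an assumed benchmark for AMF; the exact figure doesn’t change the conclusion much):

```python
spend = 10e9           # $10B on xrisk reduction
lives_at_risk = 10e9   # 10 billion deaths in the doom scenario
risk_reduction = 1e-4  # chance of those deaths reduced by 1/10,000

expected_lives_saved = lives_at_risk * risk_reduction  # 1,000,000
cost_per_life = spend / expected_lives_saved           # $10,000 per expected life
# vs. ~$5,000 per life for AMF (assumed benchmark), so this is worse value.

# If flourishing post-singularity lives are weighted ~100x on average-utility grounds,
# the break-even risk reduction scales down by about the same factor,
# i.e. to somewhere around 1/1,000,000.
```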
I think that I’d easily accept a year of torture in order to produce ten planets worth of thriving civilizations. (Or, if I lack the resolve to follow through on a sacrifice like that, I still think I’d have the resolve to take a pill that causes me to have this resolve.)
I’d do this to save ten planets of worth of thriving civilizations, but doing it to produce ten planets worth of thriving civilizations seems unreasonable to me. Nobody is harmed by preventing their birth, and I have very little confidence either way as to whether their existence will wind up increasing the average utility of all lives ever eventually lived.
There’s some case for it but I’d generally say no. Usually when voting you are coordinating with a group of people with similar decision algorithms who you have some ability to communicate with, and the chance of your whole coordinated group changing the outcome is fairly large, and your own contribution to it pretty legible. This is perhaps analogous to being one of many people working on AI safety if you believe that the chance that some organization solves AI safety is fairly high (it’s unlikely that your own contributions will make the difference but you’re part of a coordinated effort that likely will). But if you believe it is extremely unlikely that anybody will solve AI safety then the whole coordinated effort is being Pascal-Mugged.
This is Pascal’s Mugging.
Previously comparisons between the case for AI xrisk mitigation and Pascal’s Mugging were rightly dismissed on the grounds that the probability of AI xrisk is not actually that small at all. But if the probability of averting the xrisk is as small as discussed here then the comparison with Pascal’s Mugging is entirely appropriate.
The cost of Covid is not just unlikely chronic effects, nor vanishingly-unlikely-with-three-shots severe/fatal effects, but also making you feel sick and obliging you to quarantine for ~five days (and probably to send some uncomfortable emails to people you saw very recently). With the understandable abandonment of NPIs and the need to get on with life, the chance that you will catch Covid in a given major wave if not recently boosted seems pretty high, perhaps 50%? (There were 30M confirmed US cases during the Omicron wave, and at least for most of the pandemic confirmed cases seemed to undercount true cases by about 3x, which works out to roughly 27% of the US population infected in that wave despite recent boosters and NPIs.) A 100% chance of losing one predictable day (plus perhaps a 5% chance of losing five days) seems much better than a 50% chance of losing five unpredictable days.
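The expected-value arithmetic at the end, spelled out (the one lost day from the booster and the residual 5% chance of losing five days anyway are the rough figures assumed above):

```python
# Recently boosted: one predictable lost day, plus ~5% chance of losing five days anyway
expected_days_boosted = 1 + 0.05 * 5  # 1.25 days, at a time of your choosing

# Not recently boosted: ~50% chance of catching Covid in a major wave and losing ~5 days
expected_days_unboosted = 0.5 * 5     # 2.5 days, at an unpredictable time

print(expected_days_boosted, expected_days_unboosted)  # 1.25 2.5
```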
- Is there any reason to think research that could lead to malaria vaccines is funding-constrained? There doesn’t seem to be any shortage of in-mice studies, and in light of Eroom’s Law the returns on marginal biomedical research investment seem low.
- Malaria is preventable and curable with existing drugs, so vaccines for it only make sense if their cost (including required research) works out lower than preventing it in other ways, which means some strategies that made sense for something like Covid won’t make sense here.
- That’s not how international waters works, you’re still subject to the jurisdiction of the flag country and if they’re okay with your trial you could do it more cheaply on land there.
- If you attempt an end-run of the developed-country regulators with your trial they will just refuse to approve anything based on your trial data, which is why pharma companies don’t jurisdiction-shop much at present.
- That said developed country regulators do in fact approve challenge trials for malaria vaccines (as I noted) and vaccines for other curable diseases. Regulatory & IRB frameworks no doubt still add a bunch of overhead but this does further bound the potential benefits of attempting to work outside them.
- I don’t know what “focusing on epistemics” could possibly entail in terms of concrete interventions. Trying to develop prediction markets I suppose? I have updated away from the usefulness of those based on their performance over the past year though, and it seems like they are more constrained by policy than by lack of marginal funding (at retail donor levels).
- Policy change is still intractable.
- In general there are lots of margins on which the world might be improved, but the vast majority of them are not plausibly bottlenecked on resources that I or most EAs I know personally control. Learning about a few more such margins is not a significant update. I focus on bednets not because I think it’s unusually much more important than other world-improving margins, nor because I think it will be a margin where unusually much improvement happens in coming years, but because it’s a rare case of a margin where I think decisions I can make personally (about what to do with my disposable income dollars) are likely to have a nontrivial impact.
It’s plausible that the Covid-19 pandemic could end up net massively saving lives, and a lot of Effective Altruists (and anyone looking to actually help people) have some updating to do. It’s also worth saying that 409k people died of malaria in 2020 around the world, despite a lot of mitigation efforts, so can we please please please do some challenge trials and ramp up production in advance and otherwise give this the urgency it deserves?
What update is this supposed to cause for Effective Altruists? We already knew that policy around all sorts of global health (and other) issues is very far from optimal, but there’s nothing we can do about that. Even a global pandemic wasn’t enough to get authorities to treat trials and approvals with appropriate urgency and consideration of the costs of inaction, so what hope would a tiny number of advocates have? We can fantasize all day about what we’d do if we ran the world, but back in reality policy change is intractable and donating to incrementally-scalable interventions like bednets remains the best most of us can personally do. Or am I misunderstanding what you meant here?
(Note also that malaria vaccine human challenge trials were already a thing; Effective Altruist John Beshir participated as a subject in one in 2019.)
It’s true that claims that poor people now are much richer than poor or even rich people 300 years ago rely somewhat on cherrypicking which axes to measure, but the cited claims of “100-fold productivity increase” since then *also* rely on cherrypicking which axes to measure.
We haven’t gotten 100x more productive in obtaining oxygen, certainly, nor in many still-scarce resources people care about (childcare might be a particularly clear example). So people still experience poverty because civilization is still tightly bottlenecked on some resources.
I don’t think there are any resources which have gotten 100x more abundant per capita but that people still desperately scrabble to afford basic levels of. And for resources that are abundant but not hyperabundant, it’s clear how redistribution like UBI can help.