Everything in the OP matches my memory / my notes, within the level of noise I would expect from my memory / my notes.
Optimization Process
Book summary: Selfish Reasons to Have More Kids
Seattle, WA – October 2021 ACX Meetup
Seattle, WA – ACX Meetups Everywhere 2021
That’s a great point! My rough model is that I’ll probably live 60 more years, and the last ~20 years will be ~50% degraded, so my 60 remaining life-years are only ~50 QALYs. But… as you point out, on the other hand, my time might be worth more in 10 years, because I’ll have more metis, or something. Hmm.
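The rough model above can be sketched as a one-liner (all numbers are my own rough guesses, not data):

```python
# Rough QALY model: ~60 remaining years, the last ~20 of them ~50% degraded.
remaining_years = 60
degraded_years = 20
degraded_quality = 0.5

qalys = (remaining_years - degraded_years) * 1.0 + degraded_years * degraded_quality
print(qalys)  # 50.0
```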
(Another factor: if your model is that awesome life-extension tech / friendly AI will come before the end of your natural life, then dying young is a tragedy, since it means you’ll miss the Rapture; in which case, 1 micromort should perhaps be feared many times more than this simple model suggests. I… haven’t figured out how to feel about this small-probability-of-astronomical-payoff sort of argument.)
-
Hmm! I think the main crux of our disagreement is over “how abstract is ‘1 hour of life expectancy’?”: you view it as pretty abstract, and I view it as pretty concrete.
The reason I view it as concrete is: I equate “1 hour of life expectancy” to “1 hour spent driving,” since I mildly dislike driving. That makes it pretty concrete for me. So, if there’s a party that I’m pretty excited about, how far would I be willing to drive in order to attend? 45 minutes each way, maybe? So “a party I’m pretty excited about” is worth about 3 micromorts to me.
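To spell out the conversion behind that “about 3 micromorts” figure (this assumes the ~60-remaining-years model from my other comment, and treats 1 hour of life expectancy as interchangeable with 1 hour of driving):

```python
# 1 micromort = a one-in-a-million chance of death, so its expected cost
# is one-millionth of your remaining life expectancy.
hours_remaining = 60 * 365.25 * 24            # ~526,000 hours of remaining life
hours_per_micromort = hours_remaining * 1e-6  # ~0.53 hours lost per micromort

round_trip_drive_hours = 1.5                  # 45 minutes each way
micromorts = round_trip_drive_hours / hours_per_micromort
print(round(micromorts, 1))  # 2.9 -- i.e. "about 3 micromorts" per party
```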
Does this… sound sane?
-
I’m in a house that budgets pretty aggressively, so, in practice, I budget, and maybe I’m wrong about how this would go; but, if I ditched budgeting entirely, and I was consistently bad at assessing tradeoffs, I would expect that I could look back after two weeks and say, “Whoa, I’ve taken on 50 life-hours of risk over the last two weeks, but I don’t think I’ve gotten 50 hours of life-satisfaction-doubling joy or utility out of seeing people. Evidently, I have a strong bias towards taking more risk than I should. I’d better retrospect on what I’ve been taking risk doing, and figure out what activities I’m overvaluing.”
Or maybe I’m overestimating my own rationality!
-
Pedantry appreciated; you are quite right!
Thanks for the thoughtful counterargument!
Things I think we agree on:
-
you should really be deciding policies rather than initial purchase choices
Yes, absolutely, strong agreement.
-
“Deciding how to accumulate COVID risk” closely resembles “deciding how to spend a small fraction of your money,” but not “deciding how to spend a large fraction of your money”: when money is tight, the territory contains a threshold that’s costly to go over, so your decision-making process should also contain a threshold that shouldn’t be gone over, i.e. a budget; but there is no such nonlinearity when accumulating (normal amounts of) COVID risk, or when spending a small fraction of your money.
-
In principle the right way to make my choice [of what to buy] is to figure out what utility I’ll get from each possibility, figure out what utility I’ll get from having any given amount more savings, and choose whatever maximizes the total… A common solution is to first pick some amount of money that seems reasonable… And then to go shopping and be guided by that budget.
Actually, I’m not sure I disagree with any of your explicit claims. The only claim I think we might disagree on is something like “budgeting is a good strategy even when costs/benefits add pretty much linearly,” as in the ‘spend a small fraction of your money’ or ‘accumulate COVID risk’ scenarios: I perceive you as agreeing with that statement, whereas I disagree with it (because it encourages you to think in terms of “whether I’ll exceed my budget” instead of the ground truth).
If you do endorse that quoted statement, I’m curious why. I read your comment as explaining why people do budget in low-stakes scenarios, but not why that’s optimal. (My best guess at your answer, reading between the lines, is “because it saves a lot of error-prone calculation,” which, hmm, doesn’t speak to me personally, but people differ, and maybe I overestimate my own ability to do good cost/benefit calculations.)
(Don’t get me wrong, I do sometimes do something that looks like budgeting, as you describe, when I’m spending small amounts of money; but I view it as a bad habit that I want to break myself of—with a proper eye towards TDT, though, of course.)
-
Fantastic. Thanks so much for that link—I found that whole thread very enlightening.
Yes, agreed! An earlier draft had the exposure happening “yesterday” instead of “this morning,” but, yeah, I wanted to make it clearer-cut in the face of the reports I’ve heard that Delta has very short incubation periods some nonzero fraction of the time.
I’ve also seen a couple of variations on risk budgets in group houses, along the lines of: the house has a total risk budget, and then distributes that budget among its members (and maybe gives them some way to trade). In the case where the house has at least one risk-discussion-hater in it, this might make sense; but if everybody is an enthusiastic cost/benefit analyzer, I strongly suspect that it’s optimal to ditch the budget, figure out how many housemates will get sick if a single person gets sick (e.g. if housemate-to-housemate transmission is 30%, then in a 3-person household, one sick person will get an average of about 0.60 housemates sick), and use that to institute a Pigouvian tax on microCOVIDs, exactly as in the “enthusiastic optimizer” example.
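A minimal sketch of the tax calculation, assuming (as in my example) that transmission to each housemate is independent and ignoring second-order housemate-to-housemate chains:

```python
# Expected number of housemates infected if one person gets sick,
# ignoring onward (second-order) transmission chains.
def expected_secondary_cases(household_size: int, transmission_prob: float) -> float:
    return (household_size - 1) * transmission_prob

print(expected_secondary_cases(3, 0.30))  # 0.6

# A Pigouvian tax scales each person's private microCOVID cost by
# (1 + expected secondary cases), internalizing the harm to housemates.
tax_multiplier = 1 + expected_secondary_cases(3, 0.30)
print(tax_multiplier)  # 1.6
```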
Mildly against COVID risk budgets
Yes, that’s what they did! (Emphasis on the “somehow”—details a mystery to me.) Some piece of intro text for the challenge explained that Codex would receive, as input, both the problem statement (which always included a handful of example inputs/output/explanation triplets), and the user’s current code up to their cursor.
[Question] Is microCOVID “badness” superlinear?
Trying to spin this into a plausible story: OpenAI trains Jukebox-2, and finds that, though it struggles with lyrics, it can produce instrumental pieces in certain genres that people enjoy about as much as human-produced music, for about $100 a track. Pandora notices that it would only need to play each track about 75k times ($100 / $0.00133 per play) to break even with the royalties it wouldn’t have to pay. Pandora leases the model from OpenAI, throws $100k at this experiment to produce 1k tracks in popular genres, plays each track 100k times, gets ~1M thumbs-[up/down]s (plus ~100M “no rating” reactions, for whatever those are worth), and fine-tunes the model using that reward signal to produce a new crop of tracks people will like slightly more.
Hmm. I’m not sure this would work: sure, from one point of view, Pandora gets ~1M data points for free (on net), but from another reasonable point of view, each data point (a track) costs $100, which is definitely not cheaper than getting 100 ratings off Mechanical Turk, which is probably about as good a signal. This cycle might only work for less-expensive-to-synthesize art forms.
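The back-of-the-envelope numbers above, spelled out (the $0.00133/play royalty, $100/track synthesis cost, and ~1% rating rate are all my own assumptions, not real figures):

```python
# Hypothetical Pandora/Jukebox-2 economics from the story above.
cost_per_track = 100.00
royalty_per_play = 0.00133

breakeven_plays = cost_per_track / royalty_per_play
print(round(breakeven_plays))  # 75188 -- "about 75k plays" to break even

# The proposed experiment: $100k buys 1k tracks; 100k plays each,
# at a ~1% thumb-rating rate, yields ~1M labeled reactions.
tracks = 100_000 // 100
ratings = tracks * 100_000 * 0.01
print(tracks, int(ratings))  # 1000 tracks, 1000000 ratings
```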
Consider AI-generated art (e.g. TWDNE, GPT-3 does Seinfeld, reverse captioning, Jukebox, AI Dungeon). Currently, it’s at the “heh, that’s kinda neat” stage; a median person might spend 5-30 minutes enjoying it before the novelty wears off.
(I’m about to speculate a lot, so I’ll tag it with my domain knowledge level: I’ve dabbled in ML, I can build toy models and follow papers pretty well, but I’ve never done anything serious.)
Now, suppose that, in some limited domain, AI art gets good enough that normal people will happily consume large amounts of its output. It seems like this might cause a phase change where human-labeled training data becomes cheap and plentiful (including human labels for the model’s output, a more valuable reward signal than e.g. a GAN’s discriminator); this makes better training feasible, which makes the output better, which makes more people consume and rate the output, in a virtuous cycle that probably ends with a significant chunk of that domain getting automated.
I expect that this, like all my most interesting ideas, is fundamentally flawed and will never work! I’d love to hear a Real ML Person’s take on why, if there’s an obvious reason.
Oof, tracking the instead of the is such a horrifying idea I didn’t even think of it. I guess you could do that, though! I guess. Ew. I love it.
Yeah, this is a fair point!
Let’s see—a median Fermi estimate might involve multiplying 5 things together. If it takes 7 seconds to pull up my calculator app, and that lets me do a perfectly accurate operation every second instead of a slightly-error-prone operation every two seconds, then using the calculator gives me a 100% accurate answer in 12 seconds instead of a five-times-slightly-inaccurate answer in 10 seconds.
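The timing model, made explicit (all the per-operation timings are my rough guesses):

```python
# Calculator-vs-mental-math timing for a 5-operation Fermi estimate.
app_startup = 7      # seconds to pull up the calculator app
calc_per_op = 1      # seconds per (exact) calculator operation
mental_per_op = 2    # seconds per (slightly error-prone) mental operation
ops = 5

with_calculator = app_startup + ops * calc_per_op
mental_only = ops * mental_per_op
print(with_calculator, mental_only)  # 12 10 -- calculator costs 2 extra seconds
```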
I still feel skeptical for some reason, but that’s probably just status quo bias. This seems like a reasonable tradeoff. I’ll try it for a month and see how it goes!
Ooh, you raise a good point: Caplan gives $12k as the per-cycle cost of IVF, which I failed to factor in. I will edit that in. Thank you for your data!
And you’re right that medical expenses are part of the gap: the book says the “$100k” figure for surrogacy includes medical expenses (which you’d have to pay anyway) and “miscellaneous” (which… ???).
So, if we stick with the book’s “$12k per cycle” figure, times an average of maybe 2 cycles, that gives $24k, which still leaves a $56k gap to be explained. Conceivably, medical expenses and “miscellaneous” could fill that gap? I’m sure you know better than I!