Hubbard recommends a few commercial Monte Carlo tools for risk analysis that seem very relevant: Oracle Crystal Ball, @Risk, XLSim, Risk Solver Engine, and Analytica.
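For a sense of what these tools do under the hood, here's a minimal Monte Carlo sketch in Python; the cost model, the distributions, and all parameters are invented purely for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 100_000  # number of Monte Carlo trials

# Hypothetical project-cost model with three uncertain inputs, assumed
# independent. Distributions and parameters are illustrative only.
labor_hours = rng.lognormal(mean=np.log(500), sigma=0.3, size=N)
hourly_rate = rng.normal(loc=80, scale=10, size=N)
materials = rng.triangular(left=20_000, mode=30_000, right=60_000, size=N)

total_cost = labor_hours * hourly_rate + materials

print(f"mean cost:      ${total_cost.mean():,.0f}")
print(f"90% interval:   ${np.percentile(total_cost, 5):,.0f} "
      f"to ${np.percentile(total_cost, 95):,.0f}")
print(f"P(cost > $80k): {(total_cost > 80_000).mean():.1%}")
```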
Neat write-up. I’d say that the scale elasticity of Cost is also irrelevant, since vegetarianism promotion only has a small marginal effect on scale.
This was more of a side effect of deciding to pare down my possessions than an intervention specifically aimed at buying fewer books, but I rarely buy books anymore just because I want to read them. I get books on LibGen or at the university library. In the rare event that a book turns out to be a really valuable reference, I may then buy it.
I found the links by googling “green card marriage”.
It looks like marrying specifically for US residency purposes is illegal. This report gives the impression that only a tiny fraction of people actually get prosecuted. You’ll have to convincingly lie to a consul and likely undergo some investigation (see e.g. here).
Time, legal risk, reputation. The opportunity cost is lower if you were going to marry a random/non-specific person anyway, but I’m assuming you’re asking about a sham marriage that you’re going to end later.
I forget the details, but I think the argument intentionally focuses on ancestor simulations for epistemic reasons, to preserve a similarity between the simulating and simulated universes. If you don’t assume that the basement-level universe is quite similar to our own, it’s hard to reason about its computational resources. It’s also hard to tell in what proportion a totally different civilization would simulate human civilizations, hence the focus on ancestor simulations. I’m not sure if this is a conservative assumption (giving some sort of lower bound) or just done for tractability.
ETA: See FAQs #4 and #11 here.
I’d be surprised if fake marriages turned out to be the most cost-effective way to help poor people immigrate to the US, even if you want to focus on refugees specifically.
Huh, thanks. Not sure how I managed to misremember so specifically. Edited post.
The hack is due to Anders Sandberg, though with modafinil tablets [ETA: this last part is false, see Kaj's reply]. Works wonderfully (whether with modafinil or caffeine).
Do you have any sources that quantify the risk?
Oops, shouldn’t have assumed you’re talking about genetics :)
Still, if you’re talking about character in a causally neutral sense, it seems that you need to posit character traits that hardly change within a person’s lifetime. Here I admit that the evidence for rapid institutional effects is weaker than the evidence for institutional effects in general.
(Re: Hong Kong and Singapore: no, I do mean those cities specifically. Their economic outcomes differ strikingly from those of culturally and genetically similar neighbors because of their unique histories.)
Many Western societies have seen pretty dramatic productivity-enhancing institutional changes in the last few hundred years that aren’t explicable in terms of changes in genetic makeup. In light of this, your view seems to rely on believing that most currently remaining institutional variation is genetic, whereas this wasn’t the case ~300 years ago. Do you think this is the case?
Hong Kong, Singapore, and South Korea seem to make a pretty strong case for a huge independent effect of institutions.
Ignore the last sentence and take the rest for what it’s worth :) I did the equivalent of somewhat tactlessly throwing up my hands after concluding that the exchange stopped being productive (for me at least, if not for spectators) a while ago.
The fact that the assumptions of an incredibly useful theory of rational decision-making turn out not to be perfectly satisfied does not imply that we get to ignore the theory. If we want to do seemingly crazy things like diversifying charitable donations, we need an actual positive reason, such as the prescriptions of a better model of decision-making that can handle the complications. Just going with our intuition that we should “diversify” to “reduce risk”, when we know that those intuitions are influenced by well-documented cognitive biases, is crazy.
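To make the "positive reason" point concrete, here's a toy calculation (the charities and all numbers are invented) showing why expected-value maximization with utility linear in outcomes leaves no room for diversification:

```python
# Toy model: a fixed budget split between two charities whose impact is
# linear in dollars donated. All numbers are invented for illustration.
budget = 1000.0
ev_per_dollar_A = 1.0  # expected QALYs per dollar for charity A
ev_per_dollar_B = 1.1  # expected QALYs per dollar for charity B

for fraction_to_B in (0.0, 0.5, 1.0):
    expected_qalys = (budget * (1 - fraction_to_B) * ev_per_dollar_A
                      + budget * fraction_to_B * ev_per_dollar_B)
    print(f"fraction to B = {fraction_to_B:.0%}: "
          f"{expected_qalys:.0f} expected QALYs")

# Expected value is linear in the split, so it is maximized at a corner:
# the whole budget goes to whichever charity has the higher EV per dollar.
# Any "diversified" split is strictly dominated.
```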
This has been incredibly unproductive I can’t believe I’m still talking to you kthxbai
You don’t imagine that when someone, say, characterizes a financial asset as having an expected return of 5% with 20% volatility, these probabilities are precise, do you?
Those are not even probabilities at all.
There are two very different sorts of scenarios with something like “imprecise probabilities”.
The first sort of case involves uncertainty about a probability-like parameter of a physical system such as a biased coin. In a sense, you’re uncertain about “the probability that the coin will come up heads” because you have uncertainty about the bias parameter. But when you consider your subjective credence about the event “the next toss will come up heads”, and integrate the conditional probabilities over the range of parameter values, what you end up with is a constant. No uncertainty.
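A quick sketch of that integration step, using a Beta prior over the bias as an arbitrary illustrative choice:

```python
import numpy as np

rng = np.random.default_rng(0)

# Uncertainty about the coin's bias p, represented here by a Beta(2, 5)
# prior (an arbitrary choice for illustration).
a, b = 2.0, 5.0
p_samples = rng.beta(a, b, size=1_000_000)

# Credence that the next toss lands heads: integrating P(heads | p) = p
# over the uncertainty about p gives a single constant, E[p].
print("Monte Carlo estimate:   ", p_samples.mean())
print("Analytic value a/(a+b): ", a / (a + b))
```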
In the second sort of case, your subjective credences themselves are uncertain. On the usual definition of subjective probabilities in terms of betting odds this is nonsense, but maybe it makes some sense for boundedly introspective humans. Approximately none of the decision theory corpus applies to this case, because it all assumes that credences and expected values are constants known to the agent. Some decision rules for imprecise credence have been proposed, but my understanding is that they’re all problematic (this paper surveys some of the problems). So decision theory with imprecise credence is currently unsolved.
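For concreteness, one rule of this proposed sort is Γ-maximin: represent the imprecise credence as a set of probability values and pick the act whose worst-case expected utility over that set is highest. A toy sketch (the credal set and payoffs are invented):

```python
# Toy Gamma-maximin: credence in some event E is imprecise, represented
# as a set of candidate probabilities. Acts are scored by their worst-case
# expected utility over that set. All numbers are invented.
credal_set = [0.2, 0.3, 0.4]  # candidate values of P(E)

# Each act maps to (payoff if E occurs, payoff if E does not occur).
acts = {"safe": (50, 50), "risky": (200, -20)}

for name, (u_if_e, u_if_not_e) in acts.items():
    worst_case_eu = min(p * u_if_e + (1 - p) * u_if_not_e
                        for p in credal_set)
    print(f"{name}: worst-case expected utility = {worst_case_eu}")

# Gamma-maximin picks the act with the highest worst-case expected utility
# ("safe" here); it ignores everything about the credal set except the
# minimum, which is one source of the problems alluded to above.
```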
Examples of the first sort are what gives talk about “uncertain probabilities” its air of reasonableness, but only the second case might justify deviations from expected utility maximization. I shall have to write a post about the distinction.
Let me try to refocus a bit. You seem to want to describe a situation where I have uncertainty about probabilities, and hence uncertainty about expected values. If this is not so, your points are plainly inconsistent with expected utility maximization, assuming that your utility is roughly linear in QALYs in the range you can affect. If you are appealing to imprecise probability, what I alluded to by “I have no idea” is that there are no generally accepted theories (certainly not “plenty”) for decision making with imprecise credence. It is very misleading to invoke diversification, risk premia, etc. as analogous or applicable to this discussion. None of these concepts make any essential use of imprecise probability in the way your example does.
Adding imprecise probability (a 1.1% credence that I’m not sure of) takes us a bit afield, I think. Imprecise probability doesn’t have an established decision theory in the way probability has expected utility theory. But that aside, assuming that I’m calibrated in the 1% range and good at introspection, and my introspection really tells me that my expected QALY/$ for charity B is 1.1, I’ll donate to charity B. I don’t know how else to make this decision. I’m curious how much meta-confidence/precision you need for that 1.1% chance before you’d switch from A to B (or go “all in” on B). If not even full precision (e.g. the outcome being tied to an RNG) is enough for you, then you’re maximizing something other than expected QALYs.
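To spell out the arithmetic behind that choice (the 100 QALY/$ payoff for charity B is my illustrative assumption, chosen so that a 1.1% chance yields an expected 1.1 QALY/$):

```python
# Illustrative numbers: charity A delivers 1 QALY per dollar for sure;
# charity B pays off with probability 0.011 and delivers nothing otherwise.
# The 100 QALY/$ payoff for B is an assumption chosen so that
# 0.011 * 100 = 1.1 expected QALYs per dollar.
p_success_B = 0.011
payoff_B = 100.0  # QALYs per dollar if charity B succeeds (assumed)

ev_A = 1.0
ev_B = p_success_B * payoff_B

print(f"expected QALY/$, charity A: {ev_A:.2f}")
print(f"expected QALY/$, charity B: {ev_B:.2f}")
# Maximizing expected QALYs means picking B; consistently preferring A
# here means maximizing something other than expected QALYs.
```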
(I agree with Gelman that risk-aversion estimates from undergraduates don’t make any financial sense. Neither do estimates of their time preference. That just means that people compartmentalize or outsource financial decisions where the stakes are actually high.)
This strikes me as compatible with what gjm said in the sentence before the one you quoted. Some chicken-buying decisions will make no difference, and others will have a disproportionate effect by hitting some threshold. In aggregate, a supermarket’s chicken purchases have to equal its chicken sales (plus inventory breakage), so a pretty good guess for the expected impact of buying one less chicken is that one less chicken is produced. Richard Chappell discusses a very simple model here. I haven’t seen believable models where the long-run effect deviates substantially from one-for-one.
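Here's a minimal simulation of that threshold story (the batch size and demand distribution are made up): your marginal chicken usually changes nothing, but occasionally it tips the store's order up by a whole batch, and the expected effect comes out to about one chicken.

```python
import numpy as np

rng = np.random.default_rng(0)
k = 25  # assumed batch size: the store orders chickens in lots of 25
N = 1_000_000  # simulated weeks

# Baseline weekly demand at one store, in chickens (distribution made up).
demand = rng.poisson(lam=400, size=N)

# The store stocks the smallest multiple of k that covers demand.
orders_without_you = np.ceil(demand / k) * k
orders_with_you = np.ceil((demand + 1) / k) * k

# Your chicken usually changes nothing, but when demand sits exactly at a
# batch boundary it triggers an extra batch of k. In expectation the two
# effects net out to roughly one chicken per chicken bought.
print("expected marginal chickens:",
      (orders_with_you - orders_without_you).mean())
```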