Perhaps I should point out one particular way in which I could be badly wrong: presumably aid tends to go to the poorest African countries, whose GDP may be way below the average, so 1% of GDP might turn out to be a substantial amount for the countries it actually goes to. Perhaps Moyo’s book has the relevant numbers?
Eliezer, it’s clear that Africa is in trouble. How compelling an argument does Moyo’s book offer for believing that Africa is in trouble because it needs less aid, rather than because it needs more?
In this particular context it seems a bit strange to describe Moyo as an African economist. She lives in London and so far as I can tell has lived in the West for most of her adult life. In particular, the two most obvious reasons one might have for trusting an African economist more on this issue—that her self-interest is more closely aligned with what’s best for Africa than with what’s best for the West, and that she’s constantly exposed to the economic realities of life in poor African countries—are less applicable than they would be to someone who actually lives in Africa.
Oh, and … $1 trillion. Sounds like a lot. That’s over the last 50 years, though. $20bn/year. Still sounds like a lot. The population of Africa is a little less than a billion. $20/year per person. Hmm. It’s not quite so obvious that that would be enough to have a major distorting effect. Total GDP of Africa is something like $2T/year, which would make foreign aid to Africa something like 1% of its GDP. Again, would we really expect much distortion from that? Or, for that matter: If it’s possible for aid to help Africa, would we expect aid at that level to have done much good?
(These are all without-even-an-envelope calculations, and could be badly wrong.)
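For concreteness, here is that envelope arithmetic as a quick script (the inputs are just the rough figures above, not real data):

```python
# Back-of-envelope check on the aid figures quoted above.
# All inputs are the rough numbers from the discussion, not precise data.

total_aid_usd = 1e12      # ~$1 trillion of aid, cumulative
years = 50                # ...over roughly the last 50 years
population = 1e9          # Africa's population: a little under a billion
gdp_usd_per_year = 2e12   # total African GDP: something like $2T/year

aid_per_year = total_aid_usd / years             # $20bn/year
aid_per_person = aid_per_year / population       # $20/person/year
aid_gdp_share = aid_per_year / gdp_usd_per_year  # ~1% of GDP

print(f"Aid per year: ${aid_per_year / 1e9:.0f}bn")
print(f"Aid per person per year: ${aid_per_person:.0f}")
print(f"Aid as a share of GDP: {aid_gdp_share:.0%}")
```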
I’m not sure whether “it” in Rasmus’s second paragraph is referring specifically to the fact that you can submit old predictions, or to the idea of the site as a whole; but the possibility—nay, the certainty—of considerable selection bias makes this (to me) not at all like a database of all pundit predictions, but rather another form of entertainment.
Don’t misunderstand me; I think it’s an excellent form of entertainment, and entertainment with an important serious side. But even if someone is represented by a dozen predictions on Wrong Tomorrow, all of them (correctly) marked WRONG, that could just mean that it’s only the wackiest 1% of their predictions that have been submitted. Which would show that they’re far from infallible, but that’s hardly news.
Quite possibly this is the best one can do without a large paid staff (which introduces troubles aplenty of its own); it’s just not feasible to track every single testable prediction made by any pundit, and if that were done and noticed, the likely result is that pundits would start taking more care to make their predictions untestable.
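For what it’s worth, a toy simulation of that selection effect (the accuracy distribution and the “wackiest 1%” cutoff are invented purely for illustration):

```python
import random

random.seed(0)

# A pundit makes 1000 predictions, each with some chance of coming true.
chances = [random.uniform(0.05, 0.95) for _ in range(1000)]
outcomes = [random.random() < p for p in chances]  # True = prediction came true

# Suppose readers submit only the wackiest ~1%: the longest long-shots.
wackiest = sorted(range(len(chances)), key=lambda i: chances[i])[:10]
submitted = [outcomes[i] for i in wackiest]

print(f"Pundit's overall accuracy: {sum(outcomes) / len(outcomes):.0%}")  # ~50%
print(f"Accuracy among submitted predictions: "
      f"{sum(submitted) / len(submitted):.0%}")  # near 0%
```

A page full of WRONGs tells you about what got submitted, not about the pundit’s base rate.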
vroman, see the post on Less Wrong about least-convenient possible worlds. And the analogue in Doug’s scenario of the existence of (Pascal’s) God isn’t the reality of the lottery he proposes—he’s just asking you to accept that for the sake of argument—but your winning the lottery.
Carl, it clearly isn’t based only on that since Eliezer says “You see it all the time in discussion of cryonics”.
Eliezer, it seems to me that you may be being unfair to those who respond “Isn’t that a form of Pascal’s wager?”. In an exchange of the form
Cryonics Advocate: “The payoff could be a thousand extra years of life or more!”
Cryonics Skeptic: “Isn’t that a form of Pascal’s wager?”
I observe that CA has made handwavy claims about the size of the payoff, hasn’t said anything about how the utility of a long life depends on its length (there could well be diminishing returns), hasn’t offered anything at all like a probability calculation, and has entirely neglected the downsides (I think Yvain makes a decent case that they aren’t obviously dominated by the upside). So, here as in the original Pascal’s wager, we have someone arguing “put a substantial chunk of your resources into X, which has uncertain future payoff Y” on the basis that Y is obviously very large, while apparently ignoring the three key subtleties: how to get from Y to the utility-if-it-works, what other low-probability but high-utility-delta possibilities there are, and just what the probability-that-it-works is. And, here as with the original wager, if the argument does work then its consequences are counterintuitive to many people (presumably including CS).
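To make the gap concrete, here is the sort of calculation CA hasn’t offered, as a minimal sketch (every number is invented for illustration, and log utility is just one common way to model diminishing returns):

```python
import math

# Toy expected-utility comparison for cryonics. All numbers are
# placeholders for illustration, not estimates anyone endorses.

p_works = 0.05      # probability the whole thing works
bonus_years = 1000  # the advertised payoff if it does
signup_cost = 0.5   # utility cost of fees, hassle, downsides

def utility(extra_years):
    # Diminishing returns: utility grows like log of extra lifespan.
    return math.log1p(extra_years)

eu_sign_up = (p_works * utility(bonus_years)
              + (1 - p_works) * utility(0)
              - signup_cost)
eu_decline = utility(0)

print(f"EU(sign up) = {eu_sign_up:+.3f}")  # about -0.15 with these numbers
print(f"EU(decline) = {eu_decline:+.3f}")  # 0
```

With these (made-up) numbers the wager loses; with others it wins. That is exactly why the three subtleties matter.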
That wouldn’t justify saying “That is just Pascal’s wager, and I’m not going to listen to you any more.” But what CS actually says is “Isn’t that a form of Pascal’s wager?”. It doesn’t seem to me an unreasonable question, and it gives CA an opportunity to explain why s/he thinks the utility really is very large, the probability not very small, etc.
I think the same goes for your infinite-physics argument.
I don’t see any grounds for assuming (or even thinking it likely) that someone who says “Isn’t that just a form of Pascal’s wager?” has made the bizarrely broken argument you suggest that they have. If they’ve made a mistake, it’s in misunderstanding (or failing to listen to, or not guessing correctly) just what the person they’re talking to is arguing.
Therefore: I think you’ve committed a Pascal’s Wager Fallacy Fallacy Fallacy.
Patrick (orthonormal), I’m pretty sure “Earth” is right. If you’re in the Huygens system already, you wouldn’t talk about “the Huygens starline”. And the key point of what they’re going to do is to keep the Superhappies from reaching Earth; cutting off the Earth/Huygens starline irrevocably is what really matters, and it’s just too bad that they can’t do it without destroying Huygens. (Well, maybe keeping the Superhappies from finding out any more about the human race is important too.)
Eliezer, did you really mean to have the “multiplication factor” go from 1.5 to 1.2 rather than to something bigger than 1.5?
Beerholm --> Beerbohm, surely? (On general principles; I am not familiar with the particular bit of verse Eliezer quoted.)
Wei Dai, singleton-to-competition is perfectly possible, if the singleton decides it would like company.
Reasoning by analogy is at the heart of what has been called “the outside view” as opposed to “the inside view” (in the context of, e.g., trying to work out how long some task is going to take). Eliezer is on record as being an advocate of the outside view. The key question, I think, is how deep are the similarities you’re appealing to. Unfortunately, that’s often controversial.
(So: I agree with Robin’s first comment here.)
I’d suggest:
1. Existing contributors keep posting at whatever frequency they’re happy with (which hopefully would be above zero, but that’s up to them).
2. Also, slowly scour the web for material that wouldn’t be out of place on OB. When you find some, ask the author two or three questions. (a) May we re-post this on OB? (b) Would you like to write an article for OB? (c) [if appropriate] May we re-post some of your other existing material on OB?
3. If the posting rate drops greatly from what it is now, have more open threads. (One a week, on a regular schedule?) Be (cautiously) on the lookout for opportunities to say “Would you like to turn that into an OB post?”.
I’d strongly not suggest:
- Anything that would broaden the focus of OB much. (It already strays a little further from its notional core topic than would be my ideal.)
- Voting.
- Continuing Robin Hanson’s quirk of deleting as many words from the title as is possible without rendering it completely unintelligible. (Or, sometimes, one more than that.) :-)
Those subjunctives in 1-3 of course assume that there are people willing to do that much work. I don’t know whether there are, not least because I haven’t seriously tried to estimate how much work it is.
Richard, I wasn’t suggesting that there’s anything wrong with your running a simulation, I just thought it was amusing in this particular context.
Anyone who evaluates the performance of an algorithm by testing it with random data (e.g., simulating these expert-combining algorithms with randomly-erring “experts”) is ipso facto executing a randomized algorithm...
So, the randomized algorithm isn’t really better than the unrandomized one, because getting a bad result from the unrandomized one only happens when the environment maliciously hands you a problem whose features match up just wrong with the non-random choices you make. All you need to do, then, is make those choices in a way that’s tremendously unlikely to match up just wrong with anything the environment hands you, because your choices lack the sorts of patterns the environment might inflict on you.
Except that the definition of “random”, in practice, is something very like “generally lacking the sorts of patterns that the environment might inflict on you”. When people implement “randomized” algorithms, they don’t generally do it by introducing some quantum noise source into their system (unless there’s a real adversary, as in cryptography), they do it with a pseudorandom number generator, which precisely is a deterministic thing designed to produce output that lacks the kinds of patterns we find in the environment.
So it doesn’t seem to me that you’ve offered much argument here against “randomizing” algorithms as generally practised; that is, having them make choices in a way that we confidently expect not to match up pessimally with what the environment throws at us.
Or, less verbosely:
Indeed randomness can improve the worst-case scenario, if the worst-case environment is allowed to exploit “deterministic” moves but not “random” ones. What “random” means, in practice, is: the sort of thing that typical environments are not able to exploit. This is not cheating.
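A concrete illustration of that summary (quicksort pivots are my example, not anything from the original post): the “deterministic” move is exploitable only by inputs patterned against it, and the “random” move is just one seeded from a PRNG, so that no ordinary input can be patterned against it.

```python
import random

def quicksort(xs, pivot_index):
    # Minimal quicksort; pivot_index(xs) picks which element to pivot on.
    if len(xs) <= 1:
        return xs
    pivot = xs[pivot_index(xs)]
    return (quicksort([x for x in xs if x < pivot], pivot_index)
            + [x for x in xs if x == pivot]
            + quicksort([x for x in xs if x > pivot], pivot_index))

def first_element(xs):
    return 0  # deterministic: a sorted input "matches up just wrong" with this

rng = random.Random(12345)  # a PRNG, not quantum noise

def random_element(xs):
    return rng.randrange(len(xs))  # no pattern for an input to match against

data = list(range(2000))  # already sorted: the adversarial case for first_element
# quicksort(data, first_element) would recurse ~2000 deep (O(n^2) behaviour,
# past Python's default recursion limit); the "randomized" version is fine:
print(quicksort(data, random_element) == sorted(data))  # True
```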
nazgulnarsil, just because you wouldn’t have to call it a belief doesn’t mean it wouldn’t be one; I believe in the Atlantic Ocean even though I wouldn’t usually say so in those words.
It was rather tiresome the way that Lanier answered so many things with (I paraphrase here) “ha ha, you guys are so hilariously, stupidly naive” without actually offering any justification. (Apparently because the idea that you should have justification for your beliefs, or that truth is what matters, is so terribly terribly out of date.) And his central argument, if you can call it that, seems to amount to “it’s pragmatically better to reject strong AI, because I think people who have believed in it have written bad software and are likely to continue doing so”. Lanier shows many signs of being a smart guy, but ugh.
Vladimir, if I understand both you and Eliezer correctly you’re saying that Eliezer is saying not “intelligence is reality-steering ability” but “intelligence is reality-steering ability modulo available resources”. That makes good sense, but that definition is only usable in so far as you have some separate way of estimating an agent’s available resources, and comparing the utility of what might be very different sets of available resources. (Compare a nascent superintelligent AI, with no ability to influence the world directly other than by communicating with people, with someone carrying a whole lot of powerful weapons. Who has the better available resources? Depends on context—and on the intelligence of the two.) Eliezer, I think, is proposing a way of evaluating the “intelligence” of an agent about which we know very little, including (perhaps) very little about what resources it has.
Put differently: I think Eliezer’s given a definition of “intelligence” that could equally be given as a definition of “power”, and I suspect that in practice using it to evaluate intelligence involves applying some other notion of what counts as intelligence and what counts as something else. (E.g., we’ve already decided that how much money you have, or how many nuclear warheads you have at your command, don’t count as “intelligence”.)
How do you avoid conflating intelligence with power? (Or do you, in fact, think that the two are best regarded as different facets of the same thing?) I’d have more ability to steer reality into regions I like if I were cleverer—but also if I were dramatically richer or better-connected.
PK, I thought Eliezer’s post made at least one point pretty well: If you disagree with some position held by otherwise credible people, try to understand it from their perspective by presenting it as favourably as you can. His worked example of capitalism might be helpful to people who are otherwise inclined to think that unrestrained capitalism is obviously bad and that those who advocate it do so only because they want to advance their own interests at the expense of others less fortunate.
I agree that he’s probably violating his own advice when he implies that capitalism amounts to treating “finance as … an ultimate end”.
kebko, (1) doubtless there’s something terribly dysfunctional going on; the question is whether it’s better treated by giving more aid or by giving less. (2) If the continent’s GDP might have been larger than it is, then the argument I was making applies more, not less. (Namely: the amount of foreign aid seems very small in comparison with the total size of the economy, which suggests that the amount of influence it can have had for good or ill probably isn’t all that enormous.)
Carl, I like the idea of inventing things and making them free, but it might be unattractive to the people who’d need to do (or at least fund) it because it doesn’t look like charity to, e.g., people looking at your accounts; and because unless the technologies are tightly Africa-focused they might lose a lot more in potential revenue than Africa gains in value. Also, it only works in so far as there are the necessary (human and material) resources in the poorest African countries to take advantage of the inventions.
Ian C, you either don’t know what reason is or (at least in this case) don’t know how to do it.
haig, if she’s really calling for an end to all aid to Africa then that seems to go beyond what you suggest. (Eliezer could be right that she’s keeping the message simple but really wants something more sophisticated. I am not convinced that this is the right strategy even if she’s right about the underlying facts, and I’d also have thought that in a book-length treatment of the issue she could afford to present a less-simplistic version of her case.)