Already done: JustFuckingGoogleIt
DonGeddis
Roughly on the same topic, a few years ago I read Intelligence in War by John Keegan. I was expecting a glorification of that attribute which I believed to be so important; to read story after story of how proper intelligence made the critical difference during military battles.
Much to my surprise, Keegan spends the whole book basically shooting down that theory. Instead, he has example after example where one side clearly had a dominant intelligence advantage (admittedly, here we’re talking about “information”, not strictly “rationality”), but it always wound up being a mere minor factor in the outcome of the battle.
Definitely worth checking out, if you’re at all interested in the power (or lack thereof) of being smarter, compared to all the other factors that determine the outcome of military battles.
Eliezer and Robin argue passionately for cryonics. Whatever you might think of the chances of some future civilization having the technical ability, the wealth, and the desire to revive each of us—and how that compares to the current cost of signing up—one thing that needs to be considered is whether your head will actually make it to that future time.
Ted Williams seems to be having a tough time of it.
It’s hard to discuss the subject without the debate becoming emotional, but let me just say that Roissy’s goals are to be an entertaining writer, to succeed at picking up women, and to debunk false commonsense notions of dating, through real-life experience.
He’s not trying to submit a peer-reviewed paper on evo psych to a rationality audience. To judge him on that basis is to kind of miss the point.
(Ethics is a whole separate question. But then, Stalin was an atheist too, wasn’t he?)
Rather than using a PRNG (which, as you say, requires memory), you could use a source of actual randomness (e.g. quantum decay). Then you don’t really have extra memory with the randomized algorithm, do you?
Forget about whether your sandbox is a realistic enough test. There are even questions about how much safety you’re getting from a sandbox. So, we follow your advice, and put the AI in a box in order to test it. And then it escapes anyway, during the test.
That doesn’t seem like a reliable plan.
Re: abiogenesis. You say:
we know of no mechanism under which creation of life seems even remotely plausible.
For a plausible mechanism, see this video. (It starts with anti-creationism stuff; skip to 2:45 to watch the science.)
Exactly! This is gambling, isn’t it? A small expected loss, with a tiny chance of some huge gain.
If your utility for money really is so disproportionate to the actual dollar value, then you probably ought to take a trip to Las Vegas and lay down a few long-odds bets. You’ll almost certainly lose your betting money (but you wouldn’t “notice it in [your] monthly finances”), while there’s some (small) chance that you get lucky and “change [your] month considerably”.
It’s not hypothetical! You can do this in the real world! Go to Vegas right now.
(If the plane flight is bothering you, I’m sure we could locate some similar online betting opportunities.)
I think there’s also a short-term/long-term thing going on with your examples. The drunk really wants to drink in the moment; they just don’t enjoy living with the consequences later. Similarly, in the moment, you really do want to continue reading Reddit; it’s only hours or days later that you wish you had also managed to complete that other project which was your responsibility.
I bet there’s something going on here, about maximizing integrated lifetime happiness, vs. in-the-moment decision-making, possibly with great discounts to those future selves who will suffer the negative effects.
I’m curious if Eliezer (or anyone else) has anything more to say about where the Born Probabilities come from. In that post, Eliezer wrote:
But what does the integral over squared moduli have to do with anything? On a straight reading of the data, you would always find yourself in both blobs, every time. How can you find yourself in one blob with greater probability? What are the Born probabilities, probabilities of? [...] I don’t know. It’s an open problem. Try not to go funny in the head about it.
Fair enough. But around the same time, Eliezer suggested Drescher’s book Good and Real, which I’ve been belatedly making my way through.
And then, on pages 150-151, I see that Drescher actually attempts to explain (derive?) the Born probabilities. He also says that we can “reach the same conclusion [...] by appeal to decision theory,” and references Deutsch 1999 (“Quantum Theory of Probability and Decisions”) and Wallace 2003 (“Quantum Probability and Decision Theory, Revisited”).
My problem: I still don’t get it. I loved Eliezer’s commonsense explanation of QM and MWI. I’m looking for something at the same level, just as intuitive, for the Born probabilities.
Anyone willing and able to take on that challenge?
“Dust” has been used in SF for nanotech before. And especially runaway nanotech, that is trying to disassemble everything, like a doomsday war weapon that got out of control. I recalled the paperclip maximizer too. Oh, and the Polity/Cormac SF books by Neal Asher, with Jain nodes (made by super AIs) that seem to have roughly the same objective.
Is there anything that you consider proven beyond any possibility of doubt by both empirical evidence and pure logic, and yet saying it triggers an automatic stream of rationalizations in other people?
Hitler had a number of top-level skills, and we could learn (some) positive lessons from his example(s).
Eugenics would improve the human race (genepool).
Human “racial” groups may have differing average attributes (like IQ), and these may contribute to the explanation of historical outcomes of those groups.
(Perhaps these aren’t exactly topics that Less Wrong readers (in particular) would run away from. I was attempting to answer the question by riffing off Paul Graham’s idea of taboos. What is it “not appropriate” to talk about in ordinary society? Politeness might trigger the rationalization response...)
rlpowell, you are incorrect. You are spouting an untested theory that is repeated as fact by those with a vested interest in avoiding the harsh light of truth.
In actual fact, there is no problem with breaking someone’s arm in an MMA fight (see Mir vs. Sylvia in the UFC, for example). It’s also close to impossible to break someone’s neck (deliberately), despite what you may see in movies.
The “we’re too dangerous to fight” line is an easy meme to propagate. But let me ask you this: suppose, hypothetically, that your theory (“maximum damage” masters are “useless in MMA fights”) were false. How would you ever know? Assuming that someone did not yet have a belief about that proposition, what kind of evidence are you actually aware of bearing on whether the statement is true or false?
I happened to have a young child about to enter elementary school when I read that, and it crystallized my concern about rote memorization. I forced many fellow parents to read the essay as well.
I realize you mostly care about #1, but just for more data: for #2 I’d probably pick the Quantum Physics sequence, although that is a large number of posts, and the effect is hard to summarize in a few pages.
For #3 I liked (within evolution) that we are adaptation-executers, not fitness-maximizers.
I agree with Doug S. What most people imagine, when they want to “try being female for a while”, is keeping their same mind (or perhaps they believe in a soul) while just trying out different clothing. Basically, be in The Matrix, but just get instantiated as the Woman in the Red Dress for a week. Or maybe more like the movie Strange Days: a technology that’s like TV (but better!), a kind of virtual reality. Like watching a movie, but using all your senses, and really getting immersed in it.
I don’t think most men imagine actually thinking like a woman’s brain thinks. As you say, that wouldn’t really be them any longer.
@ John: can you really not see the difference between “this is guaranteed to succeed”, vs. “this has only a tiny likelihood of failure”? Those aren’t the same statements.
“If you play the game this way”—but why would anyone want to play a game the way you’re describing? Why is that an interesting game to play, an interesting way to compare algorithms? It’s not about worst case in the real world, it’s not about average case in the real world. It’s about performance on a scenario that never occurs. Why judge algorithms on that basis?
As for predicting the random bits … Look, you can do whatever you want inside your algorithm. Your queries on the input bits are like sensors into an environment. Why can’t I place the bits after you ask for them? And then just move the 1 bits away from wherever you happened to decide to ask?
The point is that you decided on a scenario that has zero relevance to the real world, and then did some math about that scenario, and thought that you learned something about the algorithms which is useful when applying them in the real world.
But you didn’t. Your math is irrelevant to how these things will perform in the real world. Because your scenario has nothing to do with any actual scenario that we see in deployment.
(Just as an example: you still haven’t acknowledged the difference between real random sources—like a quantum counter—vs. PRNGs—which are actually deterministic! Yet if I presented you with a “randomized algorithm” for the n-bit problem, which actually used a PRNG, I suspect you’d say “great! good job! good complexity”. Even though the actual real algorithm is deterministic, and goes against everything you’ve been ostensibly arguing during this whole thread. You need to understand that the real key is: expected (anti-)correlations between the deterministic choices of the algorithm, and the input data. PRNGs are sufficient to drop the expected (anti-)correlations low enough for us to be happy.)
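The PRNG point can be made concrete with a small sketch (the function name and setup are mine, purely for illustration): a “randomized” algorithm whose query order comes from a seeded PRNG is a fully deterministic function of that seed.

```python
import random

def random_query_order(n, rng):
    # The probe order a PRNG-based "randomized" algorithm would use.
    order = list(range(n))
    rng.shuffle(order)
    return order

# Same seed, byte-for-byte identical behavior: the whole algorithm,
# PRNG included, is deterministic.
run1 = random_query_order(16, random.Random(42))
run2 = random_query_order(16, random.Random(42))
print(run1 == run2)  # True
```

Seeding from OS entropy on each run is what actually drives down the expected correlation with any fixed input; the algorithm itself remains a deterministic procedure.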
@ Will: Yes, you’re right. You can make a randomized algorithm that has the same worst-case performance as the deterministic one. (It may have slightly impaired average case performance compared to the algorithm you folks had been discussing previously, but that’s a tradeoff one can make.) My only point is that concluding that the randomized one is necessarily better, is far too strong a conclusion (given the evidence that has been presented in this thread so far).
But sure, you are correct that adding a random search is a cheap way to have good confidence that your algorithm isn’t accidentally negatively correlated with the inputs. So if you’re going to reuse the algorithm in a lot of contexts, with lots of different input distributions, then randomization can help you achieve average performance more often than (some kinds of) determinism, which might occasionally have the bad luck to settle into worst-case performance (instead of average) for some of those distributions.
But that’s not the same as saying that it has better worst-case complexity. (It’s actually saying that the randomized one has better average case complexity, for the distributions you’re concerned about.)
To look at it one more time … Scott originally said: “Suppose you’re given an n-bit string, and you’re promised that exactly n/4 of the bits are 1, and they’re either all in the left half of the string or all in the right half.”
So we have a whole set of deterministic algorithms for solving the problem over here, and a whole set of randomized algorithms for solving the same problem. Take the best deterministic algorithm, and the best randomized algorithm.
Some people want to claim that the best randomized algorithm is “provably better”. Really? Better in what way?
Is it better in the worst case? No: in the very worst case, any algorithm (randomized or not) is going to need to look at n/4+1 bits to get the correct answer. Worse still, the randomized algorithms people were suggesting, with low average complexity, will—in the very worst case—need to look at more than n/4+1 bits, simply because 3n/4 of the bits are 0 and the algorithm might get very, very unlucky.
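Here is a minimal sketch of that unlucky case (the solver and setup are my own, not from the thread): a naive randomized solver that samples positions with replacement has good expected performance but no worst-case bound, and a short search over PRNG seeds exhibits a run that exceeds the deterministic n/4+1 bound.

```python
import random

def solve_by_sampling(bits, n, rng):
    """Naive randomized solver: sample positions uniformly *with*
    replacement until a 1 turns up, then answer by which half it's in.
    Expected about 4 queries, but no worst-case bound at all."""
    queries = 0
    while True:
        queries += 1
        i = rng.randrange(n)
        if bits[i] == 1:
            return ("left" if i < n // 2 else "right"), queries

n = 16
bits = [0] * n
for p in range(n // 2, n // 2 + n // 4):  # the n/4 ones, all in the right half
    bits[p] = 1

# Hunt for an unlucky run: more than n/4 + 1 = 5 queries. A single run
# misses 5 times in a row with probability (3/4)^5, about 24%, so a
# short seed search finds one almost immediately.
unlucky = None
for seed in range(10_000):
    answer, q = solve_by_sampling(bits, n, random.Random(seed))
    if q > n // 4 + 1:
        unlucky = (seed, q)
        break
print(unlucky is not None)  # True
```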
OK, so randomized algorithms are clearly not better in the worst case. What about the average case? To begin with, nobody here has done any average case analysis. But I challenge any of you to prove that every deterministic algorithm on this problem is necessarily worse, on average, than some (one?) randomized algorithm. I don’t believe that is the case.
So what do we have left? You had to invent a bizarre scenario, supposing that the environment is “an adversarial superintelligence who can perfectly read all of your mind except bits designated ‘random’” (as Eliezer says), in order to find a situation where the randomized algorithm is provably better. OK, that proof works, but why is that scenario at all interesting to any real-world application? The real world is never actually in that situation, so it’s highly misleading to use it as a basis for concluding that randomized algorithms are “provably better”.
No, what you need to do is argue that the pattern of queries used by a (or any?) deterministic algorithm, is more likely to be anti-correlated with where the 1 bits are in the environment’s inputs, than the pattern used by the randomized algorithm. In other words, it seems you have some priors on the environment, that the inputs are not uniformly distributed, nor chosen with any reasonable distribution, but are in fact negatively correlated with the deterministic algorithm’s choices. And the conclusion, then, would be that the average case performance of the deterministic algorithm is actually worse than the average computed assuming a uniform distribution of inputs.
Now, this does happen sometimes. If you implement a bubble sort, it’s not unreasonable to guess that the algorithm might be given a list sorted in reverse order, much more often than picking from a random distribution would suggest.
And similarly, if the n-bits algorithm starts looking at bit #1, then #2, etc. … well, it isn’t at all unreasonable to suppose that around half the inputs will have all the 1 bits in the right half of the string, so the naive algorithm will be forced to exhibit worst-case performance (n/4+1 bits examined) far more often than perhaps necessary.
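That anti-correlation effect can be sketched concretely (my own illustration; the function names and the n = 64 setup are assumptions): on an input whose 1 bits sit exactly where a front-to-back scan looks last, the fixed order hits its worst case every time, while a random permutation of the same queries keeps the identical n/4+1 worst-case cap but averages about 2 queries.

```python
import random

def scan_left_half(bits, n, order):
    """Probe left-half positions in the given order. The zero-count cap
    preserves the n/4 + 1 worst case regardless of the order used."""
    zeros = 0
    for q, i in enumerate(order, start=1):
        if bits[i] == 1:
            return "left", q          # a 1 in the left half settles it
        zeros += 1
        if zeros == n // 4 + 1:
            return "right", q         # the left half holds only n/4 zeros
    return "right", len(order)        # unreachable for valid inputs

n = 64
bits = [0] * n
for p in range(n // 4, n // 2):  # adversarial left-case input: the ones sit
    bits[p] = 1                  # exactly where a front-to-back scan looks last

# Fixed front-to-back order: forced into the worst case on this input.
_, q_naive = scan_left_half(bits, n, list(range(n // 2)))
print(q_naive)  # 17, i.e. n/4 + 1

# Random order: expected about (n/2 + 1) / (n/4 + 1) ≈ 2 queries,
# no matter where in the left half the adversary puts the ones.
rng = random.Random(0)
trials = 2000
avg = sum(scan_left_half(bits, n, rng.sample(range(n // 2), n // 2))[1]
          for _ in range(trials)) / trials
print(round(avg, 1))
```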
But this is an argument about average case performance of a particular deterministic algorithm. Especially given some insight into what inputs the environment is likely to provide.
This has not been an argument about worst case performance of all deterministic algorithms vs. (all? any? one?) randomized algorithm.
Which is what other commenters have been incorrectly asserting from the beginning: that you can “add power” to an algorithm by “adding randomness”.
Maybe you can, maybe you can’t. (I happen to think it highly unlikely.) But they sure haven’t shown it with these examples.
@ John, @ Scott: You’re still doing something odd here. As has been mentioned earlier in the comments, you’ve imagined a mind-reading superintelligence … except that it doesn’t get to see the internal random string.
Look, this should be pretty simple. The phrase “worst case” has a pretty clear layman’s meaning, and there’s no reason we need to depart from it.
You’re going to get your string of n bits. You need to write an algorithm to find the 1s. If your algorithm ever gives the wrong answer, we’re going to shoot you in the head with a gun and you die. I can write a deterministic algorithm that will do this in at most n/4+1 steps. So we’ll run it on a computer that will execute at most n/4+1 queries of the input string, and otherwise just halt (with some fixed answer). We can run this trillions of times, and I’m never getting shot in the head.
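The deterministic strategy can be sketched and exhaustively verified for a small n (this is my own illustration; the function names and the n = 16 choice are assumptions, not from the thread): probe left-half positions in a fixed order, answering “left” on the first 1 and “right” after n/4 + 1 zeros, since in the “left” case the left half contains only n/4 zeros.

```python
from itertools import combinations

def solve_deterministic(bits, n):
    """Probe left-half positions front to back: a 1 means "left";
    n/4 + 1 zeros rule the left half out entirely."""
    queries = 0
    for i in range(n // 2):
        queries += 1
        if bits[i] == 1:
            return "left", queries
        if queries == n // 4 + 1:
            return "right", queries
    return "right", queries  # unreachable for valid inputs

def all_valid_inputs(n):
    # Every promised input for small n: n/4 ones, all within one half.
    for half_start in (0, n // 2):
        for ones in combinations(range(half_start, half_start + n // 2), n // 4):
            bits = [0] * n
            for p in ones:
                bits[p] = 1
            yield bits, ("left" if half_start == 0 else "right")

n = 16
worst = 0
for bits, answer in all_valid_inputs(n):
    guess, q = solve_deterministic(bits, n)
    assert guess == answer       # never wrong, on any valid input
    worst = max(worst, q)
print(worst)  # 5, i.e. n/4 + 1: safe to run on the capped computer
```

Running it over every valid input confirms the point: the query count never exceeds n/4 + 1, so the algorithm survives the capped computer unconditionally.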
Now, you have a proposal. You need one additional thing: a source of random bits, as an additional input to your new algorithm. Fine, granted. Now we’re going to point the gun at your head, and run your algorithm trillions of times (against random inputs). I was only able to write a deterministic algorithm; you have the ability to write a randomized algorithm. Apparently you think this gives you more power.
Now then, the important question: are you willing to run your new algorithm on a special computer that halts after fewer than n/4+1 queries of the input string? Do you have confidence that, in the worst case, your algorithm will never need more than, say, n/4 queries?
No? Then stop making false comparisons between the deterministic and the randomized versions.
But the “inside view” bias is not amenable to being repaired just by being aware of the bias. In other words, yes, the suggestion is that the direct arguments are optimistically biased. But no, that doesn’t mean that anybody expects to be able to identify specific flaws in the direct arguments.
As to what those flaws are … generally, they occur by failing to even imagine some event, which is in fact possible. So your question to identify the flaws is basically the same as, “what possible relevant events have you not yet thought of?”
Tough question to answer...