In case that wasn’t a rhetorical question, you almost certainly did: your Introduction to Bayesian Reasoning is the fourth Google hit for “Bayesian”, the third Google hit for “Bayes”, and has a pagerank of 5, the same as the Cryonics Institute’s main website.
“Would they take the next step, and try to eliminate the unbearable pain of broken hearts, when someone’s lover stops loving them?”
We already have an (admittedly limited) counterexample to this, in that many Westerners choose to seek out and do somewhat painful things (e.g., climbing Everest), even when they are perfectly capable of choosing to avoid them, and even at considerable monetary cost.
“Some ordinary young man in college suddenly decides that everyone around them is staring at them because they’re part of the conspiracy.”
I don’t think that this is at all crazy, assuming that “they” refers to you (people are staring at me because I’m part of the conspiracy), rather than everyone else (people are staring at me because everyone in the room is part of the conspiracy). Certainly it’s happened to me.
“Poetry aside, a human being isn’t the seed of a god.”
A human isn’t, but one could certainly argue that humanity is.
“But with a sufficient surplus of power, you could start doing things the eudaimonic way. Start rethinking the life experience as a road to internalizing new strengths, instead of just trying to keep people alive efficiently.”
It should be noted that this doesn’t make the phenomenon of borrowed strength go away; it just outsources it to the FAI. If anything, given the kind of perfect recall and easy access to information that an FAI would have, the ratio of cached historical information to newly created information should be much higher than that of a human. Of course, an FAI wouldn’t suffer the problem of losing the information’s deep structure like a human would, but it seems to be a fairly consistent principle that the stock of cached data grows faster than the amount of data newly generated.
The problem here, the thing that actually decreases utility, is humans taking actions without sufficient understanding of the potential consequences, in cases where “Humans seem to do very well at recognizing the need to check for global consequences by perceiving local features of an action.” (CFAI 3.2.2) fails. I wonder, out of a sense of morbid curiosity, what the record is for the most damage caused by a single human without that human ever realizing that they did anything bad.
“By now, it’s probably true that at least some people have eaten 162,329 potato chips in their lifetimes. That’s even less novelty and challenge than carving 162,329 table legs.”
Nitpick: it takes much less time and mental energy to eat a potato chip than to carve a table leg, so the total quantity of sphexishness is much smaller.
“Or, to make it somewhat less strong, as if I woke up one morning to find that banks were charging negative interest on loans?”
They already have, at least for a short while.
http://www.nytimes.com/2008/12/10/business/10markets.html
“We are currently living through a crisis that is in large part due to this lack of appreciation for emergent behavior. Not only people in general but trained economists, even Nobel laureates like Paul Krugman, lack the imagination to understand the emergent behavior of free monetary systems.”
“Emergence”, in this instance, is an empty buzzword; see http://lesswrong.com/lw/iv/the_futility_of_emergence/. “Imagination” also seems likely to be an empty buzzword, in the sense of http://lesswrong.com/lw/jb/applause_lights/.
“precisely because the emergent behavior of the market is more powerful, more intelligent, in solving the problem of resource allocation than any committee.”
Markets do not allocate resources anywhere near optimally, and sometimes they do even worse than committees of bureaucrats; the bureaucrats, for instance, may increase utility by allocating more resources to poor people on grounds of higher marginal utility per dollar per person.
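To make the marginal-utility point concrete, here is a minimal arithmetic sketch (my own illustration, not from the original post), assuming the standard stand-in of logarithmic utility of wealth and made-up dollar figures:

```python
import math

# Toy illustration (assumed log utility, hypothetical wealth levels):
# moving $1,000 from a rich person to a poor person raises total utility,
# because a dollar is worth more, in utility terms, to the poor person.
rich, poor, transfer = 100_000, 10_000, 1_000

before = math.log(rich) + math.log(poor)
after = math.log(rich - transfer) + math.log(poor + transfer)

print(after - before)  # ~ +0.085, i.e. positive
```

The same sign holds for any strictly concave utility function; the point is only that “higher marginal utility per dollar” has a straightforward arithmetic reading.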
“Once you understand it then it’s not so amazing but it is very difficult to understand. Ben Bernanke doesn’t understand and Alan Greenspan didn’t understand before him.”
If you think you know more than Bernanke, then why haven’t you become rich by making bets that beat the market’s expectations?
“It can be improved on by randomisation: randomly betting on heads with p=0.5 and tails with p=0.5 is a stochastic strategy which offers improved returns—and there is no deterministic strategy which produces superior results to it.”
Eliezer has already noted that it is possible for a random strategy to be superior to a stupid deterministic strategy:
“But it is possible in theory, since you can have things that are anti-optimized. Say, the average state has utility −10, but the current state has an unusually low utility of −100. So in this case, a random jump has an expected benefit. If you happen to be standing in the middle of a lava pit, running around at random is better than staying in the same place. (Not best, but better.) A given AI algorithm can do better when randomness is injected, provided that some step of the unrandomized algorithm is doing worse than random.”
The point of the post is that a random strategy is never better than the best possible deterministic strategy. And assuming that you’re betting on real, physical coinflips, a random strategy is actually worse than the deterministic strategy of betting that the coin will come up heads if it started as heads and vice versa (see http://www.npr.org/templates/story/story.php?storyId=1697475).
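To illustrate, here is a toy simulation I am adding (not anything from the original post); the ~51% same-side figure is the physical-coin bias behind the linked NPR story:

```python
import random

# Assumed model: a tossed coin lands on the side it started on ~51% of the time.
SAME_SIDE_PROB = 0.51
N = 1_000_000

def flip(start):
    other = "T" if start == "H" else "H"
    return start if random.random() < SAME_SIDE_PROB else other

random_wins = deterministic_wins = 0
for _ in range(N):
    start = random.choice("HT")
    result = flip(start)
    # Random strategy: bet heads or tails with p = 0.5 each.
    random_wins += random.choice("HT") == result
    # Deterministic strategy: always bet on the side the coin started on.
    deterministic_wins += start == result

print(random_wins / N)         # ~0.50
print(deterministic_wins / N)  # ~0.51
```

The random bettor wins half the time no matter what; the deterministic bettor who exploits the starting side wins slightly more, which is the sense in which randomness can at best tie, and here loses to, the best deterministic strategy.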
“It is not clear this can be shown to be true. ‘Improvement’ depends on what is valued, and what the context permits. In the real world, the value of an algorithm depends on not only its abstract mathematical properties but the costs of implementing it in an environment for which we have only imperfect knowledge.”
Eliezer specifically noted this in the post:
“Sometimes it is too expensive to take advantage of all the knowledge that we could, in theory, acquire from previous tests. Moreover, a complete enumeration or interval-skipping algorithm would still end up being stupid. In this case, computer scientists often use a cheap pseudo-random algorithm, because the computational cost of using our knowledge exceeds the benefit to be gained from using it. This does not show the power of randomness, but, rather, the predictable stupidity of certain specific deterministic algorithms on that particular problem.”
“This may not sound like a profound insight, since it is true by definition. But consider—how many comic books talk about “mutation” as if it were a source of power? Mutation is random. It’s the selection part, not the mutation part, that explains the trends of evolution.”
I think this is a specific case of people treating optimization power as if it just drops out of the sky at random. This is certainly true for some individual humans (e.g., winning the lottery), but as you point out, it can’t be true for the system as a whole.
“These greedy algorithms work fine for some problems, but on other problems it has been found that greedy local algorithms get stuck in local minima.”
Er, do you mean local maxima?
“When dealing with a signal that is just below the threshold, a noiseless system won’t be able to perceive it at all. But a noisy system will pick out some of it—some of the time, the noise and the weak signal will add together in such a way that the result is strong enough for the system to react to it positively.”
In such a case, you can clearly affect the content of the signal, so why not just give it a blanket boost of ten points (or whatever), if the threshold is so high that you’re missing desirable data?
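For concreteness, here is a toy model of the effect being discussed (my own sketch; the threshold, signal level, noise, and “blanket boost” values are all made up):

```python
import random

THRESHOLD = 10.0
SIGNAL = 9.0        # weak signal, just below the threshold
NOISE_STD = 1.0
BOOST = 2.0         # the blanket boost suggested above
TRIALS = 100_000

noiseless = sum(SIGNAL > THRESHOLD for _ in range(TRIALS))
noisy = sum(SIGNAL + random.gauss(0, NOISE_STD) > THRESHOLD for _ in range(TRIALS))
boosted = sum(SIGNAL + BOOST > THRESHOLD for _ in range(TRIALS))

print(noiseless / TRIALS)  # 0.0   -- the noiseless system never sees the signal
print(noisy / TRIALS)      # ~0.16 -- noise sometimes pushes it over the threshold
print(boosted / TRIALS)    # 1.0   -- if you can modify the signal, a boost dominates
```

Which is the point of the question: adding noise only helps when you can’t simply raise the signal or lower the threshold.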
I will not be there due to a screwup by Continental Airlines, my apologies.
See everyone there.
“As far as my childhood goes I created a lot of problems for myself by trying to force myself into a mold which conflicted strongly with the way my brain was setup.”
“It’s interesting that others have shared this experience, trying to distance ourselves from, control, or delete too much of ourselves—then having to undo it. I hadn’t read of anyone else having this experience, until people started posting here.”
For some mysterious reason, my younger self was so oblivious to the world that I never experienced (to my recollection) a massive belief system rewrite. I assume that what you’re referring to is learning a whole bunch of stuff, finding out later on that it’s all wrong, and then going back and undoing it all. I don’t think I ever learned the whole bunch of stuff in the first place: e.g., when I discovered atheism, I didn’t really have an existing Christian belief structure that had to be torn down. I knew about Jesus and God and the resurrection and so forth, but I hadn’t really integrated it into my head, so when I discovered atheists, I just accepted their arguments as true and moved on.
“Would you kill babies if it was the right thing to do? If no, under what circumstances would you not do the right thing to do? If yes, how right would it have to be, for how many babies?”
I would have answered “yes”; e.g., I would have set off a bomb in Hitler’s car in 1942, even if Hitler was surrounded by babies. This doesn’t seem to be a case of corruption by unethical hardware; the benefit to me from setting off such a bomb is quite negative, as it greatly increases my chance of being tortured to death by the SS.
“But what if you were “optimistic” and only presented one side of the story, the better to fulfill that all-important goal of persuading people to your cause? Then you’ll have a much harder time persuading them away from that idea you sold them originally—you’ve nailed their feet to the floor, which makes it difficult for them to follow if you yourself take another step forward.”
Hmmm… if you don’t need people following you, could it help you (from a rationality standpoint) to lie? Suppose that you read about AI technique X. Technique X looks really impressive, but you’re still skeptical of it. If you talk about how great technique X looks, people will start to associate you with technique X, and if you try to change your mind about it, they’ll demand an explanation. But if you lie (either by omission, or directly if someone asks you about X), you can change your mind about X later on and nobody will call you on it.
NOTE: This does require telling the same lie to everyone; telling different lies to different groups of people is, as noted, too messy.
“Human beings, who are not gods, often fail to imagine all the facts they would need to distort to tell a truly plausible lie.”
One of my pet hobbies is constructing metaphors for reality which are blatantly, factually wrong, but which share enough of the deep structure of reality to be internally consistent. Suppose that you have good evidence for facts A, B, and C. If you think about A, B, and C, you can deduce facts D, E, F, and so forth. But given how tangled reality is, it’s effectively impossible to come up with a complete list of humanly-deducible facts in advance; there’s always going to be some fact, Q, which you just didn’t think of. Hence, if you map A, B, and C to A’, B’, and C’, use A’, B’, and C’ to deduce Q’, and map Q’ back to Q, the accuracy of Q is a good check on how well you understand A, B, and C.
“I am willing to admit of the theoretical possibility that someone could beat the temptation of power and then end up with no ethical choice left, except to grab the crown. But there would be a large burden of skepticism to overcome.”
If all people, including yourself, become corrupt when given power, then why shouldn’t you seize power for yourself? On average, you’d be no worse than anyone else, and probably at least somewhat better; there should be some correlation between knowing that power corrupts and not being corrupted.
I volunteer to be the Gatekeeper party. I’m reasonably confident that no human could convince me to release them; if anyone can convince me to let them out of the box, I’ll send them $20. It’s possible that I couldn’t be convinced by a transhuman AI, but I wouldn’t bet $20 on it, let alone the fate of the world.
“To accept this demand creates an awful tension in your mind, between the impossibility and the requirement to do it anyway. People will try to flee that awful tension.”
More importantly, at least in my case, that awful tension causes the brain to seize up and start panicking; do you have any suggestions on how to calm down, so one can think clearly?
“Eliezer2000 lives by the rule that you should always be ready to have your thoughts broadcast to the whole world at any time, without embarrassment.”
I can understand most of the paths you followed during your youth, but I don’t really get this. Even if it’s a good idea for Eliezer_2000 to broadcast everything, wouldn’t it be stupid for Eliezer_1200, who just discovered scientific materialism, to broadcast everything?
“If everyone were to live for others all the time, life would be like a procession of ants following each other around in a circle.”
For a more mathematical version of this, see http://www.acceleratingfuture.com/tom/?p=99.
“It does not seem a very intuitive belief (except for very religious types and Eliezer1997 was not one of those), so what was its justification?”
WARNING: Eliezer-1999 content.
http://yudkowsky.net/tmol-faq/tmol-faq.html
“Even so, if you don’t try, or don’t try hard enough, you don’t get a chance to sit down at the high-stakes table—never mind the ability ante.”
Are you referring to external exclusion of people who don’t try, or self-exclusion?
“3WC would be a terrible movie. “There’s too much dialogue and not enough sex and explosions”, they would say, and they’d be right.”
Hmmm… maybe we should put together a play version of 3WC; plays can’t have sex and explosions in any real sense, and dialogue is a much larger driver.