That comic is my source too. I just never considered taking it at face value (too many apparent contradictions). My bad for mind projection.
Does Mount Stupid refer to the observation that people tend to talk loudly and confidently about subjects they barely understand (but not about subjects they understand so poorly that they know they must understand them poorly)? In that case, yes, once you stop opining, the phenomenon (Mount Stupid) goes away.
Mount Stupid has a very different meaning to me. To me it refers to the idea that “feeling of competence” and “actual competence” are not linearly correlated. You can gain a little in actual competence and gain a LOT in terms of “feeling of competence”. This is when you’re on Mount Stupid. Then, as you learn more your feeling of competence and actual competence sort of converge.
The picture that puts “Willingness to opine” on the Y-axis is, in my opinion, a funny observation of the phenomenon that people who learn a little bit about a subject become really vocal about it. It’s just a funny way to visualize the real insight (Δ feeling of competence != Δ competence) in a way that connects with people, because we can probably all remember when we made that specific mistake (talking confidently about a subject we knew little about).
I don’t think so, because my understanding of the topic didn’t improve—I just don’t want to make a fool out of myself.
I’ve moved beyond Mount Stupid on the meta level, the level where I can now tell more accurately whether my understanding of a subject is lousy or OK. On the subject level I’m still stupid, and my reasoning, if I had to write it down, would still make my future self cringe.
The temptation to opine is still there and there is still a mountain of stupid to overcome, and being aware of this is in fact part of the solution. So for me Mount Stupid is still a useful memetic trick.
Macroeconomics. My opinion and understanding used to be based on undergrad courses and a few popular blogs. I understood much more than the “average person” about the economy (so say we all) and therefore believed that my opinion was worth listening to. My understanding is much better now but I still lack a good understanding of the fundamentals (because textbooks disagree so violently on even the most basic things). If I talk about the economy I phrase almost everything in terms of “Economist Y thinks X leads to Z because of A, B, C”. This keeps the different schools of economics from blending together in some incomprehensible mess.
QM. Still on Mount Stupid, and I know it. I have to bite my tongue not to debate Many Worlds with physics PhDs.
Evolution. Definitely on Mount Stupid. I know this because I used to think “group pressure” was a good argument until EY persuaded me otherwise. I haven’t studied evolution since, so I must be on Mount Stupid still.
Aside from being aware of the concept of Mount Stupid I have not changed my behavior all that much. If I keep studying I know I’m going to get beyond Mount Stupid eventually. The faster I study, the less time I spend on top of Mount Stupid and the less likely I am to make a fool out of myself. So that’s my strategy.
I have become much more careful about monitoring my own cognitive processes: am I saying this just to win the argument? Am I looking specifically for arguments that support my position, and if so, am I sure I’m not rationalizing? So in that respect I’ve improved a little. It’s probably the most valuable sort of introspection that typical well-educated and intelligent people lack.
One crucial point about Mount Stupid that hasn’t been mentioned here yet is that it applies every time you “level up” on a subject. Every time you level up you’re in a new valley, with a new Mount Stupid to cross. You can be an expert frequentist rationalist but a lousy Bayesian rationalist, and by learning a little about Bayesianism you can become stupider: you’re good at telling good frequentist reasoning from bad, but you can’t yet tell the difference for Bayesian reasoning (and if you don’t know that you can’t tell the difference, you’re also on Meta Mount Stupid).
[a “friendly” AI] is actually unFriendly, as Eliezer uses the term
Absolutely. I used “friendly” AI (with scare quotes) to denote that it’s not really FAI, but I don’t know if there’s a better term for it. It’s not the same as uFAI because Eliezer’s personal utopia is not likely to be valueless by my standards, whereas a generic uFAI is terrible from any human point of view (paperclip universe, etc.).
Game theory. If different groups compete in building a “friendly” AI that respects only their personal coherent extrapolated volition (their extrapolated sensible desires), then cooperation is no longer an option because the other teams have become “the enemy”. I have a value system that is substantially different from Eliezer’s. I don’t want a friendly AI that is created in some researcher’s personal image (except, of course, if it’s created based on my ideals). This means that we have to sabotage each other’s work to prevent the other researchers from getting to friendly AI first. This is because the moment somebody reaches “friendly” AI the game is over and all parties except for one lose. And if we get uFAI everybody loses.
That’s a real problem though. If different factions in friendly AI research have to destructively compete with each other, then the probability of unfriendly AI will increase. That’s really bad. From a game theory perspective all FAI researchers agree that any version of FAI is preferable to uFAI, and yet they’re working towards a future where uFAI is becoming more and more likely! Luckily, if the FAI researchers take the coherent extrapolated volition of all of humanity, the problem disappears. All FAI researchers can work towards a common goal that will fairly represent all of humanity, not some specific researcher’s version of “FAI”. It also removes the problem of different morals/values. Some people believe that we should look at total utility, other people believe we should consider only average utility. Some people believe abstract values matter, some people believe consequences of actions matter most. Here too, an AI that looks at a representative set of all human values is the solution that all people can agree on as most “fair”. Cooperation beats defection.
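To make the structure of that argument concrete, here is a toy expected-value sketch. The function and every number in it are my own illustrative assumptions, not estimates from anywhere:

```python
# Toy expected-value model of the race-vs-cooperate choice described above.
# Every number below is made up purely to illustrate the shape of the argument.

def expected_value(p_ufai, p_win, value_if_win, value_if_lose):
    # With probability p_ufai everyone gets the uFAI outcome (valued at 0 here);
    # otherwise a team either "wins" the race or lives under someone else's AI.
    return (1 - p_ufai) * (p_win * value_if_win + (1 - p_win) * value_if_lose)

# Scenario A: each team races for its own "friendly" AI and sabotages the rest.
# Assumed: haste and sabotage push up the chance of uFAI; losing the race is bad for you.
ev_race = expected_value(p_ufai=0.6, p_win=0.25, value_if_win=1.0, value_if_lose=0.3)

# Scenario B: all teams pool their work on a CEV-style target.
# Assumed: no sabotage, a lower chance of uFAI, and an outcome shared by everyone.
ev_coop = expected_value(p_ufai=0.2, p_win=1.0, value_if_win=0.8, value_if_lose=0.8)

print(ev_race, ev_coop)  # ~0.19 vs ~0.64: cooperation wins under these assumptions
```

The point isn’t the particular numbers; it’s that as long as racing raises P(uFAI) for everyone, each team does better by agreeing on a target that doesn’t privilege any one of them.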
If Luke were to attempt to create a LukeFriendlyAI, he would know he’s defecting from the game-theoretically optimal strategy and thereby increasing the probability of a world with uFAI. If Luke is aware of this and chooses to continue on that course anyway, then he has just become another uFAI researcher who actively participates in the destruction of the human species (to put it dramatically).
We can’t force all AI programmers to focus on the FAI route. We can try to raise the sanity waterline and try to explain to AI researchers that the optimal (game theoretically speaking) strategy is the one we ought to pursue because it’s most likely to lead to a fair FAI based on all of our human values. We just have to cooperate, despite differences in beliefs and moral values. CEV is the way to accomplish that because it doesn’t privilege the AI researchers who write the code.
If you’re certain that belief A holds, you cannot change your mind about it in the future. The belief cannot be “defeated”, in your parlance. So given that you can be exposed to information that will lead you to change your mind, we conclude that you weren’t absolutely certain about belief A in the first place. So how certain were you? Well, this is something we can express as a probability. You’re not 100% certain a tree in front of you is, in fact, really there, precisely because you realize there is a small chance you’re drugged or otherwise cognitively incapacitated.
So as you come into contact with evidence that contradicts what you believe, you become less certain your belief is correct, and as you come into contact with evidence that confirms what you believe, you become more confident your belief is correct. Apply Bayes’ rule for this (for links to Bayes and Bayesian reasoning see other comments in this thread).
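A minimal sketch of what that update looks like in odds form, using the tree example from above; the likelihood ratio is a made-up number, just for illustration:

```python
# Minimal sketch: Bayes' rule in odds form.
# posterior odds = prior odds * likelihood ratio

def update(prior_prob, likelihood_ratio):
    """Posterior probability after one piece of evidence.

    likelihood_ratio = P(evidence | belief true) / P(evidence | belief false)
    """
    prior_odds = prior_prob / (1 - prior_prob)
    posterior_odds = prior_odds * likelihood_ratio
    return posterior_odds / (1 + posterior_odds)

# 90% sure the tree is really there; then you learn something that is
# three times more likely if you're hallucinating than if the tree is real.
print(update(0.9, 1 / 3))  # 0.75: less certain, but nowhere near zero
```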
I’ve just read a couple of pages of Defeasible Reasoning by Pollock and it’s a pretty interesting formal model of reasoning. Pollock argues, essentially, that Bayesian epistemology is incompatible with deductive reasoning (p. 15). I semi-quote: “[...] if Bayesian epistemology were correct, we could not acquire new justified beliefs by reasoning from previously justified beliefs” (p. 17). I’ll read the paper, but this all sounds pretty ludicrous to me.
Looks great!
I may be alone in this, and I haven’t mentioned this before because it’s a bit of a delicate subject. I assume we all agree that first impressions matter a great deal, and that appearances play a large role in that. I think that, how to say this, ehm, it would, perhaps, be in the best interest of all of us, if you could use photos that don’t make the AI thinkers give off this serial killer vibe.
I second Manfred’s suggestion about the use of beliefs expressed as probabilities.
In puzzle (1) you essentially have a proof for T and a proof for ~T. We don’t wish the order in which we’re exposed to the evidence to influence us, so the correct conclusion is that you should simply be confused*. Thinking in terms of “Belief A defeats belief B” is a bit silly, because you then get situations where you’re certain T is true, and the next day you’re certain ~T is true, and the day after that you’re certain again that T is true after all. So should beliefs defeat each other in this manner? No. Is it rational? No. Does the order in which you’re exposed to evidence matter? No.
In puzzle (2) the subject is certain a proposition is true (even though he’s still free to change his mind!). However, accepting contradicting evidence leads to confusion (as in puzzle 1), and to mitigate this the construct of “Misleading Evidence” is introduced, which defines everything that contradicts the currently held belief as Misleading. This obviously leads to Status Quo Bias of the worst form. The “proof” that comes first automatically defeats all evidence from the future, thereby making sure that no confusion can occur. It even serves as a Universal Counterargument (“If that were true I’d believe it, and I don’t believe it, therefore it can’t be true”). This is a pure act of rationalization, not of rationality.
*) meaning that you’re not completely confident of T and ~T.
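On the order-of-evidence point in puzzle (1): with Bayesian updating the order genuinely doesn’t matter, because multiplying likelihood ratios is commutative. A small sketch (same odds-form update as in the earlier sketch; the likelihood ratios are assumed, just for illustration):

```python
def update(prior_prob, likelihood_ratio):
    # Same odds-form Bayes update as in the earlier sketch.
    prior_odds = prior_prob / (1 - prior_prob)
    posterior_odds = prior_odds * likelihood_ratio
    return posterior_odds / (1 + posterior_odds)

proof_for_T = 20.0        # evidence 20x more likely if T is true (assumed)
proof_against_T = 1 / 20  # equally strong evidence the other way (assumed)

first_order = update(update(0.5, proof_for_T), proof_against_T)    # "proof" first
second_order = update(update(0.5, proof_against_T), proof_for_T)   # rebuttal first
print(first_order, second_order)  # both ~0.5: equally unsure either way
```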
My view about global rationality is similar to John Baez’s view about individual risk aversion. An individual should typically be cautious because the maximum downside (destruction of your brain) is huge even for day-to-day actions like crossing the street. In the same way, we have only one habitable planet and one intelligent species. If we (accidentally) destroy either we’re boned. Especially when we don’t know exactly what we’re doing (as is the case with AI), caution should be the default approach, even if we were completely oblivious to the concept of a singularity.
that the most pressing issue is to increase the confidence in making decisions under extreme uncertainty or to reduce the uncertainty itself.
I disagree, it’s not the most pressing issue. In a sufficiently complex system there are always going to be vectors we poorly understand. The problem here is that we have a global society where it becomes harder every year for a single part to fail independently of the rest. A disease or pathogen is sure to spread to all parts of the world, thanks to our infrastructure. Failure of the financial markets affects the entire world because the financial markets too are intertwined. Changes in the climate also affect the entire globe, not just the countries that pollute. An unfriendly AI cannot be contained either. Everywhere you look there are now single points of failure. The more connected our world becomes, the more vulnerable we become to black swan events that rock the world, and therefore the more cautious we have to be. The strategy we used in the past 100,000 years (blindly charge forward) got us where we are today, but it isn’t very good anymore. If we don’t know exactly what we’re doing we should make absolutely sure that all worst-case scenarios affect only a small part of the world. If we can’t make such guarantees then we should probably be even more reluctant to act at all. We must learn to walk before we can run.
Under extreme uncertainty we cannot really be too cautious. We can reduce uncertainty somewhat (by improving our estimates) but there is no reason to assume we will take all significant factors into account. If you start out with a 0.001 probability of killing all of humanity, there is no amount of analysis that can rationally lead to the conclusion “eh, whatever, let’s just try it and see what happens”, because the noise in our confidence will exceed a few parts in a million at the least, which is already an unacceptable level of risk. It took billions of years for evolution to get us to this point. We can now mess it up in the next 1000 years or so because we’re in such a damn hurry. That’d be a shame.
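(To put a rough number on why even a few parts in a million is unacceptable, counting only the roughly 7 billion people alive today and ignoring all future generations: a residual risk of 10^-6 of killing everyone is an expected loss of about 10^-6 × 7×10^9 ≈ 7,000 lives. This is my own back-of-the-envelope arithmetic, just for scale.)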
From the topic, in this case “selection effects in estimates of global catastrophic risk”. If you casually mention that you don’t particularly care about humans, or that personally killing a bunch of them may be an effective strategy, the discussion is effectively hijacked. So it doesn’t matter that you don’t wish to do anybody harm.
Let G be a grad student with an IQ of 130 and a background in logic/math/computing.
Probability: The quality of life of G will improve substantially as a consequence of reading the sequences.
Probability: Reading the sequences is a sound investment for G (compared to other activities).
Probability: If every person on the planet were trained in rationality (as far as IQ permits) humanity would allocate resources in a sane manner.
Ah, you’re right. Thanks for the correction.
I edited the post above. I intended P(Solipsism) < 0.001
And now I think a bit more about it I realize the arguments I gave are probably not “my true objections”. They are mostly appeals to (my) intuition.
You shouldn’t do it because it’s an invitation for people to get sidetracked. We try to avoid politics for the same reason.
P(Simulation) < 0.01; little evidence in favor of it, and it requires that there is some other intelligence doing the simulation and that there can be the kind of fault-tolerant hardware that can (flawlessly) compute the universe. I don’t think our posthuman descendants are capable of running a universe as a simulation. I think Bostrom’s simulation argument is sound.
1 - P(Solipsism) > 0.999; My mind doesn’t contain minds that are consistently smarter than I am and can out-think me on every level.
P(Dreaming) < 0.001; We don’t dream of meticulously filling out tax forms and doing the dishes.
[ Probabilities are not discounted for expecting to come into contact with additional evidence or arguments ]
Anything by Knuth.
I know several people who moved to Asia to work on their internet startup. I know somebody who went to Asia for a few months to rewrite the manuscript of a book. In both cases the change of scenery (for inspiration) and low cost of living made it very compelling. Not quite the same as Big Thinking, but it’s close.
I’m flattered, but I’m only occasionally coherent.
When you say “I have really thought about this a considerable amount”, I hear “I diagnosed the problem quite a while ago and it’s creating a pit in my stomach but I haven’t taken any action yet”. I can’t give you any points for that.
When you’re dealing with a difficult problem, and you’re an introspective person, it’s easy to get stuck in a loop where you keep going through the same sorts of thoughts. You realize you’re not making much progress, but the problem remains, so you feel obligated to think about it some more. You should think more, right? It’s an important decision, after all?
Nope. Thinking about the problem is not a terminal goal. Thinking is only useful insofar as it leads to action. And if your thinking-to-action ratio is bad, you’ll get mentally exhausted and you’ll have nothing to show for it. It leads to paralysis where all you do is think and think and think.
If you want to make progress you have to find a way to decompose your problem into actionable parts. Not only will action make you feel better, it’s also going to lead to unexplored territory.
So what kind of actions can you take?
Well, your claim is that major conferences require short-term commercial papers. So if you go systematically through the papers published in the last year or so, you’ll find either (a) that all the papers are boring, stupid, silly, or wrong, or (b) that there are a bunch of really cool papers in there. In case of (a), maybe you’re in the wrong field of research. Maybe you should go into algorithms or formal semantics. In this case, look at other computer science papers until you find papers that do excite you. In case of (b), contact the authors of the papers; check out their departments; etc., etc.
To recap: Find interesting papers. Find departments where those interesting papers were written. Contact those departments.
Another strategy. Go to the department library and browse through random books that catch your eye. This is guaranteed to give you inspiration.
This is just off the top of my head. But whatever you do, make sure that you don’t just get stuck in a circle of self-destructive thought. Action is key.
If you’re certain you want to eventually get a faculty job, do a combination of teaching and research, own a house and regularly go on holiday, then I can’t think of any alternatives to the conventional PhD → faculty route. What’s the best way to achieve a faculty job? I don’t know. Probably a combination of networking, people skills and doing great research. If you want a faculty job badly enough you can get one. But once you get it there’s no guarantee you’re going to be happy if what you really want is complete autonomy.
I’m sorry I can’t give any targeted advice.
(PS: some people like the idea of travel more than they like travel and some people like the idea of home-ownership more than they like home-ownership. For instance, if you haven’t traveled a lot in the past 5 years you probably don’t find travel all that important (otherwise you would’ve found a way to travel).)
I think “strategy” is better than “wisdom”. I think “wisdom” is associated with cached Truths and signals superiority. This is bad because it will make our audience hostile. Strategy, on the other hand, is about process, about working towards a goal, and it’s already used in the literature in the context of improving one’s decision-making process.
You can get away with saying things like “I want to be strategic about life”, meaning that I want to make choices in such a way that I’m unlikely to regret them at a later stage. Or I can say “I want to become a more strategic thinker” and it’s immediately obvious that I care about reaching goals and that I’m not talking about strategy for the sake of strategy (I happen to care about strategy because of the virtue of curiosity, but this too is fine). The list goes on: “we need to reconsider our strategy for education”, “we’re not being strategic enough about health care—too many people die unnecessarily”. None of these statements put our audience on guard or make us look like unnatural weirdos. [1]
The most important thing is that “irrational” is perceived as an insult and is way too close to the sexist “emotional/hormonal” label used to dismiss women. Aside from the sexism, saying “whatever, you’re just being irrational” is just as bad as saying “whatever, you’re just being hormonal”. It’s the worst possible thing to say, and when you have a habit of using the word “rational” a lot it’s way too easy to slip up.
[1] Fun exercise: replace “strategy” with “rationality” and see how much more Spock-like it all sounds.