Welcome to LessWrong. We like rationality because it helps us achieve our goals. You might call it optimizing our lives.
Unfortunately, mass media portrayals of “rationality” make it seem like smart people want to lose all emotions and become Vulcans. That’s a stupid goal, and not what we mean by rationality.
If you have something you want to talk about, click Discussion in the heading, then post in the open thread.
I’ve started a LessWrong wiki page for Straw Vulcan.
So the easiest way of being rational would be to stop thinking at all and just exist, i.e. going to work, then home, sleep, and so on.
Doesn’t sound like a happy and intelligent life to me.
On the off chance that you’re actually trying to engage seriously here… you’re nodding in the general direction of an important point that comes up a fair bit on this site, namely that for human purposes it’s not enough to be arbitrarily good at optimizing; it also matters what I’m optimizing for.
Put another way: sure, one way of becoming really successful at achieving my goals is by discarding all goals that are difficult to achieve. One way of becoming really successful at promoting my values is by discarding my existing values and replacing them with some other values that are easier to promote. This general strategy is often referred to here as “wireheading” and generally dismissed as not worth pursuing.
Admittedly, it’s not precisely clear to me that what you’re describing here matches up to that strategy, but then it’s not precisely clear to me why you consider it the easiest way of being rational either.
You guys seriously should invest in general problem-solving exercises. Edit: to clarify. You discuss which ways of deciding are wrong. That’s great. The issue is, one accomplishes one’s goals by solving problems. E.g. you don’t like spending time driving to your job? You can choose between driving and public transportation, taking into account the flu infection rate, etc. Great, now your life is a few percent better. OR you can solve the problem of how to live closer to your job, which has trillions upon trillions of solutions that are not readily apparent (including a class of solutions that require high intelligence, e.g. you could invent something useful, patent it, and get rich), but which can make your life massively better.
Would just googling “problem solving exercises” be enough? What are you talking about, exactly?
Clarified in the edit. This site very much focuses on choosing rationally (between very few options), on what one should believe, and such. If you want to achieve your goals, you need to get better at problem solving, which you do by solving various problems (duh). Problem solving involves picking something good out of an enormously large space of possibilities.
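To make that concrete with a toy model (my own illustration, not anything from the thread; the uniform-quality assumption is made up for the sketch): if each candidate solution’s quality is an independent uniform draw on [0, 1], the expected best of k candidates is k/(k+1), so examining two options tops out around 0.67 while searching a thousand gets you near 0.999.

```python
import random

# A minimal sketch of the "search a bigger space" point, assuming each
# candidate solution's quality is an i.i.d. uniform draw on [0, 1].
# (The model and function name are illustrative, not from the thread.)
def best_of(k: int, trials: int = 10_000) -> float:
    """Average quality of the best candidate after examining k candidates."""
    return sum(max(random.random() for _ in range(k))
               for _ in range(trials)) / trials

print(f"best of 2 options:           {best_of(2):.3f}")      # ~0.667
print(f"best of 1,000 possibilities: {best_of(1_000):.3f}")  # ~0.999
```

Real solution quality isn’t i.i.d. uniform and search itself has costs, but the gap illustrates why enumerating only the obvious options leaves so much on the table.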
I think what Dmytry is talking about is that Less Wrong does not live up to its goals.
Eliezer Yudkowsky once wrote that rationality is just the label he uses for his “beliefs about the winning Way—the Way of the agent smiling from on top of the giant heap of utility.”
Wouldn’t it make sense to assess whether you are actually winning, e.g. by solving problems, getting rich, etc.? At least if there is more to “raising the sanity waterline” than epistemic rationality, i.e. if it is actually supposed to be instrumentally useful.
Yea, basically that. Any fool can make the correct choice between two alternatives with a little luck and a coin toss. Any other fool can get it by copying the first fool. You get heaps of utility by searching giant solution spaces where this doesn’t work. You don’t get a whole lot by focusing all your intellectual might on doing something that fools do well enough.
See, Eliezer grew up in a religious family, and his idea of intelligence is choosing the correct beliefs. I grew up in a poor family; my idea of intelligence is much more along the lines of actually succeeding by finding solutions to practical problems. Nobody’s going to pay you just because you correctly don’t believe in God. Not falling for the sunk cost fallacy at the very best gets you back to square one with lower losses; that’s great, and laudable, and is better than sinking more costs, but it’s only a microscopic piece of problem solving. The largest failure of reasoning is failing to even get a glimpse of the winning option, because it’s lost inside a huge space.
Isn’t this being done?