I’m willing to drop the term “political scientist” for “relevant expert”.
That improves it a lot, but sacrifices specificity. It’s easier to say who is a political scientist, much harder to tell who is a relevant expert. Knowing who is an expert is vital for peer review. But the problem may be overcome by betting.
the successes of betting markets and the scientific method in such a diverse array of disciplines
I am unaware of successes of betting markets in a diverse array of disciplines. Please tell me more (not a rhetorical question, I am really curious). Betting markets are not even employed in science.
As for the scientific method, it is far from clear what it refers to here. If you mean the ways the scientific community works (peer-reviewed journals, grants and such), I doubt that is the key component of the success of science. If you mean such things as hypothesis testing, repeatability of experiments, demands on clarity and precision of formulations, mathematical rigour—I find it hard to imagine how these things could be transferred to politics, for different reasons.
Unlike questions such as: “should we socialize health care; should we restrict trade; does inflation lower GDP?” These are places I think we should let political scientists and economists do their jobs. They are political questions. They are not questions which would be most efficiently solved by game theorists, political scientists, and psychologists; political scientists and economists can handle it.
You included political scientists both in the group of people who will efficiently handle the question and in the group of those who will not; was that a typo?
Questions involving “should” are tricky. Should we restrict alcohol consumption? Yes, it has wrecked many lives. No, what’s better than drinking cold beer after a long day—and if someone wants to catch cirrhosis, that’s his problem.
I agree that different questions need different experts; we disagree about what actual expertise is needed. You seem to say that the problem of democratic politics is incompetence, in the sense that politicians don’t know the expected utilities of their decisions. But the big problem is that there are no clear-cut utilities out there. Even if we forget common circular preferences and inconsistencies, there are different people with conflicting utility functions who don’t even agree on a meta-principle according to which their utilities are to be compared. Politicians probably understand this, in practice, better than many economists, who tend to flatten complex “should” questions into questions of fact like “does inflation lower GDP”. Politicians have access to the relevant expert opinions, and if they seem to act contrary to them, it needn’t be caused by their ignorance.
When visionaries design their utopias, they often make several characteristic mistakes. First, they assume knowledge of other people’s wishes and priorities, and expect more consistency, uniformity and harmony than there actually is. Second, they assume no large-scale opposition to the utopian system itself, thus not addressing the problem of the system’s stability. Third, they state the utopia in broad terms, not paying attention to details which can be used to game the system. It is much harder to find flaws in a vague description.
What you have, in my opinion, omitted from consideration:
We have egalitarian instincts which democratic voting rights symbolise. The right to vote, equal for a homeless beggar and for the president of a multinational company, can in itself be an important final value which may outweigh lots of other possible benefits.
When a relatively closed group of people rules, others are likely to see conspiracies and evil motives even if there are none, and are going to be strongly opposed to the group. A dictatorship of philosophers (scientists, experts) would likely survive only if the non-philosophers are pacified by force, or if a greater common enemy is found. Both variants bring additional disutility.
Betting on policies in futarchy-like systems has two effects. First, it makes those policies more likely to be implemented, and second, it has some expected gain. Would you bet on a policy which you think will likely work, but whose intended outcome you disagree with? It is a game-theoretic problem, a multi-player prisoner’s dilemma, whose outcome depends on the actual decision theory most people use (a toy payoff sketch follows these points).
Science, as a social institution, isn’t flawless. The insistence on publications leads to lots of nearly useless papers, discourages projects with uncertain outcomes, and creates a bias towards positive results. There are fake journals whose function is to provide publications for money. There are diploma mills. When reality is complicated, there are disciplines divided among several disagreeing schools (Austrian, Keynesian and neo-classical economics; string theory vs. loop quantum gravity). Science can live with that. Can politics?
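To make the game-theoretic point concrete, here is a minimal toy sketch of the payoff structure; the bettor count, the fixed betting profit and the linear implementation probability are all illustrative assumptions, not features of any actual futarchy proposal:

```python
# Toy n-player prisoner's dilemma for betting on a policy you expect to
# "work" but whose outcome you dislike. All numbers are illustrative
# assumptions, not part of any real futarchy design.

N = 100            # bettors, all of whom dislike the policy (assumption)
BET_GAIN = 1.0     # expected profit of an informed bet (assumption)
POLICY_LOSS = 5.0  # utility each person loses if the policy is adopted (assumption)

def expected_payoff(i_bet: bool, others_betting: int) -> float:
    """One bettor's expected payoff, assuming every bet raises the
    probability of adoption by 1/N (a deliberately crude assumption)."""
    p_adopted = (others_betting + (1 if i_bet else 0)) / N
    return (BET_GAIN if i_bet else 0.0) - POLICY_LOSS * p_adopted

# Betting beats abstaining no matter how many others bet...
assert all(expected_payoff(True, k) > expected_payoff(False, k) for k in range(N))

# ...yet if everyone follows that dominant strategy, everyone ends up
# worse off than if nobody had bet at all.
print(expected_payoff(True, N - 1))   # -4.0 when all 100 bet
print(expected_payoff(False, 0))      #  0.0 when nobody bets
```

Under these made-up numbers betting strictly dominates abstaining, yet universal betting leaves everyone worse off than universal abstention, which is the prisoner’s-dilemma structure referred to above.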
Betting markets reliably predict the weather better than meteorologists. Probably because the meteorologist doesn’t make more money if he spends 12 hours collecting data instead of 6. People betting do.
We have egalitarian instincts which democratic voting rights symbolize.
We have lots of instincts that can be symbolized by lots of things.
The right to vote, equal for a homeless beggar and for the president of a multinational company, can in itself be an important final value which may outweigh lots of other possible benefits.
How? Mind you, everyone has the right to play the betting market, and if you get good at it, your bets will start being used to institute policies.
When a relatively closed group of people rules, others are likely to see conspiracies and evil motives even if there are none, and are going to be strongly opposed to the group. A dictatorship of philosophers (scientists, experts) would likely survive only if the non-philosophers are pacified by force, or if a greater common enemy is found.
Again, anybody can join the ranks of the elite gamblers; they just have to gamble well enough. All betting records of all elite gamblers will be completely open knowledge. And so will the complete list of proposals. You can start betting whenever you want, but only after you achieve a certain score do your bets actually start affecting policy decisions, and after a higher score you may propose new policies to bet on.
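Roughly, the gating rule could look like the following sketch; the score formula, the thresholds and the minimum track-record length are made-up placeholders rather than a worked-out proposal:

```python
# Sketch of the gating rule described above: anyone may bet, but bets only
# start counting toward policy decisions past one track-record threshold,
# and proposing new policies requires a higher one. The score formula and
# the thresholds are made-up placeholders.

from dataclasses import dataclass

VOTE_THRESHOLD = 0.60     # assumed accuracy needed before bets influence policy
PROPOSE_THRESHOLD = 0.75  # assumed accuracy needed before proposing policies
MIN_RESOLVED_BETS = 20    # assumed minimum track-record length

@dataclass
class Bettor:
    name: str
    resolved_bets: int = 0
    correct_bets: int = 0

    @property
    def score(self) -> float:
        """Fraction of resolved bets that were correct (placeholder metric)."""
        return self.correct_bets / self.resolved_bets if self.resolved_bets else 0.0

    def may_influence_policy(self) -> bool:
        return self.resolved_bets >= MIN_RESOLVED_BETS and self.score >= VOTE_THRESHOLD

    def may_propose_policy(self) -> bool:
        return self.resolved_bets >= MIN_RESOLVED_BETS and self.score >= PROPOSE_THRESHOLD

# All records are public, so anyone can recompute anyone else's standing.
alice = Bettor("alice", resolved_bets=40, correct_bets=31)
print(alice.score, alice.may_influence_policy(), alice.may_propose_policy())
# 0.775 True True
```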
Betting on policies in futarchy-like systems has two effects. First, it makes those policies more likely to be implemented, and second, it has some expected gain. Would you bet on a policy which you think will likely work, but whose intended outcome you disagree with?
I’m not sure what you mean. I agree with a policy that I think optimizes happiness, and I say that a policy works if I think it will optimize happiness. So I can’t imagine thinking that a policy I disagree with works.
Science, as a social institution, isn’t flawless. The insistence on publications leads to lots of nearly useless papers, discourages projects with uncertain outcomes, and creates a bias towards positive results.
I would point you to this sentence in the OP:
I have had enough contact with the history and conquering of extremely difficult scientific problems, to know that there is only one way to solve them — by doing extremely good science (not all right, or good enough science: really good science).
Again, anybody can join the ranks of the elite gamblers; they just have to gamble well enough. All betting records of all elite gamblers will be completely open knowledge.
Anybody can become a scientist or a millionaire, in the sense that there is no law preventing it. Normally most people don’t have enough intelligence, perseverance and/or luck to do it. Most people will not become elite gamblers even if they try. It seems that the natural reaction in such a situation wouldn’t be “well, perhaps I don’t really understand politics”. It would rather be “the system is rigged, they have conspired to vote against the common-sense policy to gain personal advantage”. Sure, conspiracy theories can emerge in a voting system too. But voting has positive connotations: it is associated with fairness, and it is the natural thing to do when a group of people has to decide about something. The connotations of gambling are far less honourable.
I have had enough contact with the history and conquering of extremely difficult scientific problems, to know that there is only one way to solve them — by doing extremely good science (not all right, or good enough science: really good science).
I have read the sentence in the OP, but I don’t see how it addresses the problems in the social workings of science. I am not even sure what insight the sentence was meant to convey. It sounds like a tautological triviality—a solution to an extremely difficult scientific problem is “extremely good science” almost by definition.
I’m not sure what you mean. I agree with a policy that I think optimizes happiness, and I say that a policy works if I think it will optimize happiness. So I can’t imagine thinking that a policy I disagree with works.
I thought that “works” means “has the expected consequences”, not “maximises happiness”. If “works” means “maximises happiness” and this meaning of “working” is what the policies are supposed to be checked against, then my objection of course doesn’t apply. On the other hand, you are now subject to all the criticisms of happiness-maximising utilitarianism, and moreover you can be accused of trying to impose your value system on others.
If they gamble on something which doesn’t actually optimize happiness and is only in their interest, they’ll lose money.
I am not even sure what insight the sentence was meant to convey. It sounds like a tautological triviality—a solution to an extremely difficult scientific problem is “extremely good science” almost by definition.
I agree, it is trivial. Politics is a really tough scientific problem, and it requires really good science to solve it. Turns out voting on hypotheses, or on researchers, isn’t very good science.
I can’t imagine what a government might do that isn’t either spreading happiness, increasing net happiness, or avoiding suffering.
Still you have to show 1) that politics is a scientific problem (you have asserted it a number of times without actually arguing for it) and 2) that betting markets are an efficient way to solve scientific problems (that’s not the way science is normally done today).
I can’t imagine what a government might do that isn’t either spreading happiness, increasing net happiness, or avoiding suffering.
I find it slightly disturbing that you don’t seem to acknowledge that people may have values different from or additional to “net happiness” (look at some LW posts about wireheading) and that there are significant problems with comparing different forms of happiness and with interpersonal comparison as well.
Politics requires that you figure out true sentences, both about present circumstances and about what would hold if a policy were instituted. If you figured out all of these sentences, politics would be solved; therefore it is a scientific problem.
I understand that there are problems with simple utilitarianism, and it only gets worse if you need to compare the happiness of two people. But it seems to me that these problems primarily come up in strange edge cases, most of them specifically formulated by philosophers to expose the approximate nature of net-happiness-maximization models of human ethics.
Could you give me three examples of a successful policy which doesn’t increase net happiness, or even out the spread of happiness, or make more options for happiness getting? I’ll give up the point if you (or anyone else) can.
Many of the problems come up when we say that what we want is happiness. This is a category error: what we want are certain states. Happiness is a sign that we’ve achieved one of the states we wanted. If you asked me if I would like to get permanently wire-headed, I would say no. What I care about isn’t happiness; it’s what causes my happiness, i.e., certain states out there in the world. Of course I don’t want to change my utility function. Evaluating the utility of changing my utility function, from the standpoint of my current utility function, will almost always reveal that it is a bad idea: I would start optimizing for the things the second utility function scores highly, and not for the things my current one does.
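A toy illustration of that last point, with invented states and numbers: judged by my current utility function, switching to the new one loses, because afterwards I would steer toward states my current function scores low.

```python
# Toy illustration: judging a change of utility function by the current one.
# States and utilities are invented; only the shape of the argument matters.

states = ["pursue_real_goals", "wirehead"]

u_current = {"pursue_real_goals": 10, "wirehead": 2}   # what I value now (assumed)
u_new     = {"pursue_real_goals": 1,  "wirehead": 10}  # what I would value after the change

def best_state(utility):
    """The state an agent with this utility function steers toward."""
    return max(states, key=lambda s: utility[s])

keep = u_current[best_state(u_current)]    # 10: I keep optimizing what I value now
switch = u_current[best_state(u_new)]      #  2: the new me optimizes something else,
                                           #     scored by my CURRENT values
print(keep, switch)
# The switch loses by my current lights, so I don't want to change my utility function.
```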
Could you give me three examples of a successful policy which doesn’t increase net happiness, or even out the spread of happiness, or make more options for happiness getting? I’ll give up the point if you (or anyone else) can.
If “successful” means “promotes happiness”, then I trivially can’t. If it means “works as planned”, then the Holocaust was quite successful in eliminating Jews, to give an extreme example. There are of course plenty of less extreme policies which somehow work as intended, but whose effect on happiness is likely negative. Drug legislation almost certainly lowers the number of addicts; I am sure there are people who would consider it worth the negative consequences. Tax increases are usually effective in solving a budget problem while making many people less happy; although it can be argued that a state bankruptcy resulting from leaving the problem unsolved would hit happiness much more in the long run, the stated goal is always to solve the budget problem, not to increase happiness.
But you already say that you don’t care about happiness per se, so you should understand what I am objecting to. Imagine there is a plausible method for wireheading without visible downsides and compulsory wireheading is proposed as a policy. Now you should believe that this will be a successful policy, but it would go against your values. How would you vote in such a case?
Nah, I mean “successful” as in: you and I are both capable of agreeing as well as at least not the minority of experts about that use of “successful”. I won’t play silly arguing-by-definition games here. I happen to think that “increases fun/happiness or avoids suffering” is a pretty good guideline for the application of “successful” to policies. If you can show me that there are lots of times when you and I and at least some experts will be tempted to say “That policy was successful.”, i.e., “worked”, “rocked”, “was a good idea”, but say that “That policy did not increase happiness.” or “That policy increased suffering.” as well, then I’ll abandon that position and update my meta-policy accordingly after going back to the drawing board. Otherwise I’ll sit pretty.
Imagine there is a plausible method for wireheading without visible downsides and compulsory wireheading is proposed as a policy.
I would bet money that giving people the option to do wire-heading, and to stop wire-heading whenever they want, would optimize happiness. If we are right that people in general care about doing stuff out there in the real world, then they should still stop wire-heading every now and again to do something. Maybe this wouldn’t quite optimize “happiness” per se; it certainly wouldn’t optimize for reward-pathway stimulation. But it would probably still optimize for some sort of happiness-related thing like “fun”, or “reflective self-appreciation”, or something.
If you didn’t want to play silly games, you would have already agreed to use “successful” synonymously with “happiness-promoting”, which you effectively do, and we could have moved on. The meaning of “successful” was by no means central to your ideas, so this would have been easy to do. That you didn’t do it is a sign that this probably isn’t going to be a productive debate. I retreat.
Generally this means the opposite of ‘failed’. ‘Was a good idea’ is orthogonal to ‘successful’; something can be either one without being the other. You’re playing silly games by implicitly defining ‘successful’ as ‘increased happiness’ and then pretending this means anything.
and to stop wire-heading whenever they want
I’ve never heard of a form of wireheading in which this was possible.
I think you must be misreading me somehow. I’m simply saying that I think “if a policy was successful it very probably increased net happiness.” And that if someone applies the phrase “that policy was successful” they will likely also be willing to apply the phrase “that policy increased net happiness.” These are empirical probabilistic claims, which can be falsified, and are certainly not meaningless. LWers don’t use Aristotelian concept theory for definitions; for the most part we treat definitions more like pointers to empirical clusters of roughly similar things, as here.
Could you give me three examples of a successful policy which doesn’t increase net happiness, or even out the spread of happiness, or make more options for happiness getting? I’ll give up the point if you (or anyone else) can.
The Holocaust, and more generally most of Hitler’s political policies, as distinct from the military ones.
North Korea’s closed borders.
The US’s policy of propping up US-friendly dictators in the third world.
Ok, but you and I would both say these examples increased suffering, and that they were not good ideas, or nice. Therefore these are not examples of the form I asked for.
Potato is proposing a definition as an empirical pointer. It means plenty: it means that when people think “success”, they think “happiness up”. He’s just saying that the probabilities of the application of the two phrases are correlated to some significant degree.
No, he’s dodging the question. There are two definitions under discussion, one (the one potato is proposing, also incidentally the nonstandard one) in which he is by definition correct, another in which he has been proven wrong. He’s explicitly attempting to conflate the two:
Could you give me three examples of a successful policy which doesn’t increase net happiness [...] ?
If “successful” means “promotes happiness”, then I trivially can’t. If it means “works as planned”, then the Holocaust was quite successful in eliminating Jews, to give an extreme example.
Nah, I mean “successful” as in: you and I are both capable of agreeing as well as at least not the minority of experts about that use of “successful”. … when you and I and at least some experts will be tempted to say “That policy was successful.”, i.e., “worked”, “rocked”,
You included political scientists both in the group of people who will efficiently handle the question and in the group of those who will not; was that a typo?
I meant that political scientists can handle it alone, without the aid of psychologists and game theorists, more effectively than if they employed the methods of game theorists and psychologists.
If you mean such things as hypothesis testing, repeatability of experiments, demands on clarity and precision of formulations, mathematical rigor—I find it hard to imagine how these things could be transferred to politics, for different reasons.
It is certainly hard to imagine. But satellites which scan facial expressions to measure net happiness (totally doable today), and decision theory, provide at least a little bit of imaginative fuel as to how it might be done. Applying the scientific method in a new field is always awkward at first; that’s the burden of a new field. But I find it highly probable that the best way to make progress in filling in the utilities and probabilities of the political decision tree is through empiricism.
Of course, there has to be such a tree, since some political decisions are better than others. The political utility of a given state may not be exactly specifiable, but it is rather clear that the utility of living in a place with accessible education outweighs that of living in a place where all else remains the same but education is inaccessible.
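A minimal sketch of what one node of such a political decision tree might look like; the outcomes, probabilities and utilities are invented placeholders, and filling them in empirically is exactly the hard part:

```python
# Sketch of one node of a political decision tree: pick the option with the
# highest expected utility. Probabilities and utilities are invented
# placeholders; estimating them empirically is the actual hard problem.

policies = {
    # option: list of (probability, utility) outcome pairs (assumed numbers)
    "make_education_accessible": [(0.7, 8.0), (0.3, 3.0)],
    "keep_status_quo":           [(0.7, 5.0), (0.3, 3.0)],
}

def expected_utility(outcomes):
    return sum(p * u for p, u in outcomes)

for name, outcomes in policies.items():
    print(name, expected_utility(outcomes))
# make_education_accessible 6.5, keep_status_quo 4.4

best = max(policies, key=lambda name: expected_utility(policies[name]))
print("choose:", best)   # mirrors the all-else-equal education comparison above
```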
What question am I dodging exactly?
The Holocaust, and more generally most of Hitler’s political policies, as distinct from the military ones.
North Korea’s closed borders.
The US’s policy of propping up US-friendly dictators in the third world.
In other words, all selfish policies.
Ok, but you and I would both say these examples increased suffering, and that they were not good ideas, or nice. Therefore these are not examples of the form I asked for.
So, to clarify: what you are asking for is three examples of a successful policy which
1) doesn’t increase net happiness, and
2) doesn’t even out the spread of happiness, and
3) doesn’t make more options for happiness getting, and
4) doesn’t increase suffering, and
5) is a good idea, and
6) is nice.
If I have misunderstood your criteria, could you explain where?
Yep, totes. More specifically, one that we would say is successful (in the sense of well done, or not a fail), and that we would also say satisfies 1–6.
Are you new, man? Check this out: http://wiki.lesswrong.com/wiki/A_Human%27s_Guide_to_Words
The recent Wall Street shenanigans suggest otherwise.
I’ll address 1–4 when I get back from school.