It’s like you’ll have to become one of those people who work all their lives to save money for their retirement, when they are old and have lost most of their interests.
That, and the rest, doesn’t sound rational at all. “Maximizing expected utility” doesn’t mean “systematically deferring enjoyment”; it’s just a nerdy way of talking about tradeoffs when taking risks.
The concept of “expected utility” doesn’t seem to have much relevance at the individual level; it’s more something for comparing government policies, or moral philosophies, or agents in game theory/decision theory … or maybe also some narrow things like investing in stocks. But not for deciding whether or not to go rock-climbing.
That, and the rest, doesn’t sound rational at all.
I agree, but I can’t pinpoint what is wrong. There are other people here who went bonkers (no offense) thanks to the kind of rationality being taught on LW. Actually, Roko stated a few times that he wished he had never learnt about existential risks because of the negative impact it had on his social life etc. I argued that “ignorance is bliss” can under no circumstances be right and that I value truth more than happiness. I think I was wrong. I am not referring to bad things happening to people here, but solely to the large amount of positive utility associated with a lot of scenarios that force you to pursue instrumental goals that you don’t enjoy at all. Well, it would probably be better to never exist in the first place; living seems to have an overall negative utility if you are not the kind of person who enjoys being or helping Eliezer Yudkowsky.
What are you doing all day? Is it the most effective way to earn money, or to help solve friendly AI directly? I doubt it. And if you know that and still don’t do anything about it, then many people here would call you irrational. It doesn’t matter what you like to do, because whatever you value, there will always be more of it tomorrow if you postpone doing it today and instead pursue an instrumental goal. You can always do something, even if that means you’d have to sell your blood. No excuses there; it is watertight.
And this will never end. It might sound absurd to talk about trying to do something about the heat death of the universe or trying to hack the Matrix, but is it really improbable enough to outweigh the utility associated with gaining the necessary resources to support 3^^^^3 people for 3^^^^3 years rather than a galactic civilization for merely 10^50 years? Give me a good argument for why an FAI shouldn’t devote all its resources to trying to leave the universe rather than supporting a galactic civilization for a few years? How does this differ from devoting all resources to working on friendly AI for a few decades? How much fun could you have in the next few decades? Let’s say you’d have to devote 10^2 years of your life to a positive Singularity to gain 10^50 years. Now how is this different from devoting the resources that would support you for 10^50 years to the FAI trying to figure out how to support you for 3^^^^3 years? Where do you draw the line, and why?
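(A quick editorial sketch of the arithmetic being appealed to here, with made-up stand-in numbers, since 3^^^^3 is far too large to represent directly: under undiscounted expected-utility math, even a minuscule probability of a vastly larger payoff swamps a certain payoff of “only” 10^50 years.)

```python
import math

# Compare options in log10 units so astronomically large payoffs don't overflow floats.
def log10_expected_utility(probability: float, log10_payoff: float) -> float:
    return math.log10(probability) + log10_payoff

certain_option = log10_expected_utility(1.0, 50)      # 10^50 years, for sure
long_shot = log10_expected_utility(1e-100, 100_000)   # stand-in for a 3^^^^3-scale payoff

print(certain_option)  # 50.0
print(long_shot)       # 99900.0 -> the long shot dominates, which is exactly the worry
```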
I can. You are trying to “shut up and multiply” (as Eliezer advises) using the screwed up, totally undiscounted, broken-mathematics version of consequentialism taught here. Instead, you should pay more attention to your own utility than to the utility of the 3^^^3itudes in the distant future, and/or in distant galaxies, and/or in simulated realities. You should pay no more attention to their utility than they pay to yours.
Don’t shut up and multiply until someone fixes the broken consequentialist math which is promoted here. Instead, (as Eliezer also advises) get laid or something. Worry more about the happiness of the people (including yourself) within a temporal radius of 24 hours, a spatial radius of a few meters, and in your own branch of the ‘space-time continuum’, than you worry about any region of space-time trillions of times the extent, if that region of space-time is also millions of times as distant in time, space, or Hilbert-space phase-product.
(I’m sure Tim Tyler is going to jump in and point out that even if you don’t discount the future (etc.) as I recommend, you should still not worry much about the future because it is so hard to predict the consequences of your actions. Pace Tim. That is true, but beside the point!)
If it is important to you (XiXiDu) to do something useful and Singularity-related, why don’t you figure out how to fix the broken expected-undiscounted-utility math that is making you unhappy, before someone programs it into a seed AI and makes us all unhappy?
Excuse me, but XiXiDu is taking for granted ideas such as Pascal’s Mugging—in fact Pascal’s Mugging seems to be the main trope here—which were explicitly rejected by me and by most other LWians. We’re not quite sure how to fix it, though Hanson’s suggestion is pretty good, but we did reject Pascal’s Mugging!
It’s not obvious to me that after rejecting Pascal’s Mugging there is anything left to say about XiXiDu’s fears or any reason to reject expected utility maximization(!!!).
It’s not obvious to me that after rejecting Pascal’s Mugging there is anything left to say about XiXiDu’s fears or any reason to reject expected utility maximization(!!!).
Well, in so far as it isn’t obvious why Pascal’s Mugging should be rejected by a utility maximizer, his fears are legitimate. It may very well be that a utility maximizer will always be subject to some form of possible mugging. If that issue isn’t resolved, the fact that people are rejecting Pascal’s Mugging doesn’t help matters.
It may very well be that a utility maximizer will always be subject to some form of possible mugging.
I fear that the mugger is often our own imagination. If you calculate the expected utility of various outcomes, you imagine impossible alternative actions. The alternatives are impossible because you already precommitted to choosing the outcome with the largest expected utility. There are three main problems with that:
1. You swap your complex values for a certain terminal goal with the highest expected utility; indeed, your instrumental and terminal goals converge to become the expected utility formula.

2. There is no minimum amount of empirical evidence necessary to extrapolate the expected utility of an outcome.

3. The extrapolation of counterfactual alternatives is unbounded; logical implications can reach out indefinitely without ever requiring new empirical evidence.
All this can cause any insignificant inference to exhibit hyperbolic growth in utility.
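To make that concrete, here is a toy calculation of my own (not something from the thread): if each additional speculative implication multiplies the imagined payoff faster than it shrinks the probability, the expected utility of the chain grows without bound, and no new evidence is ever required.

```python
def chained_expected_utility(steps: int,
                             prob_per_step: float = 0.1,
                             payoff_multiplier: float = 1e10) -> float:
    # Each extra speculative step is assumed (arbitrarily) to be 10x less likely
    # while promising a payoff 10^10 times larger.
    probability = prob_per_step ** steps
    payoff = payoff_multiplier ** steps
    return probability * payoff

for k in range(1, 6):
    print(k, chained_expected_utility(k))   # ~1e9, ~1e18, ~1e27, ~1e36, ~1e45
```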
I don’t trust my brain’s claims of massive utility enough to let it dominate every second of my life. I don’t even think I know what, this second, would be doing the most to help achieve a positive singularity.
I’m also pretty sure that my utility function is bounded, or at least hits diminishing returns really fast.
I know that thinking my head off about every possible high-utility counterfactual will make me sad, depressed, and indecisive, on top of ruining my ability to make progress towards gaining utility.
So I don’t worry about it that much. I try to think about these problems in doses that I can handle, and focus on what I can actually do to help out.
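As an aside, here is a minimal sketch (my own illustration, not anyone’s actual utility function) of the bounded-utility point above: with a saturating function such as u(x) = 1 - exp(-x/scale), astronomically large raw payoffs add almost nothing, so tiny probabilities of vast payoffs stop dominating the calculation.

```python
import math

def bounded_utility(payoff: float, scale: float = 100.0) -> float:
    # Saturating utility: approaches 1 no matter how large the raw payoff gets.
    return 1.0 - math.exp(-payoff / scale)

print(bounded_utility(1e2))    # ~0.63
print(bounded_utility(1e6))    # ~1.0
print(bounded_utility(1e300))  # still ~1.0: no headroom left for a mugger's offer
```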
I don’t trust my brain’s claims of massive utility enough to let it dominate every second of my life.
Yet you trust your brain enough to turn down claims of massive utility. Given that our brains could not evolve to yield reliable intuitions about such scenarios, and given that the parts of rationality that we do understand very well in principle are telling us to maximize expected utility, what does it mean not to trust your brain? In all of the scenarios in question that involve massive amounts of utility, your uncertainty is already included and is being outweighed. It seems that what you are saying is that you don’t trust your higher-order thinking skills and instead trust your gut feelings? You could argue that you are simply risk averse, but that would require you to set some upper bound regarding bargains with uncertain payoffs. How are you going to define and justify such a limit if you don’t trust your brain?
Anyway, I did some quick searches today and found out that the kinds of problems I talked about are nothing new and are mentioned in various places and contexts:
The St. Petersburg Paradox

The ‘expected value’ of the game is the sum of the expected payoffs of all the consequences. Since the expected payoff of each possible consequence is $1, and there are an infinite number of them, this sum is an infinite number of dollars. A rational gambler would enter a game iff the price of entry was less than the expected value. In the St. Petersburg game, any finite price of entry is smaller than the expected value of the game. Thus, the rational gambler would play no matter how large the finite entry price was. But it seems obvious that some prices are too high for a rational agent to pay to play. Many commentators agree with Hacking’s (1980) estimation that “few of us would pay even $25 to enter such a game.” If this is correct—and if most of us are rational—then something has gone wrong with the standard decision-theory calculations of expected value above. This problem, discovered by the Swiss eighteenth-century mathematician Daniel Bernoulli, is the St. Petersburg paradox. It’s called that because it was first published by Bernoulli in the St. Petersburg Academy Proceedings (1738; English trans. 1954).
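As a quick illustrative sketch (mine, not part of the quoted source): truncating the game after n coin flips makes the “each consequence contributes $1” sum concrete, and shows the expected value growing without bound as n increases.

```python
def st_petersburg_expected_value(max_flips: int) -> float:
    # The prize doubles each flip while the probability halves, so every possible
    # outcome contributes an expected $1; the truncated sum is just max_flips dollars.
    total = 0.0
    for k in range(1, max_flips + 1):
        probability = 0.5 ** k   # first heads appears on flip k
        payoff = 2.0 ** k        # prize after k flips
        total += probability * payoff
    return total

for n in (10, 100, 1000):
    print(n, st_petersburg_expected_value(n))   # 10.0, 100.0, 1000.0
```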
The Infinitarian Challenge to Aggregative Ethics

If EDR were accepted, speculations about infinite scenarios, however unlikely and far‐fetched, would come to dominate our ethical deliberations. We might become extremely concerned with bizarre possibilities in which, for example, some kind of deity exists that will use its infinite powers to good or bad ends depending on what we do. No matter how fantastical any such scenario would be, if it is a logically coherent and imaginable possibility it should presumably be assigned a finite positive probability, and according to EDR, the smallest possibility of infinite value would smother all other considerations of mere finite values.
[...]
Suppose that I know that a certain course of action, though much less desirable in every other respect than an available alternative, offers a one‐in‐a‐million chance of avoiding catastrophe involving x people, where x is finite. Whatever else is at stake, this possibility will overwhelm my calculations so long as x is large enough. Even in the finite case, therefore, we might fear that speculations about low‐probability‐high‐stakes scenarios will come to dominate our moral decision making if we follow aggregative consequentialism.
Omohundro’s “Basic AI Drives” and Catastrophic Risks

If we consider systems that would value some apparently physically unattainable quantity of resources orders of magnitude more than the apparently accessible resources given standard physics (e.g. resources enough to produce 10^1000 offspring), the potential for conflict again declines for entities with bounded utility functions. Such resources are only attainable given very unlikely novel physical discoveries, making the agent’s position similar to that described in “Pascal’s Mugging” (Bostrom, 2009), with the agent’s decision-making dominated by extremely small probabilities of obtaining vast resources.
You could argue that you are simply risk averse, but that would require you to set some upper bound regarding bargains with uncertain payoffs
I take risks when I actually have a grasp of what they are. Right now I’m trying to organize a DC meetup group, finish up my robotics team’s season, do all of my homework for the next 2 weeks so that I can go college touring, and combine college visits with LW meetups.
After April, I plan to start capoeira, work on PyMC, actually have DC meetups, work on a scriptable real-time strategy game, start contra dancing again, start writing a sequence based on Heuristics and Biases, improve my dietary and exercise habits, and visit Serbia.
For all of these things, I have a pretty solid grasp of what they entail and how they impact the world.
I still want to do high-utility things, but I just choose not to live in constant dread of lost opportunity. My general strategy for acquiring utility is to help/make other people get more utility too, and to multiply the effects by getting more of the low-hanging fruit.
Suppose that I know that a certain course of action
with the agent’s decision-making dominated by extremely small probabilities of obtaining vast resources.
The issue with long-shots like this is that I don’t know where to look for them. Seriously. And since they’re such long-shots, I’m not sure how to go about getting them. I know that trying to do so isn’t particularly likely to work.
Yet you trust your brain enough to turn down claims of massive utility.
Sorry, I said that badly. If I knew how to get massive utility, I would try to. It’s just that the planning is the hard part. The best that I know to do now (note: I am carving out time to think about this harder in the foreseeable future) is to get money and build communities. And give some of the money to SIAI. But in the meantime, I’m not going to be agonizing over everything I could have possibly done better.
It’s not obvious to me that after rejecting Pascal’s Mugging there is anything left to say about XiXiDu’s fears or any reason to reject expected utility maximization(!!!).
Well, nothing philosophically. There’s probably quite a lot to say about, or rather in aid of, one of our fellows who’s clearly in trouble.
The problem appears to be depression, i.e., more-corrupt-than-usual hardware. Thus, despite the trouble manifesting itself as philosophy, I submit that this is not the actual problem here.
We are in disagreement then. I reject not just Pascal’s mugging, but also the style of analysis found in Bostrom’s “Astronomical Waste” paper. As I understand XiXiDu, he has been taught (by people who think like Bostrom) that even the smallest misstep on the way to the Singularity has astronomical consequences, and that we who potentially commit these missteps are morally responsible for this astronomical waste.
Is the “Astronomical Waste” paper an example of “Pascal’s Mugging”? If not, how do you distinguish (setting aside the problem of how you justify the distinction)?
We’re not quite sure how to fix it, though Hanson’s suggestion is pretty good …
Do you have a link to Robin’s suggestion? I’m a bit surprised that a practicing economist would suggest something other than discounting. In another Bostrom paper, “The Infinitarian Challenge to Aggregative Ethics”, it appears that Bostrom also recognizes that something is broken, but he, too, doesn’t know how to fix it.
Is the “Astronomical Waste” paper an example of “Pascal’s Mugging”? If not, how do you distinguish (setting aside the problem of how you justify the distinction)?
Exactly. I describe my current confusion in more detail in this thread, especially the comments here and here, which led me to conclude this. They are fairly long comments, but I wish someone would dissolve my confusion there. I really don’t care if you downvote them to −10, but without some written feedback I can’t tell what exactly is wrong or how I am confused.
Can be found via the Wiki:

Robin Hanson has suggested penalizing the prior probability of hypotheses which argue that we are in a surprisingly unique position to affect large numbers of other people who cannot symmetrically affect us. Since only one in 3^^^^3 people can be in a unique position to ordain the existence of at least 3^^^^3 other people who are not symmetrically in such a situation themselves, the prior probability would be penalized by a factor on the same order as the utility.

I don’t quite get it.
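For what it’s worth, here is a toy numeric reading of the suggestion quoted above (my own construction; the base prior and the values of N are made up): if a hypothesis claiming leverage over N other people has its prior penalized by a factor of roughly 1/N, the expected utility of a mugger-style offer stays bounded no matter how large N gets.

```python
def expected_utility_with_penalty(base_prior: float, n_affected: float) -> float:
    penalized_prior = base_prior / n_affected   # leverage penalty ~ 1/N
    utility_if_true = n_affected                # payoff claimed to scale with N
    return penalized_prior * utility_if_true    # stays on the order of base_prior

for n in (1e6, 1e50, 1e100):
    print(n, expected_utility_with_penalty(0.01, n))   # always ~0.01
```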
I’m going to be poking at this question from several angles—I don’t think I’ve got a complete and concise answer.
I think you’ve got a bad case of God’s Eye Point of View—thinking that the most rational and/or moral way to approach the universe is as though you don’t exist.
The thing about GEPOV is that it isn’t total nonsense. You can get more truth if you aren’t territorial about what you already believe, but since you actually are part of the universe and you are your only point of view, trying to leave yourself out completely is its own flavor of falseness.
As you are finding out, ignoring your needs leads to incapacitation. It’s like saying that we mustn’t waste valuable hydrocarbons on oil for the car engine. All the hydrocarbons should be used for gasoline! This eventually stops working. It’s important to satisfy needs which are of different kinds and operate on different time scales.
You may be thinking that, since fun isn’t easily measurable externally, the need for it isn’t real.
I think you’re up against something which isn’t about rationality exactly—it’s what I call the emotional immune system. Depression is partly about not being able to resist (or even being attracted to) ideas which cause damage.
An emotional immune system is about having affection for oneself, and if it’s damaged, it needs to be rebuilt, probably a little at a time.
On the intellectual side, would you want all the people you want to help to defer their own pleasure indefinitely?

This sounds very true and important.

As far as I can tell, a great deal of thinking is the result of wanting thoughts which match a pre-existing emotional state.

Thoughts do influence emotions, but less reliably.
On the intellectual side, would you want all the people you want to help to defer their own pleasure indefinitely?
No, but I don’t know what a solution would look like. Most of the time I am just overwhelmed, as it feels like everything I come up with isn’t much better than tossing a coin. I just can’t figure out the right balance between fun (experiencing; being selfish), moral conduct (being altruistic), utility maximization (being future-oriented) and my gut feelings (instinct; intuition; emotions). For example, if I have a strong urge to just go out and have fun, should I just give in to that urge or think about it? If I question the urge I often end up thinking about it until it is too late. Every attempt at a possible solution looks like browsing Wikipedia: each article links to other articles that again link to other articles, until you end up with something completely unrelated to the initial article. It seems impossible to apply a lot of what is taught on LW in real life.

Maybe require yourself to have a certain amount of fun per week?
NancyLebovitz’s comment I think is highly relevant here.
I can only speak from my personal experience, but I’ve found that part of going through Less Wrong and understanding all the great stuff on this website is understanding the type of creature I am.

At this current moment, I am comparatively a very simple one. In terms of the Singularity and Friendly AI, they are miles from what I am, and I am not at a point where I can emotionally take on those causes. I can intellectually, but the fact is the simple creature that I am doesn’t comprehend those connections yet.

I want to one day, but a baby has to crawl before it can walk.

Much of what I do provides me with satisfaction, joy, happiness. I don’t even fully understand why. But what I do know is that I need those emotions not just to function, but to improve, to continue the development of myself.

Maybe it would help to reduce yourself to that simple creature. Understand that for a baby to do math, it has to understand symbols. Maybe what you understand intellectually, you are not yet ready to deal with in terms of emotional function.

Just my two cents. Sorry if I’m not as concise as I should be.
I do hope the best for you though.
I’m sure Tim Tyler is going to jump in and point out that even if you don’t discount the future (etc.) as I recommend, you should still not worry much about the future because it is so hard to predict the consequences of your actions. Pace Tim. That is true, but beside the point!
Peace—I think that is what you meant to say. We mostly agree. I am not sure you can tell someone else what they “should” be doing, though. That is for them to decide. I expect your egoism is not of the evangelical kind.
Saving the planet does have some merits though. People’s goals often conflict—but many people can endorse saving the planet. It is ecologically friendly, signals concern with Big Things, paints you as a Valiant Hero—and so on. As causes go, there are probably unhealthier ones to fall in with.
I’m sure Tim Tyler is going to jump in and point out …
Pace Tim. That is true, but beside the point!
Peace—I think that is what you meant to say.
I’m kinda changing the subject here, but that wasn’t a typo. “Pace” was what I meant to write. Trouble is, I’m not completely sure what it means. I’ve seen it used in contexts that suggest it means something like “I know you disagree with this, but I don’t want to pick a fight. At least not now.” But I don’t know what it means literally, nor even how to pronounce it.
My guess is that it is church Latin, meaning (as you suggest) ‘peace’. ‘Requiescat in pace’ and all that. I suppose, since it is a foreign language word, I technically should have italicized. Can anyone help out here?

Latin (from pax “peace”), “with due respect offered to...”, e.g. “pace Brown” means “I respectfully disagree with Brown”, though the disagreement is often in fact not very respectful!
living seems to have an overall negative utility if you are not the kind of person who enjoys being or helping Eliezer Yudkowsky.
There is a difference between negative utility and less-than-maximized utility. There are lots of people who enjoy their lives despite not having done as much as they could, even if they know that they could be doing more.

It’s only when you dwell on what you haven’t done, aren’t doing, or could have done that you actually become unhappy about it. If you don’t start from maximum utility and see everything as a worse version of that, then you can easily enjoy the good things in your life.

You seem to be holding yourself morally responsible for future states. Why? My attitude is that it was like this when I got here.
Give me a good argument for why an FAI shouldn’t devote all its resources to trying to leave the universe rather than supporting a galactic civilization for a few years?
Now this looks like the wrong kind of question to consider in this context. The amount of fun your human existence is delivering, in connection with what you abstractly believe is the better course of action, is relevant; but the details of how an FAI would manage the future are not your human existence’s explicit problem, unless you are working on FAI design.

If it’s better for the FAI to spend the next 3^^^3 multiverse millennia planning the future, why should that have a reflection in your psychological outlook? That’s an obscure technical question. What matters is whether it’s better, not whether it has a certain individual surface feature.
What are you doing all day? Is it the most effective way to earn money, or to help solve friendly AI directly? I doubt it. And if you know that and still don’t do anything about it, then many people here would call you irrational.
“Irrational” seems like the wrong word here; after all, the person could be rational but working with a dataset that does not yet allow them to reach that conclusion. There are also people who reach that conclusion irrationally, reaching the right conclusion with a flawed (unreliable) method, but they are not more rational for having the right conclusions.
Why do you care what happens 3^^^^3 years from now?