Most of all it just made me sad and depressed. The whole “expected utility” thing being the worst part. If you take it seriously you’ll forever procrastinate having fun, because you can always imagine that postponing some terminal goal and instead doing something instrumental will yield even more utility in the future. So if you enjoy mountain climbing you’ll postpone it until it is safer, or until after the Singularity when you can have much safer mountain climbing. And then after the Singularity you won’t be able to do it, because the resources for a galactic civilization are better used to fight hostile aliens and afterwards fix the heat death of the universe. There’s always more expected utility in fixing problems; it is always about expected utility, never about gathering or experiencing utility. And if you don’t believe in risks from AI then there is some other existential risk, and if there is no risk then it is poverty in Obscureistan. And if there is nothing at all then you should try to update your estimates, because if you’re wrong you’ll lose more than by trying to figure out if you’re wrong. You never hit diminishing returns. And in the end all your complex values are replaced by the tools and heuristics that were originally meant to help you achieve them. It’s like you’ll have to become one of those people who work all their life to save money for their retirement, when they are old and have lost most of their interests.
Important note: Currently in NYC for 20 days with the sole purpose of finding out how to make rationalists in the Bay Area (and elsewhere) have as much fun as the ones in NYC. I am doing this because I want to save the world.
XiXiDu, I have been reading your comments for some time, and it seems like your reaction to this whole rationality business is unique. You take it seriously, or at least part of you does; but your perspective is sad and strange and pessimistic. Yes, even more pessimistic than Roko or Mass Driver. What you are taking away from this blog is not what other readers are taking away from it. The next step in your rationalist journey may require something more than a blog can provide.
From one aspiring rationalist to another, I strongly encourage you to talk these things over, in person, with friends who understand them. If you are already doing so, please forgive my unsolicited advice. If you don’t have friends who know Less Wrong material, I encourage you to find or make them. They don’t have to be Less Wrong readers; many of my friends are familiar with different bits and pieces of the Less Wrong philosophy without ever having read Less Wrong.
(Who voted down this sincere expression of personal feeling? Tch.)
This is why remembering to have fun along the way is important. Remember: you are an ape. The Straw Vulcan is a lie. The unlived life is not so worth examining. Remember to be human.
This is why remembering to have fun along the way is important.
I know that argument. But I can’t get hold of it. What can I do, play a game? I’ll have to examine everything in terms of expected utility. If I want to play a game I’ll have to remind myself that I really want to solve friendly AI and therefore have to regard “playing a game” as an instrumental goal rather than a terminal goal. And in this sense, can I justify playing a game? You don’t die if you are unhappy; I could just work overtime as a street builder to earn even more money to donate to the SIAI. There is no excuse to play a game, because being unhappy for a few decades cannot outweigh the expected utility of a positive Singularity, and it doesn’t reduce your efficiency as much as playing games and going to movies do. There is simply no excuse to have fun. And that will be the same after the Singularity too.
The reason it’s important is because it counts as basic mental maintenance, just as eating reasonably and exercising a bit and so on are basic bodily maintenance. You cannot achieve any goal without basic self-care.
You are not a moral failure for not personally achieving an arbitrary degree of moral perfection.
You sound depressed, which would mean your hardware was even more corrupt and biased than usual. This won’t help achieve a positive Singularity either. Driving yourself crazier with guilt at not being able to work for a positive Singularity won’t help your effectiveness, so you need to stop doing that.
You are allowed to rest and play. You need to let yourself rest. Take a deep breath! Sleep! Go on holiday! Talk to friends you trust! See your doctor! Please do something. You sound like you are dashing your mind to pieces against the rock of the profoundly difficult, and you are not under any obligation to do such a thing, to punish yourself so.
As a result of this thinking, are you devoting every moment of your time and every Joule of your energy towards avoiding a negative Singularity?
No?
No, me neither. If I were to reason this way, the inevitable result for me would be that I couldn’t bear to think about it at all and I’d live my whole life neither happily nor productively, and I suspect the same is true for you. The risk of burning out and forgetting about the whole thing is high, and that doesn’t maximize utility either. You will be able to bring about bigger changes much more effectively if you look after yourself. So, sure, it’s worth wondering if you can do more to bring about a good outcome for humanity—but don’t make gigantic changes that could lead to burnout. Start from where you are, and step things up as you are able.
Let’s say the Singularity is likely to happen in 2045, like Kurzweil says, and you want to maximize the chances that it’s positive. The idea is that you should get to work making as much money as you can to donate to SIAI, or that you should start researching friendly AGI (depending on your talents). What you do tomorrow doesn’t matter. What matters is the average output over the next 35 years.
This is important because a strategy where you have an emotional breakdown in 2020 fails. If you get so miserable that you kill yourself, you’ve failed at your goal. You need to make sure that this fallible agent, XiXiDu, stays at a very high level of productivity for the next 35 years. That almost never happens if you’re not fulfilling the needs your monkey brain demands.
Immediate gratification isn’t a terminal goal, you’ve figured this out, but it does work as an instrumental goal on the path of a greater goal.
One thing that I’ve come up with when thinking about personal budgeting, of all things, is the concept of granularity. For someone who is poor, the situation is analogous to yours. The dad, let’s say, of the household might be having a similar attack of conscience as you are over whether he should buy a candy bar at the gas station, when there are bills that can’t be paid.
But it turns out that a small enough purchase, such as a really cheap candy bar (for the sake of argument), doesn’t actually make any difference. No bill is going to go from being unpaid to paid because that candy was bought rather than unbought.
So relax. Buy a candy bar every once in a while. It won’t make a difference.
I don’t tell people this very often. In fact I’m not sure I can recall ever telling anyone this before, but then I wouldn’t necessarily remember it. But yes, in this case and in these exact circumstances, you need to get laid.
Totally can relate to this. I was dealing with depression long before LW, but improved rationality sure made my depression much more fun and exciting. Sarcastically, I could say that LW gave me the tools to be really good at self-criticism.
I can’t exactly give you any advice on this, as I’m still dealing with this myself and I honestly don’t really know what works or even what the goal exactly is. Just wanted to say that the feeling “this compromise ‘have some fun now’ crap shouldn’t be necessary if I really were rational!” is only too familiar.
It led me to constantly question my own values and how much I was merely signalling (mostly to myself). Like, “if I procrastinate on $goal or if I don’t enjoy doing $maximally_effective_but_boring_activity, then I probably don’t really want $goal”, but that just leads into deeper madness. And even when I understand (from results, mostly, or comparisons to more effective people) that I must be doing something wrong, I break down until I can exactly identify what it is. So I self-optimize so that I can be better at self-optimizing, but I never get around to doing anything.
(That’s not to say that LW was overall a negative influence for me. Quite the opposite. It’s just that adding powerful cognitive tools to a not-too-sane mind has a lot of nasty side-effects.)
“if I procrastinate on $goal or if I don’t enjoy doing $maximally_effective_but_boring_activity, then I probably don’t really want $goal”, but that just leads into deeper madness.
If I understood this correctly (as you procrastinating on something, and concluding that you don’t actually want it), then most people around here call that akrasia.
Which isn’t really something to go mad about. Basically, your brain is a stapled together hodgepodge of systems which barely work together well enough to have worked in the ancestral environment.
Nowadays, we know and can do much more stuff. But there’s no reason to expect that your built in neural circuitry can turn your desire to accomplish something into tangible action, especially when your actions are only in the long term, and non-viscerally, related to you accomplishing your goal.
It’s not just akrasia, or rather, the implication of strong akrasia really weirds me out.
The easiest mechanism to implement goals would not be vulnerable to akrasia. At best it would be used to conserve limited resources, but that’s clearly not the case here. In fact, some goals work just fine, while others fail. This is especially notable when the same activity can have very different levels of akrasia depending on why I’m doing it. Blaming this on hodge-podge circuitry seems false to me (in the general case).
So I look for other explanations, and signaling is a pretty good starting point. What I thought was a real goal was just a social facade, e.g. I don’t want to study, I just want to be seen as having a degree. (Strong evidence for this is that I enjoy reading books for some personal research when I hated literally the same books when I had to read them for class.)
Because of this, I’m generally not convinced that my ability to do stuff is broken (at least not as badly), but rather, that I’m mistaken about what I really want. But as Xixidu mentioned, when you start applying rationality to that, you end up changing your own values in the process and not always in a pretty way.
At least at my end, I’m pretty sure that part of my problem isn’t that signalling is causing me to override my real desires; it’s that something about feeling that I have to signal leads to me not wanting to cooperate, even if the action is something that I would otherwise want to do, or at least not mind all that much.
Writing this has made the issue clearer for me than it’s been, but it’s not completely clear—I think there’s a combination of fear and anger involved, and it’s a goddam shame that my customers (a decent and friendly bunch) are activating stuff that got built up when I was a kid.
I don’t want to study, I just want to be seen as having a degree. (Strong evidence for this is that I enjoy reading books for some personal research when I hated literally the same books when I had to read them for class.)
Fair enough, I guess I misunderstood what you were saying.
But as Xixidu mentioned, when you start applying rationality to that, you end up changing your own values in the process and not always in a pretty way.
I guess it’s not guaranteed to turn out well, and when I was still working through my value-conflicts it wasn’t fun. In the end though, the clarity that I got from knowing a few of my actual goals and values feels pretty liberating. Knowing (some of) what I want makes it soooo much easier for me to figure out how to do things that will make me happy, and with less regret or second thoughts after I decide.
Integrate your utility over time. There are plenty of cheap (in terms of future utility) things that you can do now to enjoy yourself.
Like, eating healthy feels nice and keeps you in better shape for getting more utility. You should do it. Friends help you achieve future goals, and making and interacting with them is fun.
Reframe your “have to”s as “want to”s, if that’s true.
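A hedged gloss of what “integrate your utility over time” amounts to formally (my notation, not the commenter’s): the suggestion is roughly to

\[ \text{maximize } \int_0^T u(t)\,dt \quad \text{rather than} \quad u(T) \text{ alone,} \]

so enjoyment now counts toward the total exactly as much as enjoyment later, unless you explicitly discount it.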
Integrate your utility over time. There are plenty of cheap (in terms of future utility) things that you can do now to enjoy yourself.
I know, it would be best to enjoy the journey. But I am not that kind of person. I hate the eventual conclusion being made on LW. I am not saying that it is wrong, which is the problem. For me it only means that life sucks. If you can’t stop caring then life sucks. For a few years after I was able to overcome religion I was pretty happy. I decided that nothing matters and I could just enjoy life, that I am not responsible. But that seems inconsistent, as caring about others is caring about yourself. You also wouldn’t run downstairs faster than necessary just because it is fun to run fast; it is not worth a fracture. And there begins the miserable journey where you never stop to enjoy because it is not worth it. It is like rationality is a parasite that is hijacking you and turning you into a consequentialist that maximizes only rational conduct.
Memetic “basilisk” issue: this subthread may be important:
It is like rationality is a parasite that is hijacking you and turning you into a consequentialist that maximizes only rational conduct.
This (combined with cases such as Roko’s meltdown, as Nisan notes above) appears to be evidence of the possibility of LessWrong rationalism as a memetic basilisk. (Thus suggesting the “basilisks” so far, e.g. the forbidden post, may have whatever’s problematic in the LW memeplex as a prerequisite, which is … disconcerting.) As muflax notes:
It’s just that adding powerful cognitive tools to a not-too-sane mind has a lot of nasty side-effects.
What’s a proper approach to use with those who literally can’t handle that much truth?
What’s a proper approach to use with those who literally can’t handle that much truth?
Good question, though we might also want to take a careful look at whether there’s something a little askew about the truth we’re offering.
How can the folks who can’t handle this stuff easily or perhaps at all be identified?
Rationality helps some depressed people and knocks others down farther.
Even if people at risk can be identified, I can’t imagine a spoiler system which would keep all of them away from the material. On the other hand, maybe there are ways to warn off at least some people.
Well, that question is hardly unique to this forum.
My own preferred tactic depends on whether I consider someone capable of making an informed decision about what they are willing to try to handle—that is, they have enough information, and they are capable of making such judgments, and they aren’t massively distracted.
If I do, I tell them that there’s something I’m reluctant to tell them, because I’m concerned that it will leave them worse off than my silence, but I’m leaving the choice up to them.
If not, then I keep quiet.
In a public forum, though, that tactic is unavailable.
I think you need to be a bit more selfish. The way I see it, the distant future can most likely take care of itself, and if it can’t, then you won’t be able to save it anyway.
If you suddenly were given a very good reason to believe that things are going to turn out Okay regardless of what you personally do, what would you do then?
It’s like you’ll have to become one of those people who work all their life to save money for their retirement, when they are old and have lost most of their interests.
That, and the rest, doesn’t sound rational at all. “Maximizing expected utility” doesn’t mean “systematically deferring enjoyment”; it’s just a nerdy way of talking about tradeoffs when taking risks.
The concept of “expected utility” doesn’t seem to have much relevance at the individual level; it’s more something for comparing government policies, or moral philosophies, or agents in game theory/decision theory … or maybe also some narrow things like investing in stock. But not for deciding whether or not to go rock-climbing.
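For what it’s worth, the textbook definition being gestured at here is just a probability-weighted average over outcomes (standard notation, nothing LW-specific):

\[ \mathbb{E}[U] = \sum_i p_i \, u(x_i), \]

i.e. a way of pricing a risky choice such as “a small chance of a broken ankle against a large chance of a great afternoon”, not a rule that says enjoyment must always be deferred.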
That, and the rest, doesn’t sound rational at all.
I agree, but I can’t pinpoint what is wrong. There are other people here who went bonkers (no offense) thanks to the kind of rationality being taught on LW. Actually Roko stated a few times that he would like to have never learnt about existential risks because of the negative impact it had on his social life etc. I argued that “ignorance is bliss” can under no circumstances be right and that I value truth more than happiness. I think I was wrong. I am not referring to bad things happening to people here but solely to the large amount of positive utility associated with a lot of scenarios that force you to pursue instrumental goals that you don’t enjoy at all. Well, it would probably be better to never exist in the first place; living seems to have an overall negative utility if you are not the kind of person who enjoys being or helping Eliezer Yudkowsky.
What are you doing all day? Is it the most effective way to earn money or to help solve friendly AI directly? I doubt it. And if you know that and still don’t do anything about it, then many people here would call you irrational. It doesn’t matter what you like to do, because whatever you value, there will always be more of it tomorrow if you postpone doing it today and instead pursue an instrumental goal. You can always do something, even if that means you’d have to sell your blood. No excuses there, it is watertight.
And this will never end. It might sound absurd to talk about trying to do something about the heat death of the universe or trying to hack the Matrix, but is it really improbable enough to outweigh the utility associated with gaining the necessary resources to support 3^^^^3 people for 3^^^^3 years rather than a galactic civilization for merely 10^50 years? Give me a good argument for why an FAI shouldn’t devote all its resources to trying to leave the universe rather than supporting a galactic civilization for a few years. How does this differ from devoting all resources to working on friendly AI for a few decades? How much fun could you have in the next few decades? Let’s say you’d have to devote 10^2 years of your life to a positive Singularity to gain 10^50. Now how is this different from devoting the resources that could support you for 10^50 years to the FAI trying to figure out how to support you for 3^^^^3 years? Where do you draw the line and why?
I can. You are trying to “shut up and multiply” (as Eliezer advises) using the screwed up, totally undiscounted, broken-mathematics version of consequentialism taught here. Instead, you should pay more attention to your own utility than to the utility of the 3^^^3itudes in the distant future, and/or in distant galaxies, and/or in simulated realities. You should pay no more attention to their utility than they pay to yours.
Don’t shut up and multiply until someone fixes the broken consequentialist math which is promoted here. Instead, (as Eliezer also advises) get laid or something. Worry more about the happiness of the people (including yourself) within a temporal radius of 24 hours, a spatial radius of a few meters, and in your own branch of the ‘space-time continuum’, than you worry about any region of space-time trillions of times the extent, if that region of space time is also millions of times as distant in time, space, or Hilbert-space phase-product.
(I’m sure Tim Tyler is going to jump in and point out that even if you don’t discount the future (etc.) as I recommend, you should still not worry much about the future because it is so hard to predict the consequences of your actions. Pace Tim. That is true, but beside the point!)
If it is important to you (XiXiDu) to do something useful and Singularity related, why don’t you figure out how to fix the broken expected-undiscounted-utility math that is making you unhappy before someone programs it into a seed AI and makes us all unhappy?
Excuse me, but XiXiDu is taking for granted ideas such as Pascal’s Mugging—in fact Pascal’s Mugging seems to be the main trope here—which were explicitly rejected by me and by most other LWians. We’re not quite sure how to fix it, though Hanson’s suggestion is pretty good, but we did reject Pascal’s Mugging!
It’s not obvious to me that after rejecting Pascal’s Mugging there is anything left to say about XiXiDu’s fears or any reason to reject expected utility maximization(!!!).
It’s not obvious to me that after rejecting Pascal’s Mugging there is anything left to say about XiXiDu’s fears or any reason to reject expected utility maximization(!!!).
Well, in so far as it isn’t obvious why Pascal’s Mugging should be rejected by a utility maximizer, his fears are legitimate. It may very well be that a utility maximizer will always be subject to some form of possible mugging. If that issue isn’t resolved the fact that people are rejecting Pascal’s Mugging doesn’t help matters.
It may very well be that a utility maximizer will always be subject to some form of possible mugging.
I fear that the mugger is often our own imagination. If you calculate the expected utility of various outcomes you imagine impossible alternative actions. The alternatives are impossible because you already precommitted to choosing the outcome with the largest expected utility. There are three main problems with that:
1. You swap your complex values for a certain terminal goal with the highest expected utility, indeed your instrumental and terminal goals converge to become the expected utility formula.
2. There is no minimum amount of empirical evidence necessary to extrapolate the expected utility of an outcome.
3. The extrapolation of counterfactual alternatives is unbounded, logical implications can reach out indefinitely without ever requiring new empirical evidence.
All this can cause any insignificant inference to exhibit hyperbolic growth in utility.
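A toy illustration of the dynamic being described, with every number below invented purely for illustration: once the imagined payoff can grow without any matching evidence requirement, an arbitrarily small probability still wins the naive expected-value comparison.

```python
# Toy "mugging" arithmetic; all numbers are made up for illustration only.
p_mundane, u_mundane = 0.9, 100        # well-understood plan, modest payoff
p_wild, u_wild = 1e-30, 1e40           # wildly speculative claim, astronomical payoff

print(p_mundane * u_mundane)  # 90.0
print(p_wild * u_wild)        # 1e+10 -- the speculation dominates anyway
```

Unless something caps the payoff or penalizes the prior (as discussed further down the thread), the second term always wins.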
I don’t trust my brain’s claims of massive utility enough to let it dominate every second of my life. I don’t even think I know what, this second, would be doing the most to help achieve a positive singularity.
I’m also pretty sure that my utility function is bounded, or at least hits diminishing returns really fast.
I know that thinking my head off about every possible high-utility counterfactual will make me sad, depressed, and indecisive, on top of ruining my ability to make progress towards gaining utility.
So I don’t worry about it that much. I try to think about these problems in doses that I can handle, and focus on what I can actually do to help out.
I don’t trust my brain’s claims of massive utility enough to let it dominate every second of my life.
Yet you trust your brain enough to turn down claims of massive utility. Given that our brains could not evolve to yield reliable intuitions about such scenarios, and given that the parts of rationality that we do understand very well in principle are telling us to maximize expected utility, what does it mean not to trust your brain? In all of the scenarios in question that involve massive amounts of utility, your uncertainty is included and being outweighed. It seems that what you are saying is that you don’t trust your higher-order thinking skills and instead trust your gut feelings? You could argue that you are simply risk averse, but that would require you to set some upper bound regarding bargains with uncertain payoffs. How are you going to define and justify such a limit if you don’t trust your brain?
Anyway, I did some quick searches today and found out that the kind of problems I talked about are nothing new and mentioned in various places and contexts:
The ‘expected value’ of the game is the sum of the expected payoffs of all the consequences. Since the expected payoff of each possible consequence is $1, and there are an infinite number of them, this sum is an infinite number of dollars. A rational gambler would enter a game iff the price of entry was less than the expected value. In the St. Petersburg game, any finite price of entry is smaller than the expected value of the game. Thus, the rational gambler would play no matter how large the finite entry price was. But it seems obvious that some prices are too high for a rational agent to pay to play. Many commentators agree with Hacking’s (1980) estimation that “few of us would pay even $25 to enter such a game.” If this is correct—and if most of us are rational—then something has gone wrong with the standard decision-theory calculations of expected value above. This problem, discovered by the Swiss eighteenth-century mathematician Daniel Bernoulli is the St. Petersburg paradox. It’s called that because it was first published by Bernoulli in the St. Petersburg Academy Proceedings (1738; English trans. 1954).
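To make the divergence concrete, here is a minimal sketch assuming the standard setup the quote refers to (outcome k pays $2^k and occurs with probability 2^-k, so every outcome contributes exactly $1 of expected value):

```python
# St. Petersburg game, standard setup: outcome k pays 2**k dollars with probability 2**-k.
def truncated_expected_value(max_outcomes):
    return sum((0.5 ** k) * (2 ** k) for k in range(1, max_outcomes + 1))

for n in (10, 100, 1000):
    print(n, truncated_expected_value(n))  # 10.0, 100.0, 1000.0 ... grows without bound
```

However many outcomes you truncate the sum at, adding more only increases it, so no finite entry price can exceed the game’s expected value.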
If EDR were accepted, speculations about infinite scenarios, however unlikely and far‐fetched, would come to dominate our ethical deliberations. We might become extremely concerned with bizarre possibilities in which, for example, some kind of deity exists that will use its infinite powers to good or bad ends depending on what we do. No matter how fantastical any such scenario would be, if it is a logically coherent and imaginable possibility it should presumably be assigned a finite positive probability, and according to EDR, the smallest possibility of infinite value would smother all other considerations of mere finite values.
[...]
Suppose that I know that a certain course of action, though much less desirable in every other respect than an available alternative, offers a one‐in‐a‐million chance of avoiding catastrophe involving x people, where x is finite. Whatever else is at stake, this possibility will overwhelm my calculations so long as x is large enough. Even in the finite case, therefore, we might fear that speculations about low‐probability‐high‐stakes scenarios will come to dominate our moral decision making if we follow aggregative consequentialism.
If we consider systems that would value some apparently physically unattainable quantity of resources orders of magnitude more than the apparently accessible resources given standard physics (e.g. resources enough to produce 10^1000 offspring), the potential for conflict again declines for entities with bounded utility functions. Such resources are only attainable given very unlikely novel physical discoveries, making the agent’s position similar to that described in “Pascal’s Mugging” (Bostrom, 2009), with the agent’s decision-making dominated by extremely small probabilities of obtaining vast resources.
You could argue that you are simply risk averse, but that would require you to set some upper bound regarding bargains with uncertain payoffs
I take risks when I actually have a grasp of what they are. Right now I’m trying to organize a DC meetup group, finish up my robotics team’s season, do all of my homework for the next 2 weeks so that I can go college touring, and combine college visits with LW meetups.
After April, I plan to start capoeira, work on PyMC, actually have DC meetups, work on a scriptable real-time strategy game, start contra dancing again, start writing a sequence based on Heuristics and Biases, improve my dietary and exercise habits, and visit Serbia.
All of these things I have a pretty solid grasp of what they entail, and how they impact the world.
I still want to do high-utility things, but I just choose not to live in constant dread of lost opportunity. My general strategy for acquiring utility is to help/make other people get more utility too, and to multiply the effects by going for the low-hanging fruit.
Suppose that I know that a certain course of action
with the agent’s decision-making dominated by extremely small probabilities of obtaining vast resources.
The issue with long-shots like this is that I don’t know where to look for them. Seriously. And since they’re such long-shots, I’m not sure how to go about getting them. I know that trying to do so isn’t particularly likely to work.
Yet you trust your brain enough to turn down claims of massive utility.
Sorry, I said that badly. If I knew how to get massive utility, I would try to. It’s just that the planning is the hard part. The best that I know to do now (note: I am carving out time to think about this harder in the foreseeable future) is to get money and build communities. And give some of the money to SIAI. But in the meantime, I’m not going to be agonizing over everything I could have possibly done better.
It’s not obvious to me that after rejecting Pascal’s Mugging there is anything left to say about XiXiDu’s fears or any reason to reject expected utility maximization(!!!).
Well, nothing philosophically. There’s probably quite a lot to say about, or rather in the aid of, one of our fellows who’s clearly in trouble.
The problem appears to be depression, i.e., more corrupt than usual hardware. Thus, despite the manifestations of the trouble as philosophy, I submit this is not the actual problem here.
We are in disagreement then. I reject not just Pascal’s mugging, but also the style of analysis found in Bostrom’s “Astronomical Waste” paper. As I understand XiXiDu, he has been taught (by people who think like Bostrom) that even the smallest misstep on the way to the Singularity has astronomical consequences and that we who potentially commit these missteps are morally responsible for this astronomical waste.
Is the “Astronomical Waste” paper an example of “Pascal’s Mugging”? If not, how do you distinguish (setting aside the problem of how you justify the distinction)?
We’re not quite sure how to fix it, though Hanson’s suggestion is pretty good …
Do you have a link to Robin’s suggestion? I’m a bit surprised that a practicing economist would suggest something other than discounting. In another Bostrom paper, “The Infinitarian Challenge to Aggregative Ethics”, it appears that Bostrom also recognizes that something is broken, but he, too, doesn’t know how to fix it.
Is the “Astronomical Waste” paper an example of “Pascal’s Mugging”? If not, how do you distinguish (setting aside the problem of how you justify the distinction)?
Exactly, I describe my current confusion in more detail in this thread, especially the comment here and here which led me to conclude this. Fairly long comments, but I wish someone would dissolve my confusion there. I really don’t care if you downvote them to −10, but without some written feedback I can’t tell what exactly is wrong, how I am confused.
Robin Hanson has suggested penalizing the prior probability of hypotheses which argue that we are in a surprisingly unique position to affect large numbers of other people who cannot symmetrically affect us. Since only one in 3^^^^3 people can be in a unique position to ordain the existence of at least 3^^^^3 other people who are not symmetrically in such a situation themselves, the prior probability would be penalized by a factor on the same order as the utility.
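Read as arithmetic (my paraphrase of the suggestion, with N standing for the claimed number of people affected, e.g. 3^^^^3, p for the un-penalized plausibility of the hypothesis, and u for the per-person utility):

\[ \mathbb{E}[U] \;\approx\; \underbrace{\frac{p}{N}}_{\text{penalized prior}} \times \underbrace{N\,u}_{\text{claimed payoff}} \;=\; p\,u, \]

so the mugger’s astronomical factor cancels out and the decision is driven by ordinary-sized quantities again.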
I’m going to be poking at this question from several angles—I don’t think I’ve got a complete and concise answer.
I think you’ve got a bad case of God’s Eye Point of View—thinking that the most rational and/or moral way to approach the universe is as though you don’t exist.
The thing about GEPOV is that it isn’t total nonsense. You can get more truth if you aren’t territorial about what you already believe, but since you actually are part of the universe and you are your only point of view, trying to leave yourself out completely is its own flavor of falseness.
As you are finding out, ignoring your needs leads to incapacitation. It’s like saying that we mustn’t waste valuable hydrocarbons on oil for the car engine. All the hydrocarbons should be used for gasoline! This eventually stops working. It’s important to satisfy needs which are of different kinds and operate on different time scales.
You may be thinking that, since fun isn’t easily measurable externally, the need for it isn’t real.
I think you’re up against something which isn’t about rationality exactly—it’s what I call the emotional immune system. Depression is partly about not being able to resist (or even being attracted to) ideas which cause damage.
An emotional immune system is about having affection for oneself, and if it’s damaged, it needs to be rebuilt, probably a little at a time.
On the intellectual side, would you want all the people you want to help to defer their own pleasure indefinitely?
On the intellectual side, would you want all the people you want to help to defer their own pleasure indefinitely?
No, but I don’t know what a solution would look like. Most of the time I am just overwhelmed, as it feels like everything I come up with isn’t much better than flipping a coin. I just can’t figure out the right balance between fun (experiencing; being selfish), moral conduct (being altruistic), utility maximization (being future-oriented) and my gut feelings (instinct; intuition; emotions). For example, if I have a strong urge to just go out and have fun, should I just give in to that urge or think about it? If I question the urge I often end up thinking about it until it is too late. Every attempt at a possible solution looks like browsing Wikipedia: each article links to other articles that again link to other articles, until you end up with something completely unrelated to the initial article. It seems impossible to apply a lot of what is taught on LW in real life.
NancyLebovitz’s comment I think is highly relevant here.
I can only speak from my personal experience, but I’ve found that part of going through Less Wrong and understanding all the great stuff on this website is understanding the type of creature I am.
At this current moment, I am comparatively a very simple one. In terms of the Singularity and Friendly AI, they are miles from what I am, and I am not at a point where I can emotionally take on those causes. I can intellectually, but the fact is the simple creature that I am doesn’t comprehend those connections yet.
I want to one day, but a Baby has to crawl before it can walk.
Much of what I do provides me with satisfaction, joy, happiness. I don’t even fully understand why. But what I do know is that I need those emotions not just to function, but to improve, to continue the development of myself.
Maybe it might help to reduce yourself to that simple creature. Understand that for a baby to do math, it has to understand symbols. Maybe what you understand intellectually, you’re not yet ready to deal with in terms of emotional function.
Just my two cents. Sorry if I’m not as concise as I should be.
I do hope the best for you though.
I’m sure Tim Tyler is going to jump in and point out that even if you don’t discount the future (etc.) as I recommend, you should still not worry much about the future because it is so hard to predict the consequences of your actions. Pace Tim. That is true, but beside the point!
Peace—I think that is what you meant to say. We mostly agree. I am not sure you can tell someone else what they “should” be doing, though. That is for them to decide. I expect your egoism is not of the evangelical kind.
Saving the planet does have some merits though. People’s goals often conflict—but many people can endorse saving the planet. It is ecologically friendly, signals concern with Big Things, paints you as a Valiant Hero—and so on. As causes go, there are probably unhealthier ones to fall in with.
I’m sure Tim Tyler is going to jump in and point out …
Pace Tim. That is true, but beside the point!
Peace—I think that is what you meant to say.
I’m kinda changing the subject here, but that wasn’t a typo. “Pace” was what I meant to write. Trouble is, I’m not completely sure what it means. I’ve seen it used in contexts that suggest it means something like “I know you disagree with this, but I don’t want to pick a fight. At least not now.” But I don’t know what it means literally, nor even how to pronounce it.
My guess is that it is church Latin, meaning (as you suggest) ‘peace’. ‘Requiescat in pace’ and all that. I suppose, since it is a foreign language word, I technically should have italicized. Can anyone help out here?
living seems to have an overall negative utility if you are not the kind of person who enjoys being or helping Eliezer Yudkowsky.
There is a difference between negative utility and less-than-maximized utility. There are lots of people who enjoy their lives despite not having done as much as they could, even if they know that they could be doing more.
It’s only when you dwell on what you haven’t done, aren’t doing, or could have done that you actually become unhappy about it. If you don’t start from maximum utility and see everything as a worse version of that, then you can easily enjoy the good things in your life.
Give me a good argument for why an FAI shouldn’t devote all its resources to trying to leave the universe rather than supporting a galactic civilization for a few years.
Now this looks like the wrong kind of question to consider in this context. The amount of fun your human existence is delivering, in connection with what you abstractly believe is the better course of action, is relevant, but the details of how an FAI would manage the future are not your human existence’s explicit problem, unless you are working on FAI design.
If it’s better for the FAI to spend the next 3^^^3 multiverse millennia planning the future, why should that have a reflection in your psychological outlook? That’s an obscure technical question. What matters is whether it’s better, not whether it has a certain individual surface feature.
What are you doing all day? Is it the most effective way to earn money or to help solve friendly AI directly? I doubt it. And if you know that and still don’t do anything about it, then many people here would call you irrational.
Irrational seems like the wrong word here; after all, the person could be rational but working with a dataset that does not allow them to reach that conclusion yet. There are also people who reach that conclusion irrationally, reaching the right conclusion with a flawed (unreliable) method, but they are not more rational for having the right conclusions.
So if you enjoy mountain climbing you’ll postpone it until it is safer, or until after the Singularity when you can have much safer mountain climbing.
That presumes no time discounting.
Time discounting is neither rational nor irrational. It’s part of the way one’s utility function is defined, and judgements of instrumental rationality can only be made by reference to a utility function. So there’s not necessarily any conflict between expected utility maximization and having fun now: indeed, one could even have a utility function that only cared about things that happened during the next five seconds, and attached zero utility to everything afterwards. I’m obviously not suggesting that anyone should try to start thinking like that, but I do suggest introducing a little more discounting into your utility measurements.
That’s even without taking into account the advice about needing rest that other people have brought up, and which I agree with completely. I tried going by the “denial of pleasures” route before, and the result was a burnout which began around three years ago and which is still hampering my productivity. If you don’t allow yourself to have fun, you will crash and burn sooner or later.
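A minimal sketch of what “introducing a little more discounting” does to the arithmetic; the 0.95 yearly discount factor below is purely illustrative, not a recommendation:

```python
# Exponential time discounting: utility received t years from now is weighted by gamma**t.
def discounted_utility(utilities_by_year, gamma=0.95):
    return sum(u * gamma ** t for t, u in enumerate(utilities_by_year))

print(discounted_utility([1.0]))               # 1.0   (the same reward, enjoyed now)
print(discounted_utility([0.0] * 50 + [1.0]))  # ~0.08 (the same reward, deferred 50 years)
```

With any nonzero discount rate, “there will always be more of it tomorrow” stops being an automatic trump card over having some of it today.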
Couldn’t you just take all this negative stuff you came up with in connection to rationality, mark it as things to avoid, and then define rationality as efficiently pursuing whatever you actually find desirable?
That would be ignoring the arguments, as opposed to addressing them. How you define “rationality” shouldn’t matter for what particular substantive arguments incite you to do.
If you accept the “rationality is winning” definition, it makes little sense to come up with downsides about rationality, that’s what I was trying to point out.
It is quite similar to what you said in this comment.
If you accept the “rationality is winning” definition, it makes little sense to come up with downsides about rationality, that’s what I was trying to point out.
A wrong way to put it. If a decision is optimal, there still remain specific arguments for why it shouldn’t be taken. Optimality is estimated overall, not for any singled out argument, that can therefore individually lose. See “policy debates shouldn’t appear one-sided”.
If, all else equal, it’s possible to amend a downside, then it’s a bad idea to keep it. But tradeoffs are present in any complicated decision, there will be specialized heuristics that disapprove of a plan, even if overall it’s optimized.
In our case, we have the heuristic of “personal fun”, which is distinct from overall morality. If you’re optimizing morality, you should expect personal fun to remain suboptimal, even if just a little bit.
(Yet another question is that rationality can give independent boost to the ability to have personal fun, which can offset this effect.)
All else equal, if having less fun improves expected utility, you should have less fun. But all else is not equal, it’s not clear to me that the search for more impact often leads to particularly no-fun plans. In other words, some low-hanging fun cuts are to be expected, you shouldn’t play WoW for weeks on end, but getting too far into the no-fun territory would be detrimental to your impact, and the best ways of increasing your impact probably retain a lot of fun. Also, happiness set point would probably keep you afloat.
Most of all it just made me sad and depressed. The whole “expected utility” thing being the worst part. If you take it seriously you’ll forever procrastinate having fun, because you can always imagine that postponing some terminal goal and instead doing something instrumental will yield even more utility in the future. So if you enjoy mountain climbing you’ll postpone it until it is safer, or until after the Singularity when you can have much safer mountain climbing. And then after the Singularity you won’t be able to do it, because the resources for a galactic civilization are better used to fight hostile aliens and afterwards fix the heat death of the universe. There’s always more expected utility in fixing problems; it is always about expected utility, never about gathering or experiencing utility. And if you don’t believe in risks from AI then there is some other existential risk, and if there is no risk then it is poverty in Obscureistan. And if there is nothing at all then you should try to update your estimates, because if you’re wrong you’ll lose more than by trying to figure out if you’re wrong. You never hit diminishing returns. And in the end all your complex values are replaced by the tools and heuristics that were originally meant to help you achieve them. It’s like you’ll have to become one of those people who work all their life to save money for their retirement, when they are old and have lost most of their interests.
What on EARTH are you trying to -
Important note: Currently in NYC for 20 days with the sole purpose of finding out how to make rationalists in the Bay Area (and elsewhere) have as much fun as the ones in NYC. I am doing this because I want to save the world.
Saving the world by overloading it with fun? Now where have I heard that before...
XiXiDu, I have been reading your comments for some time, and it seems like your reaction to this whole rationality business is unique. You take it seriously, or at least part of you does; but your perspective is sad and strange and pessimistic. Yes, even more pessimistic than Roko or Mass Driver. What you are taking away from this blog is not what other readers are taking away from it. The next step in your rationalist journey may require something more than a blog can provide.
From one aspiring rationalist to another, I strongly encourage you to talk these things over, in person, with friends who understand them. If you are already doing so, please forgive my unsolicited advice. If you don’t have friends who know Less Wrong material, I encourage you to find or make them. They don’t have to be Less Wrong readers; many of my friends are familiar with different bits and pieces of the Less Wrong philosophy without ever having read Less Wrong.
(Who voted down this sincere expression of personal feeling? Tch.)
This is why remembering to have fun along the way is important. Remember: you are an ape. The Straw Vulcan is a lie. The unlived life is not so worth examining. Remember to be human.
I know that argument. But I can’t get hold of it. What can I do, play a game? I’ll have to examine everything in terms of expected utility. If I want to play a game I’ll have to remind myself that I really want to solve friendly AI and therefore have to regard “playing a game” as an instrumental goal rather than a terminal goal. And in this sense, can I justify playing a game? You don’t die if you are unhappy; I could just work overtime as a street builder to earn even more money to donate to the SIAI. There is no excuse to play a game, because being unhappy for a few decades cannot outweigh the expected utility of a positive Singularity, and it doesn’t reduce your efficiency as much as playing games and going to movies do. There is simply no excuse to have fun. And that will be the same after the Singularity too.
The reason it’s important is because it counts as basic mental maintenance, just as eating reasonably and exercising a bit and so on are basic bodily maintenance. You cannot achieve any goal without basic self-care.
For the solving friendly AI problem in particular: the current leader in the field has noticed his work suffers if he doesn’t allow play time. You are allowed play time.
You are not a moral failure for not personally achieving an arbitrary degree of moral perfection.
You sound depressed, which would mean your hardware was even more corrupt and biased than usual. This won’t help achieve a positive Singularity either. Driving yourself crazier with guilt at not being able to work for a positive Singularity won’t help your effectiveness, so you need to stop doing that.
You are allowed to rest and play. You need to let yourself rest. Take a deep breath! Sleep! Go on holiday! Talk to friends you trust! See your doctor! Please do something. You sound like you are dashing your mind to pieces against the rock of the profoundly difficult, and you are not under any obligation to do such a thing, to punish yourself so.
As a result of this thinking, are you devoting every moment of your time and every Joule of your energy towards avoiding a negative Singularity?
No?
No, me neither. If I were to reason this way, the inevitable result for me would be that I couldn’t bear to think about it at all and I’d live my whole life neither happily nor productively, and I suspect the same is true for you. The risk of burning out and forgetting about the whole thing is high, and that doesn’t maximize utility either. You will be able to bring about bigger changes much more effectively if you look after yourself. So, sure, it’s worth wondering if you can do more to bring about a good outcome for humanity—but don’t make gigantic changes that could lead to burnout. Start from where you are, and step things up as you are able.
Let’s say the Singularity is likely to happen in 2045, like Kurzweil says, and you want to maximize the chances that it’s positive. The idea is that you should get to work making as much money as you can to donate to SIAI, or that you should start researching friendly AGI (depending on your talents). What you do tomorrow doesn’t matter. What matters is the average output over the next 35 years.
This is important because a strategy where you have an emotional breakdown in 2020 fails. If you get so miserable that you kill yourself, you’ve failed at your goal. You need to make sure that this fallible agent, XiXiDu, stays at a very high level of productivity for the next 35 years. That almost never happens if you’re not fulfilling the needs your monkey brain demands.
Immediate gratification isn’t a terminal goal, you’ve figured this out, but it does work as an instrumental goal on the path of a greater goal.
Ditto
One thing that I’ve come up with when thinking about personal budgeting, of all things, is the concept of granularity. For someone who is poor, the situation is analogous to yours. The dad, let’s say, of the household might be having a similar attack of conscience as you are over whether he should buy a candy bar at the gas station, when there are bills that can’t be paid.
But it turns out that a small enough purchase, such as a really cheap candy bar (for the sake of argument), doesn’t actually make any difference. No bill is going to go from being unpaid to paid because that candy was bought rather than unbought.
So relax. Buy a candy bar every once in a while. It won’t make a difference.
I took too long to link to this.
I don’t tell people this very often. In fact I’m not sure I can recall ever telling anyone this before, but then I wouldn’t necessarily remember it. But yes, in this case and in these exact circumstances, you need to get laid.
Could you expand on why offering this advice makes sense to you in this situation, when it hasn’t otherwise?
Totally can relate to this. I was dealing with depression long before LW, but improved rationality sure made my depression much more fun and exciting. Sarcastically, I could say that LW gave me the tools to be really good at self-criticism.
I can’t exactly give you any advice on this, as I’m still dealing with this myself and I honestly don’t really know what works or even what the goal exactly is. Just wanted to say that the feeling “this compromise ‘have some fun now’ crap shouldn’t be necessary if I really were rational!” is only too familiar.
It led me to constantly question my own values and how much I was merely signalling (mostly to myself). Like, “if I procrastinate on $goal or if I don’t enjoy doing $maximally_effective_but_boring_activity, then I probably don’t really want $goal”, but that just leads into deeper madness. And even when I understand (from results, mostly, or comparisons to more effective people) that I must be doing something wrong, I break down until I can exactly identify what it is. So I self-optimize so that I can be better at self-optimizing, but I never get around to doing anything.
(That’s not to say that LW was overall a negative influence for me. Quite the opposite. It’s just that adding powerful cognitive tools to a not-too-sane mind has a lot of nasty side-effects.)
If I understood this correctly (as you procrastinating on something, and concluding that you don’t actually want it), then most people around here call that akrasia.
Which isn’t really something to go mad about. Basically, your brain is a stapled together hodgepodge of systems which barely work together well enough to have worked in the ancestral environment.
Nowadays, we know and can do much more stuff. But there’s no reason to expect that your built in neural circuitry can turn your desire to accomplish something into tangible action, especially when your actions are only in the long term, and non-viscerally, related to you accomplishing your goal.
It’s not just akrasia, or rather, the implication of strong akrasia really weirds me out.
The easiest mechanism to implement goals would not be vulnerable to akrasia. At best it would be used to conserve limited resources, but that’s clearly not the case here. In fact, some goals work just fine, while others fail. This is especially notable when the same activity can have very different levels of akrasia depending on why I’m doing it. Blaming this on hodge-podge circuitry seems false to me (in the general case).
So I look for other explanations, and signaling is a pretty good starting point. What I thought was a real goal was just a social facade, e.g. I don’t want to study, I just want to be seen as having a degree. (Strong evidence for this is that I enjoy reading books for some personal research when I hated literally the same books when I had to read them for class.)
Because of this, I’m generally not convinced that my ability to do stuff is broken (at least not as badly), but rather, that I’m mistaken about what I really want. But as Xixidu mentioned, when you start applying rationality to that, you end up changing your own values in the process and not always in a pretty way.
At least at my end, I’m pretty sure that part of my problem isn’t that signalling is causing me to override my real desires; it’s that something about feeling that I have to signal leads to me not wanting to cooperate, even if the action is something that I would otherwise want to do, or at least not mind all that much.
Writing this has made the issue clearer for me than it’s been, but it’s not completely clear—I think there’s a combination of fear and anger involved, and it’s a goddam shame that my customers (a decent and friendly bunch) are activating stuff that got built up when I was a kid.
Fair enough, I guess I misunderstood what you were saying.
I guess it’s not guaranteed to turn out well, and when I was still working through my value-conflicts it wasn’t fun. In the end though, the clarity that I got from knowing a few of my actual goals and values feels pretty liberating. Knowing (some of) what I want makes it soooo much easier for me to figure out how to do things that will make me happy, and with less regret or second thoughts after I decide.
Integrate your utility over time. There are plenty of cheap (in terms of future utility) things that you can do now to enjoy yourself.
Like, eating healthy feels nice and keeps you in better shape for getting more utility. You should do it. Friends help you achieve future goals, and making and interacting with them is fun.
Reframe your “have to”s as “want to”s, if that’s true.
I know, it would be best to enjoy the journey. But I am not that kind of person. I hate the eventual conclusion being made on LW. I am not saying that it is wrong, which is the problem. For me it only means that life sucks. If you can’t stop caring then life sucks. For a few years after I was able to overcome religion I was pretty happy. I decided that nothing matters and I could just enjoy life, that I am not responsible. But that seems inconsistent, as caring about others is caring about yourself. You also wouldn’t run downstairs faster than necessary just because it is fun to run fast; it is not worth a fracture. And there begins the miserable journey where you never stop to enjoy because it is not worth it. It is like rationality is a parasite that is hijacking you and turning you into a consequentialist that maximizes only rational conduct.
Memetic “basilisk” issue: this subthread may be important:
This (combined with cases such as Roko’s meltdown, as Nisan notes above) appears to be evidence of the possibility of LessWrong rationalism as a memetic basilisk. (Thus suggesting the “basilisks” so far, e.g. the forbidden post, may have whatever’s problematic in the LW memeplex as a prerequisite, which is … disconcerting.) As muflax notes:
What’s a proper approach to use with those who literally can’t handle that much truth?
Good question, though we might also want to take a careful look at whether there’s something a little askew about the truth we’re offering.
How can the folks who can’t handle this stuff easily or perhaps at all be identified?
Rationality helps some depressed people and knocks others down farther.
Even if people at risk can be identified, I can’t imagine a spoiler system which would keep all of them away from the material. On the other hand, maybe there are ways to warn off at least some people.
Well, that question is hardly unique to this forum.
My own preferred tactic depends on whether I consider someone capable of making an informed decision about what they are willing to try to handle—that is, they have enough information, and they are capable of making such judgments, and they aren’t massively distracted.
If I do, I tell them that there’s something I’m reluctant to tell them, because I’m concerned that it will leave them worse off than my silence, but I’m leaving the choice up to them.
If not, then I keep quiet.
In a public forum, though, that tactic is unavailable.
It is common for brains to get hijacked by parasites:
Dan Dennett: Ants, terrorism, and the awesome power of memes
Thanks for the link.
I note that when Dennett lists dangerous memes, he skips the one that gets the most people killed—nationalism.
Don't despair, help will come :)
I think you need to be a bit more selfish. The way I see it, the distant future can most likely take care of itself, and if it can’t, then you won’t be able to save it anyway.
If you suddenly were given a very good reason to believe that things are going to turn out Okay regardless of what you personally do, what would you do then?
That, and the rest, doesn’t sound rational at all. “Maximizing expected utility” doesn’t mean “systematically deferring enjoyment”; it’s just a nerdy way of talking about tradeoffs when taking risks.
The concept of "expected utility" doesn't seem to have much relevance at the individual level; it's more something for comparing government policies, or moral philosophies, or agents in game theory/decision theory … or maybe also some narrow things like investing in stock. But not for deciding whether to go rock-climbing or not.
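To make that concrete, here's a toy sketch of expected utility as nothing more than a way of comparing risky options. The probabilities and payoffs below are entirely invented for illustration; the point is that the arithmetic itself says nothing about deferring enjoyment.

```python
# Expected utility as a way to compare risky options.
# All numbers below are invented purely for illustration.

def expected_utility(outcomes):
    """outcomes: list of (probability, utility) pairs whose probabilities sum to 1."""
    return sum(p * u for p, u in outcomes)

# Option A: a safe weekend hike -- almost certainly a pleasant time.
safe_hike = [(0.99, 10), (0.01, -5)]

# Option B: a riskier climb -- more fun if it goes well, much worse if it doesn't.
risky_climb = [(0.90, 25), (0.10, -50)]

print(expected_utility(safe_hike))    # 9.85
print(expected_utility(risky_climb))  # 17.5
```

Nothing in this comparison forces you to pick the instrumental, no-fun option; it just prices the risk.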
I agree, but I can't pinpoint what is wrong. There are other people here who went bonkers (no offense) thanks to the kind of rationality being taught on LW. Actually, Roko stated a few times that he would like to have never learnt about existential risks because of the negative impact it had on his social life etc. I argued that "ignorance is bliss" can under no circumstances be right and that I value truth more than happiness. I think I was wrong. I am not referring to bad things happening to people here but solely to the large amount of positive utility associated with a lot of scenarios that force you to pursue instrumental goals that you don't enjoy at all. Well, it would probably be better to never exist in the first place; living seems to have an overall negative utility if you are not the kind of person who enjoys being or helping Eliezer Yudkowsky.
What are you doing all day? Is it the most effective way to earn money or to help solve friendly AI directly? I doubt it. And if you know that and still don't do anything about it, then many people here would call you irrational. It doesn't matter what you like to do, because whatever you value, there will always be more of it tomorrow if you postpone doing it today and instead pursue an instrumental goal. You can always do something, even if that means you'd have to sell your blood. No excuses there, it is watertight.
And this will never end. It might sound absurd to talk about trying to do something about the heat death of the universe or trying to hack the Matrix, but is it really improbable enough to outweigh the utility associated with gaining the necessary resources to support 3^^^^3 people for 3^^^^3 years rather than a galactic civilization for merely 10^50 years? Give me a good argument for why an FAI shouldn't devote all its resources to trying to leave the universe rather than supporting a galactic civilization for a few years. How does this differ from devoting all resources to working on friendly AI for a few decades? How much fun could you have in the next few decades? Let's say you'd have to devote 10^2 years of your life to a positive Singularity to gain 10^50 years. Now how is this different from devoting the resources to support you for 10^50 years to the FAI trying to figure out how to support you for 3^^^^3 years? Where do you draw the line, and why?
I can. You are trying to “shut up and multiply” (as Eliezer advises) using the screwed up, totally undiscounted, broken-mathematics version of consequentialism taught here. Instead, you should pay more attention to your own utility than to the utility of the 3^^^3itudes in the distant future, and/or in distant galaxies, and/or in simulated realities. You should pay no more attention to their utility than they pay to yours.
Don't shut up and multiply until someone fixes the broken consequentialist math which is promoted here. Instead, (as Eliezer also advises) get laid or something. Worry more about the happiness of the people (including yourself) within a temporal radius of 24 hours, a spatial radius of a few meters, and in your own branch of the 'space-time continuum', than you worry about any region of space-time trillions of times the extent, if that region of space-time is also millions of times as distant in time, space, or Hilbert-space phase-product.
(I’m sure Tim Tyler is going to jump in and point out that even if you don’t discount the future (etc.) as I recommend, you should still not worry much about the future because it is so hard to predict the consequences of your actions. Pace Tim. That is true, but beside the point!)
If it is important to you (XiXiDu) to do something useful and Singularity-related, why don't you figure out how to fix the broken expected-undiscounted-utility math that is making you unhappy before someone programs it into a seed AI and makes us all unhappy?
Excuse me, but XiXiDu is taking for granted ideas such as Pascal’s Mugging—in fact Pascal’s Mugging seems to be the main trope here—which were explicitly rejected by me and by most other LWians. We’re not quite sure how to fix it, though Hanson’s suggestion is pretty good, but we did reject Pascal’s Mugging!
It’s not obvious to me that after rejecting Pascal’s Mugging there is anything left to say about XiXiDu’s fears or any reason to reject expected utility maximization(!!!).
Well, in so far as it isn’t obvious why Pascal’s Mugging should be rejected by a utility maximizer, his fears are legitimate. It may very well be that a utility maximizer will always be subject to some form of possible mugging. If that issue isn’t resolved the fact that people are rejecting Pascal’s Mugging doesn’t help matters.
I fear that the mugger is often our own imagination. If you calculate the expected utility of various outcomes you imagine impossible alternative actions. The alternatives are impossible because you already precommitted to choosing the outcome with the largest expected utility. There are three main problems with that:
You swap your complex values for a certain terminal goal with the highest expected utility; indeed, your instrumental and terminal goals converge to become the expected utility formula.
There is no minimum amount of empirical evidence necessary to extrapolate the expected utility of an outcome.
The extrapolation of counterfactual alternatives is unbounded, logical implications can reach out indefinitely without ever requiring new empirical evidence.
All this can cause any insignificant inference to exhibit hyperbolic growth in utility.
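A toy illustration of that worry, with invented numbers: once the utility assigned to an imagined outcome is allowed to grow without bound, an almost arbitrarily improbable scenario can dominate the sum, while a bounded utility function kills the effect.

```python
# All figures are invented; this only shows how the arithmetic behaves.

p_mundane, u_mundane = 0.9, 100.0   # a near-certain, ordinary good outcome
p_wild, u_wild = 1e-20, 1e50        # a wildly speculative, astronomically valued outcome

print(p_mundane * u_mundane)        # 90.0
print(p_wild * u_wild)              # 1e+30 -- the speculation dominates anyway

# With a bounded utility function, the speculation becomes negligible:
bound = 1e6
print(p_wild * min(u_wild, bound))  # 1e-14
```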
I don’t trust my brain’s claims of massive utility enough to let it dominate every second of my life. I don’t even think I know what, this second, would be doing the most to help achieve a positive singularity.
I’m also pretty sure that my utility function is bounded, or at least hits diminishing returns really fast.
I know that thinking my head off about every possible high-utility counterfactual will make me sad, depressed, and indecisive, on top of ruining my ability to make progress towards gaining utility.
So I don’t worry about it that much. I try to think about these problems in doses that I can handle, and focus on what I can actually do to help out.
Yet you trust your brain enough to turn down claims of massive utility. Given that our brains could not evolve to yield reliable intuitions about such scenarios, and given that the parts of rationality that we do understand very well in principle are telling us to maximize expected utility, what does it mean not to trust your brain? In all of the scenarios in question that involve massive amounts of utility, your uncertainty is included and being outweighed. It seems that what you are saying is that you don't trust your higher-order thinking skills and instead trust your gut feelings? You could argue that you are simply risk-averse, but that would require you to set some upper bound regarding bargains with uncertain payoffs. How are you going to define and justify such a limit if you don't trust your brain?
Anyway, I did some quick searches today and found out that the kind of problems I talked about are nothing new and are mentioned in various places and contexts (a quick numerical sketch of the first follows the list):
The St. Petersburg Paradox
The Infinitarian Challenge to Aggregative Ethics
Omohundro’s “Basic AI Drives” and Catastrophic Risks
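Here is the quick numerical sketch promised above, for the St. Petersburg game (the 40-round truncation is mine, just to keep the sum finite): a fair coin is flipped until it lands heads, and a first heads on flip k pays 2^k, so every term of the expected-value sum contributes exactly 1 and the total grows without bound, whereas a diminishing-returns (logarithmic) utility converges.

```python
import math

# Expected payoff of the St. Petersburg game, truncated at 40 rounds:
# each term is (1/2**k) * 2**k = 1, so the sum grows linearly with the cap.
ev = sum((0.5 ** k) * (2 ** k) for k in range(1, 41))
print(ev)  # 40.0 -- raise the cap and this grows without limit

# With logarithmic (diminishing-returns) utility the series converges:
eu = sum((0.5 ** k) * math.log(2 ** k) for k in range(1, 41))
print(round(eu, 3))  # ~1.386, i.e. 2 * ln(2)
```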
I take risks when I actually have a grasp of what they are. Right now I'm trying to organize a DC meetup group, finish up my robotics team's season, do all of my homework for the next 2 weeks so that I can go college touring, and combine college visits with LW meetups.
After April, I plan to start capoeira, work on PyMC, actually have DC meetups, work on a scriptable real-time strategy game, start contra dancing again, start writing a sequence based on Heuristics and Biases, improve my dietary and exercise habits, and visit Serbia.
For all of these things, I have a pretty solid grasp of what they entail and how they impact the world.
I still want to do high-utility things, but I just choose not to live in constant dread of lost opportunity. My general strategy for acquiring utility is to help/make other people get more utility too, and to multiply the effects by getting the low-hanging fruit.
The issue with long-shots like this is that I don’t know where to look for them. Seriously. And since they’re such long-shots, I’m not sure how to go about getting them. I know that trying to do so isn’t particularly likely to work.
Sorry, I said that badly. If I knew how to get massive utility, I would try to. It's just that the planning is the hard part. The best that I know to do now (note: I am carving out time to think about this harder in the foreseeable future) is to get money and build communities. And give some of the money to SIAI. But in the meantime, I'm not going to be agonizing over everything I could have possibly done better.
Well, nothing philosophically. There’s probably quite a lot to say about, or rather in the aid of, one of our fellows who’s clearly in trouble.
The problem appears to be depression, i.e., more-corrupt-than-usual hardware. Thus, despite the trouble manifesting itself as philosophy, I submit that philosophy is not the actual problem here.
We are in disagreement then. I reject not just Pascal's mugging, but also the style of analysis found in Bostrom's "Astronomical Waste" paper. As I understand XiXiDu, he has been taught (by people who think like Bostrom) that even the smallest misstep on the way to the Singularity has astronomical consequences and that we who potentially commit these missteps are morally responsible for this astronomical waste.
Is the “Astronomical Waste” paper an example of “Pascal’s Mugging”? If not, how do you distinguish (setting aside the problem of how you justify the distinction)?
Do you have a link to Robin’s suggestion? I’m a bit surprised that a practicing economist would suggest something other than discounting. In another Bostrom paper, “The Infinitarian Challenge to Aggregative Ethics”, it appears that Bostrom also recognizes that something is broken, but he, too, doesn’t know how to fix it.
Exactly. I describe my current confusion in more detail in this thread, especially the comments here and here, which led me to conclude this. Fairly long comments, but I wish someone would dissolve my confusion there. I really don't care if you downvote them to −10, but without some written feedback I can't tell what exactly is wrong, or how I am confused.
Can be found via the Wiki:
I don’t quite get it.
I’m going to be poking at this question from several angles—I don’t think I’ve got a complete and concise answer.
I think you’ve got a bad case of God’s Eye Point of View—thinking that the most rational and/or moral way to approach the universe is as though you don’t exist.
The thing about GEPOV is that it isn’t total nonsense. You can get more truth if you aren’t territorial about what you already believe, but since you actually are part of the universe and you are your only point of view, trying to leave yourself out completely is its own flavor of falseness.
As you are finding out, ignoring your needs leads to incapacitation. It’s like saying that we mustn’t waste valuable hydrocarbons on oil for the car engine. All the hydrocarbons should be used for gasoline! This eventually stops working. It’s important to satisfy needs which are of different kinds and operate on different time scales.
You may be thinking that, since fun isn’t easily measurable externally, the need for it isn’t real.
I think you’re up against something which isn’t about rationality exactly—it’s what I call the emotional immune system. Depression is partly about not being able to resist (or even being attracted to) ideas which cause damage.
An emotional immune system is about having affection for oneself, and if it’s damaged, it needs to be rebuilt, probably a little at a time.
On the intellectual side, would you want all the people you want to help to defer their own pleasure indefinitely?
This sounds very true and important.
As far as I can tell, a great deal of thinking is the result of wanting thoughts which match a pre-existing emotional state.
Thoughts do influence emotions, but less reliably.
No, but I don't know what a solution would look like. Most of the time I am just overwhelmed, as it feels like everything I come up with isn't much better than flipping a coin. I just can't figure out the right balance between fun (experiencing; being selfish), moral conduct (being altruistic), utility maximization (being future-oriented) and my gut feelings (instinct; intuition; emotions). For example, if I have a strong urge to just go out and have fun, should I just give in to that urge or think about it? If I question the urge I often end up thinking about it until it is too late. Every attempt at a possible solution looks like browsing Wikipedia: each article links to other articles that again link to other articles, until you end up with something completely unrelated to the initial article. It seems impossible to apply a lot of what is taught on LW in real life.
Maybe require yourself to have a certain amount of fun per week?
NancyLebovitz’s comment I think is highly relevant here.
I can only speak from my personal experience, but I've found that part of going through Less Wrong and understanding all the great stuff on this website is understanding the type of creature I am. At this current moment, I am comparatively a very simple one. In terms of the Singularity and Friendly AI, they are miles from what I am, and I am not at a point where I can emotionally take on those causes. I can intellectually, but the fact is the simple creature that I am doesn't comprehend those connections yet. I want to one day, but a baby has to crawl before it can walk. Much of what I do provides me with satisfaction, joy, happiness. I don't even fully understand why. But what I do know is that I need those emotions to not just function, but to improve, to continue the development of myself.
Maybe it might help to reduce yourself to that simple creature. Understand that for a baby to do math, it has to understand symbols. Maybe what you understand intellectually, you are not yet ready to deal with in terms of emotional function.
Just my two cents. Sorry if I'm not as concise as I should be. I do hope the best for you, though.
Peace—I think that is what you meant to say. We mostly agree. I am not sure you can tell someone else what they “should” be doing, though. That is for them to decide. I expect your egoism is not of the evangelical kind.
Saving the planet does have some merits though. People’s goals often conflict—but many people can endorse saving the planet. It is ecologically friendly, signals concern with Big Things, paints you as a Valiant Hero—and so on. As causes go, there are probably unhealthier ones to fall in with.
I’m kinda changing the subject here, but that wasn’t a typo. “Pace” was what I meant to write. Trouble is, I’m not completely sure what it means. I’ve seen it used in contexts that suggest it means something like “I know you disagree with this, but I don’t want to pick a fight. At least not now.” But I don’t know what it means literally, nor even how to pronounce it.
My guess is that it is church Latin, meaning (as you suggest) ‘peace’. ‘Requiescat in pace’ and all that. I suppose, since it is a foreign language word, I technically should have italicized. Can anyone help out here?
Latin (from pax “peace”), “with due respect offered to...”, e.g. “pace Brown” means “I respectfully disagree with Brown”, though the disagreement is often in fact not very respectful!
There is a difference between negative utility, and less than maximized utility. There are lots of people who enjoy their lives despite not having done as much as they could, even if they know that they could be doing more.
It's only when you dwell on what you haven't done, aren't doing, or could have done that you actually become unhappy about it. If you don't start from maximum utility and see everything as a worse version of that, then you can easily enjoy the good things in your life.
You seem to be holding yourself morally responsible for future states. Why? My attitude is that it was like this when I got here.
Now this looks like the wrong kind of question to consider in this context. The amount of fun your human existence is delivering, in connection with what you abstractly believe is the better course of action, is relevant, but the details of how an FAI would manage the future are not your human existence's explicit problem, unless you are working on FAI design.
If it’s better for FAI to spend the next 3^^^3 multiverse millenia planning the future, why should that have a reflection in your psychological outlook? That’s an obscure technical question. What matters is whether it’s better, not whether it has a certain individual surface feature.
Irrational seems like the wrong word here; after all, the person could be rational but working with a dataset that does not allow them to reach that conclusion yet. There are also people who reach that conclusion irrationally, i.e. reach the right conclusion with a flawed (unreliable) method, but they are not more rational for having the right conclusion.
Why do you care what happens 3^^^^3 years from now?
That presumes no time discounting.
Time discounting is neither rational nor irrational. It’s part of the way one’s utility function is defined, and judgements of instrumental rationality can only be made by reference to a utility function. So there’s not necessarily any conflict between expected utility maximization and having fun now: indeed, one could even have a utility function that only cared about things that happened during the next five seconds, and attached zero utility to everything afterwards. I’m obviously not suggesting that anyone should try to start thinking like that, but I do suggest introducing a little more discounting into your utility measurements.
That’s even without taking into account the advice about needing rest that other people have brought up, and which I agree with completely. I tried going by the “denial of pleasures” route before, and the result was a burnout which began around three years ago and which is still hampering my productivity. If you don’t allow yourself to have fun, you will crash and burn sooner or later.
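For what it's worth, here is a minimal sketch of what "introducing a little more discounting" can look like. The discount factor and the utility numbers are arbitrary choices of mine; the point is only that deferring everything stops being automatically optimal once the future is discounted at all.

```python
# Exponential time discounting; all numbers are arbitrary illustrations.

def discounted_utility(stream, discount=0.9):
    """stream: utility received in each successive period, starting now."""
    return sum(u * discount ** t for t, u in enumerate(stream))

all_deferred = [0, 0, 0, 100]    # defer all enjoyment to the final period
spread_out   = [30, 30, 30, 30]  # have some fun in every period

print(discounted_utility(all_deferred))  # ~72.9
print(discounted_utility(spread_out))    # ~103.2
```

This isn't an argument that you must discount; it just shows that how much fun you "should" defer is a parameter of the utility function, not a theorem.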
Couldn't you just take all this negative stuff you came up with in connection to rationality, mark it as things to avoid, and then define rationality as efficiently pursuing whatever you actually find desirable?
That would be ignoring the arguments, as opposed to addressing them. How you define “rationality” shouldn’t matter for what particular substantive arguments incite you to do.
If you accept the “rationality is winning” definition, it makes little sense to come up with downsides about rationality, that’s what I was trying to point out.
It is quite similar to what you said in this comment.
A wrong way to put it. If a decision is optimal, there still remain specific arguments for why it shouldn't be taken. Optimality is estimated overall, not for any singled-out argument, which can therefore individually lose. See "policy debates shouldn't appear one-sided".
If, all else equal, it’s possible to amend a downside, then it’s a bad idea to keep it. But tradeoffs are present in any complicated decision, there will be specialized heuristics that disapprove of a plan, even if overall it’s optimized.
In our case, we have the heuristic of “personal fun”, which is distinct from overall morality. If you’re optimizing morality, you should expect personal fun to remain suboptimal, even if just a little bit.
(Yet another question is that rationality can give independent boost to the ability to have personal fun, which can offset this effect.)
All else equal, if having less fun improves expected utility, you should have less fun. But all else is not equal; it's not clear to me that the search for more impact often leads to particularly no-fun plans. In other words, some low-hanging fun cuts are to be expected (you shouldn't play WoW for weeks on end), but getting too far into no-fun territory would be detrimental to your impact, and the best ways of increasing your impact probably retain a lot of fun. Also, your happiness set point would probably keep you afloat.