I don’t trust my brain’s claims of massive utility enough to let it dominate every second of my life. I don’t even think I know what, at this second, would do the most to help achieve a positive singularity.
I’m also pretty sure that my utility function is bounded, or at least hits diminishing returns really fast.
I know that thinking my head off about every possible high-utility counterfactual will make me sad, depressed, and indecisive, on top of ruining my ability to make progress towards gaining utility.
So I don’t worry about it that much. I try to think about these problems in doses that I can handle, and focus on what I can actually do to help out.
I don’t trust my brain’s claims of massive utility enough to let it dominate every second of my life.
Yet you trust your brain enough to turn down claims of massive utility. Given that our brains could not evolve to yield reliable intuitions about such scenarios, and given that the parts of rationality that we do understand very well in principle tell us to maximize expected utility, what does it mean not to trust your brain? In all of the scenarios in question that involve massive amounts of utility, your uncertainty is already included and outweighed. Are you saying that you don’t trust your higher-order thinking skills and instead trust your gut feelings? You could argue that you are simply risk averse, but that would require you to set some upper bound regarding bargains with uncertain payoffs. How are you going to define and justify such a limit if you don’t trust your brain?
Anyway, I did some quick searches today and found out that the kinds of problems I talked about are nothing new and are mentioned in various places and contexts:
The St. Petersburg Paradox:

The ‘expected value’ of the game is the sum of the expected payoffs of all the consequences. Since the expected payoff of each possible consequence is $1, and there are an infinite number of them, this sum is an infinite number of dollars. A rational gambler would enter a game iff the price of entry was less than the expected value. In the St. Petersburg game, any finite price of entry is smaller than the expected value of the game. Thus, the rational gambler would play no matter how large the finite entry price was. But it seems obvious that some prices are too high for a rational agent to pay to play. Many commentators agree with Hacking’s (1980) estimation that “few of us would pay even $25 to enter such a game.” If this is correct—and if most of us are rational—then something has gone wrong with the standard decision-theory calculations of expected value above. This problem, discovered by the Swiss eighteenth-century mathematician Daniel Bernoulli, is the St. Petersburg paradox. It’s called that because it was first published by Bernoulli in the St. Petersburg Academy Proceedings (1738; English trans. 1954).
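To make the quoted arithmetic concrete, here is a minimal Python sketch (my own illustration, not part of the quoted entry): the expected value truncated at n possible rounds is simply $n, so it grows without bound, while simulated payouts from actually playing stay modest.

```python
import random

def truncated_expected_value(max_rounds):
    # The game pays 2**k dollars if the first tail comes up on flip k,
    # which happens with probability 2**-k, so every term contributes exactly $1.
    return sum((0.5 ** k) * (2 ** k) for k in range(1, max_rounds + 1))

def play_once():
    # Pot starts at $2 and doubles on every head; the first tail ends the game.
    pot = 2
    while random.random() < 0.5:
        pot *= 2
    return pot

for n in (10, 100, 1000):
    print(f"expected value truncated at {n} rounds: ${truncated_expected_value(n):,.0f}")

samples = [play_once() for _ in range(100_000)]
print(f"mean payoff over {len(samples):,} simulated games: ${sum(samples) / len(samples):,.2f}")
```

The contrast between the unbounded truncated sums and the small simulated average is exactly the tension the quoted passage describes.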
The Infinitarian Challenge to Aggregative Ethics:

If EDR were accepted, speculations about infinite scenarios, however unlikely and far‐fetched, would come to dominate our ethical deliberations. We might become extremely concerned with bizarre possibilities in which, for example, some kind of deity exists that will use its infinite powers to good or bad ends depending on what we do. No matter how fantastical any such scenario would be, if it is a logically coherent and imaginable possibility it should presumably be assigned a finite positive probability, and according to EDR, the smallest possibility of infinite value would smother all other considerations of mere finite values.
[...]
Suppose that I know that a certain course of action, though much less desirable in every other respect than an available alternative, offers a one‐in‐a‐million chance of avoiding catastrophe involving x people, where x is finite. Whatever else is at stake, this possibility will overwhelm my calculations so long as x is large enough. Even in the finite case, therefore, we might fear that speculations about low‐probability‐high‐stakes scenarios will come to dominate our moral decision making if we follow aggregative consequentialism.
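A quick back-of-the-envelope sketch of the finite case just described (my own numbers, purely hypothetical): once the stakes x are large enough, a one-in-a-million chance swamps an option that is better in every other respect.

```python
# Hypothetical figures for illustration only.
P_AVERT = 1e-6          # one-in-a-million chance of averting the catastrophe
CERTAIN_BENEFIT = 100   # lives saved by the otherwise much better alternative

for x in (10**6, 10**8, 10**10, 10**12):
    expected_lives = P_AVERT * x
    winner = "long shot" if expected_lives > CERTAIN_BENEFIT else "certain option"
    print(f"x = {x:>16,}: expected lives saved = {expected_lives:>14,.0f} -> {winner}")
```

Under naive expected-value reasoning the long shot wins as soon as x exceeds 10^8, which is the domination worry being raised.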
Omohundro’s “Basic AI Drives” and Catastrophic Risks:

If we consider systems that would value some apparently physically unattainable quantity of resources orders of magnitude more than the apparently accessible resources given standard physics (e.g. resources enough to produce 10^1000 offspring), the potential for conflict again declines for entities with bounded utility functions. Such resources are only attainable given very unlikely novel physical discoveries, making the agent’s position similar to that described in “Pascal’s Mugging” (Bostrom, 2009), with the agent’s decision-making dominated by extremely small probabilities of obtaining vast resources.
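Here is a minimal sketch of that last point (my own illustration with assumed numbers, not code from the cited paper): with a bounded, saturating utility function the tiny-probability jackpot contributes almost nothing, whereas with an unbounded linear utility it dominates the decision.

```python
from decimal import Decimal

ACCESSIBLE = Decimal(10) ** 6    # resources reachable under standard physics (assumed figure)
JACKPOT = Decimal(10) ** 1000    # the physically unattainable 10^1000-scale payoff
P_JACKPOT = Decimal("1e-50")     # assumed probability that the jackpot is actually obtainable

def bounded_utility(resources, scale=Decimal(10) ** 6):
    # Saturates at 1: resources far beyond `scale` add almost no further utility.
    return resources / (resources + scale)

def linear_utility(resources):
    # Unbounded: utility grows in direct proportion to resources.
    return resources

for name, u in (("bounded", bounded_utility), ("linear", linear_utility)):
    eu_sure = u(ACCESSIBLE)             # take the accessible resources for certain
    eu_gamble = P_JACKPOT * u(JACKPOT)  # tiny chance at the vast jackpot
    better = "gamble" if eu_gamble > eu_sure else "sure thing"
    print(f"{name:>7} utility: EU(sure) = {eu_sure}, EU(gamble) = {eu_gamble} -> {better}")
```

The bounded agent sticks with the sure thing; the linear agent's decision is dominated by the vanishingly small probability of the vast payoff, which is the Pascal's Mugging situation the passage refers to.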
You could argue that you are simply risk averse, but that would require you to set some upper bound regarding bargains with uncertain payoffs
I take risks when I actually have a grasp of what they are. Right now I’m trying to organize a DC meetup group, finish up my robotics team’s season, do all of my homework for the next 2 weeks so that I can go college touring, and combine college visits with LW meetups.
After April, I plan to start capoeira, work on PyMC, actually have DC meetups, work on a scriptable real-time strategy game, start contra dancing again, start writing a sequence based on Heuristics and Biases, improve my dietary and exercise habits, and visit Serbia.
For all of these things, I have a pretty solid grasp of what they entail and how they impact the world.
I still want to do high-utility things; I just choose not to live in constant dread of lost opportunity. My general strategy for acquiring utility is to help other people get more utility too, and to multiply the effects of picking the low-hanging fruit.
Suppose that I know that a certain course of action
with the agent’s decision-making dominated by extremely small probabilities of obtaining vast resources.
The issue with long-shots like this is that I don’t know where to look for them. Seriously. And since they’re such long-shots, I’m not sure how to go about getting them. I know that trying to do so isn’t particularly likely to work.
Yet you trust your brain enough to turn down claims of massive utility.
Sorry, I said that badly. If I knew how to get massive utility, I would try to. It’s just that the planning is the hard part. The best that I know to do now (note: I am carving out time to think about this harder in the foreseeable future) is to get money and build communities. And give some of the money to SIAI. But in the meantime, I’m not going to be agonizing over everything I could have possibly done better.