It’s probably too small scale to be statistically significant. The God acts on large sample sizes and problems with many different bottlenecks. I would guess that most of the cost was tied up in a single technique.
Status works like OP describes, when going from “dregs” to “valued community member”. Social safety is a very basic need, and EA membership undermines that for many people by getting them to compare themselves to famous EAs, rather than to a more realistic peer group. This is especially true in regions with a lower density of EAs, or where all the ‘real’ EAs pack up and move to higher density regions.
I think the OP meant “high” as a relative term, compared to many people who feel like dregs.
People don’t have that amount of fine control over their own psychology. Depression isn’t something people ‘do to themselves’ either, at least not with the common implications of that phrase.
Also, this was a minimal definition based on a quick search of relevant literature for demonstrated effects, as I intended to indicate with “at least”. Effects of objectification in the perpetrator are harder to disentangle.
Sociology and psychology. Find patterns in human desires and behaviour, and work out universal rules. Either that, or scale up your resources and get yourself an fAI.
‘Happiness’ is a vague term which refers to various prominent sensations and to a more general state, as vague and abstract as CEV (e.g. “Life, Liberty, and the pursuit of Happiness”). ‘Headache’, on the other hand, primarily refers to the sensation.
If you take an aspirin for a headache, your head muscles don’t stop clenching (or whatever else the cause is); it just feels like it for a while. A better pill would stop the clenching, and a better treatment still would make you aware of the physiological cause of the clenching and allow you to change it to your liking.
Having a good factual model of a person would be necessary, and perhaps sufficient, for making that judgment favourably. When moving beyond making people more equal and free in their means, the model should be significantly better than their self-model. After that, the analyst would probably value the fact that the people thus observed care about self-determination in the territory (so no deceiving them into thinking they’re self-determining), and act accordingly.
If people declare that analysing people well enough to know their moral values is itself being a busybody, it becomes harder. First I would note that using the internet without unusual data protection already means a (possibly begrudging) acceptance of such busybodies, up to a point. But in a more inconvenient world, consent or prevention of acute danger are as far as I would be willing to go in just a comment.
In the analogy, water represents the point of the quote (possibly as applied to CEV). You’re saying there is no point. I don’t understand what you’re trying to say in a way that is meaningful, but I won’t bother asking because ‘you can’t do my thinking for me’.
Edit: fiiiine, what do you mean?
Be careful when defining the winner as someone other than the one currently sitting on a mound of utility.
Most LessWrong users at least profess to want to be above social status games, so calling people out on it increases expected comment quality and personal social status/karma, at least a little.
You may not be able to make a horse drink, but you can still lead it to water rather than merely point out it’s thirsty. Teaching is a thing that people do with demonstrated beneficial results across a wide range of topics. Why would this be an exception?
I don’t think that helps AndHisHorse figure out the point.
Congratulations!
I might just have to go try it now.
‘he’ in that sentence (‘that isn’t the procedure he chose’) still referred to Joe. Zubon’s description doesn’t justify the claim; it’s a description of the consequence of the claim.
My original objection was that ‘they’ (“I think they would have given up on this branch already.”) have a different procedure than Joe has (“all you have to do is do a brute force search of the space of all possible actions, and then pick the one with the consequences that you like the most.”). Whomever ‘they’ refers to, you’re expecting them to care about human suffering and be more careful than Joe is. Joe is a living counterexample to the notion that anyone with that kind of power would have given up on our branch already, since he explicitly throws caution to the wind and runs a brute force search of all Joe::future universes using infinite processing power, which would produce an endless array of rejection-worthy universes run at arbitrary levels of detail.
What do you mean by “never-entered” (or “entered”) states? Ones Joe doesn’t (does) declare real to live out? If so, the two probably correlate, but Joe may be mistaken. A full simulation of our universe running on sufficient hardware would contain qualia, so the infinitely powerful process which gives Joe the knowledge he uses to decide which universe is best may contain qualia as well, especially if the process is optimised for ability-to-make-Joe-certain-of-his-decision rather than for Joe’s utility function.
How about now?
While Joe could follow each universe and cut it off when it starts showing disutility, that isn’t the procedure he chose. He opted to create universes and then “undo” them.
I’m not sure whether “undoing” a universe would make the qualia in it not exist. Even if it is removed from time, it isn’t removed from causal history, because the decision to “undo” it depends on the history of the universe.
Read it more carefully. One or several paragraphs before the designated-human aliens, it is mentioned that CelestAI found many sources of complex radio waves which weren’t deemed “human”.
From your username it looks like you’re Dutch (it is literally “the flying Dutchman” in Dutch), so I’m surprised you’ve never heard of the Dutch bible belt and their favourite political party, the SGP. They get about 1.5% of the vote in the national elections and seem pretty legit. And those are just the Christians fervent enough to oppose women’s suffrage. The other two Christian parties have around 15% of the vote, and may contain proper believers as well.
I think he means “I cooperate with the Paperclipper IFF it would one-box on Newcomb’s problem with myself (with my present knowledge) playing the role of Omega, where I get sent to rationality hell if I guess wrong”. In other words: if Eliezer believes that Clippy would one-box in the situation where Eliezer prepares the boxes for one-boxing if he expects Clippy to one-box and for two-boxing if he expects Clippy to two-box, then Eliezer will cooperate with Clippy. Or in other words still: if Eliezer believes Clippy to be ignorant and rational enough that it can’t predict Eliezer’s actions but uses game theory at the same level as he does, then Eliezer will cooperate.
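If it helps, here is a toy way of writing that last rephrasing down (my own restatement; the two booleans are hypothetical stand-ins for Eliezer’s beliefs, not anything from the original post):

```python
# Toy encoding of the rule as I read it; the inputs are stand-ins for
# Eliezer's beliefs about Clippy, purely for illustration.

def eliezer_cooperates(clippy_cannot_predict_eliezer: bool,
                       clippy_uses_same_level_game_theory: bool) -> bool:
    # Eliezer cooperates exactly when he expects Clippy to one-box in the
    # Newcomb problem where Eliezer plays Omega: i.e. when Clippy can't
    # out-predict him but reasons at the same game-theoretic level.
    expects_clippy_to_one_box = (clippy_cannot_predict_eliezer
                                 and clippy_uses_same_level_game_theory)
    return expects_clippy_to_one_box

print(eliezer_cooperates(True, True))    # cooperate
print(eliezer_cooperates(False, True))   # defect: Clippy might be able to exploit him
```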
In the non-iterated prisoner’s dilemma, there is no evidence, so it comes down to priors. If all players are rational mutual one-boxers, and all players are blind except for knowing they’re all mutual one-boxers, then they should expect everyone to make the same choice. If you just decide that you’ll defect (two-box) to outsmart others, you may expect everyone to do so, so you’ll be worse off than if you had decided not to defect (in which case nobody else would rationally do so either). Even if you decide to defect based on a true random number generator, then for the payoff matrix below (your move on the rows, the opponent’s on the columns, payoffs listed as (yours, theirs))
              Cooperate   Defect
Cooperate      (2,2)      (0,3)
Defect         (3,0)      (1,1)
the best option is still to cooperate 100% of the time.
If there are fewer rational agents afoot, the game changes. The expected reward for cooperation becomes 2(xr+(1-d-r)) and the reward for defection becomes 3(xr+(1-d-r))+d+(1-x)r = 1+2(xr+(1-d-r)), where r is the fraction of agents who are rational, d is the fraction expected to defect, x is the probability with which you (and by extension other rational agents) will cooperate, and (1-d-r) is the fraction of agents who will always cooperate. Optimise for x in 2x(xr+(1-d-r))+(1-x)(1+2(xr+(1-d-r))) = 1-x+2(xr+(1-d-r)) = x(2r-1)+3-2d-2r; since only the x(2r-1) term depends on x, you should cooperate 100% of the time if the fraction of agents who are rational r > 0.5, and defect 100% of the time if r < 0.5.
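As a quick sanity check on that algebra (just a rough numerical sketch; it assumes the payoff matrix above and that every rational agent, including you, plays the same x):

```python
# Sanity check of the expected-payoff formula above (illustrative sketch only).
# r = fraction of rational agents, d = fraction who always defect,
# x = probability with which rational agents (including you) cooperate.

def expected_payoff(x, r, d):
    p_opponent_cooperates = x * r + (1 - d - r)
    reward_if_cooperate = 2 * p_opponent_cooperates                             # row (2,2)/(0,3)
    reward_if_defect = 3 * p_opponent_cooperates + (1 - p_opponent_cooperates)  # row (3,0)/(1,1)
    return x * reward_if_cooperate + (1 - x) * reward_if_defect

d = 0.2
for r in (0.3, 0.7):
    best_x = max(range(101), key=lambda k: expected_payoff(k / 100, r, d)) / 100
    print(f"r = {r}: best x = {best_x}")
# Prints best x = 0.0 for r = 0.3 and best x = 1.0 for r = 0.7,
# matching the r > 0.5 threshold derived above.
```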
In the iterated prisoner’s dilemma, this becomes more algebraically complicated, since cooperation is evidence for being cooperative. So, qualitatively: superintelligences which have managed to open bridges between universes are probably/hopefully (P>0.5) rational, so they should cooperate on the last round, and by extension on every round before that. If someone defects, that’s strong evidence that they’re not rational or have bad priors, and if the probability of them being rational drops below 0.5, you should switch to defecting. I’m not sure if you should cooperate if your opponent cooperates after defecting on the first round. Common sense says to give them another chance, but that may be anthropomorphising the opponent.
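To make the switching rule concrete, here is a rough sketch (my own construction; the likelihood numbers are made-up assumptions purely for illustration):

```python
# Rough sketch of the switching policy described above. The likelihoods for
# how often each kind of agent defects are assumed for illustration only.

def update_p_rational(p_rational, opponent_defected,
                      p_defect_if_rational=0.2, p_defect_if_not=0.6):
    """Bayesian update of the probability that the opponent is a rational
    mutual cooperator, given one observed move."""
    if opponent_defected:
        likelihood_rational, likelihood_other = p_defect_if_rational, p_defect_if_not
    else:
        likelihood_rational, likelihood_other = 1 - p_defect_if_rational, 1 - p_defect_if_not
    numerator = p_rational * likelihood_rational
    return numerator / (numerator + (1 - p_rational) * likelihood_other)

def my_move(p_rational):
    # Cooperate while the opponent is more likely rational than not.
    return "cooperate" if p_rational > 0.5 else "defect"

# Example: start optimistic (p = 0.8). With these assumed likelihoods a single
# defection still leaves p above 0.5, but a second pushes it below and we switch.
p = 0.8
for opponent_defected in (True, True):
    p = update_p_rational(p, opponent_defected)
    print(round(p, 3), my_move(p))   # 0.571 cooperate, then 0.308 defect
```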
If the prior probability that inter-universal traders like Clippy and thought experiment::Eliezer are rational is r>0.5, and thought experiment::Eliezer has managed not to make his mental makeup knowable to Clippy and vice versa, then both Eliezer and Clippy ought to expect r>0.5. Therefore they should both decide to cooperate. If Eliezer suspects that Clippy knows Eliezer well enough to predict his actions, then for Eliezer ‘d’ becomes large (Eliezer suspects Clippy will defect if Eliezer decides to cooperate). Eliezer unfortunately can’t let himself be convinced that Clippy would cooperate at this point, because if Clippy knows Eliezer, then Clippy can fake that evidence. This means both players also have a strong motivation not to create suspicion in the other player: knowing the other player would still mean you lose, if the other player finds out you know. Still, if it saves a billion people, both players would want to investigate the other to take victory in the final iteration of the prisoner’s dilemma (using methods which provide as little evidence of the investigation as possible; the appropriate response to catching spies of any sort is defection).
In a sense they did eat gold, like we eat stacks of printed paper, or perhaps nowadays little numbers on computer screens.
Small correction: you want to buy the widget as long as x > 7⁄8 .
You should also almost never expect x>1, because that means you should immediately spend your money on that cause until x becomes 1 or you run out of credit. x=1 means that something is the best marginal way to allocate money that you know of right now.