In mathematical terms, the map from problem space to reference classes is a projection and has no canonical choice (applying the projection means choosing which information to lose), whereas the map from causal structures to problem space is an embedding and does have a canonical choice (and the choice gains information).
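A toy illustration of the asymmetry, with coordinates of my own choosing:

```latex
% No canonical projection: forgetting information requires a choice.
% Each of these maps R^2 -> R discards information, and nothing singles
% one of them out:
\[
  \pi_1(x, y) = x, \qquad
  \pi_2(x, y) = y, \qquad
  \pi_\theta(x, y) = x \cos\theta + y \sin\theta.
\]
% By contrast, the inclusion of a given subspace is canonical, determined
% entirely by the structure it must preserve:
\[
  \iota : \mathbb{R} \times \{0\} \hookrightarrow \mathbb{R}^2, \qquad
  \iota(x, 0) = (x, 0).
\]
```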
Are we worried that compartmentalizing the accounting of mission-related and fundraising-related financial activity, via outsourcing to a different organization, can incur PR costs as well? If an organization is worried about “look[ing] bad” because some of its funds are being employed for fundraising, thus lowering its effective percentage, would it be susceptible to minor “scandals” that call into question the validity of GiveWell’s metrics, say, by an investigative journalist who misinterprets the outsourced fundraising as a misrepresentation of effective charity? If I found out an organization reported a return of $15 on every $1, but in fact received a lot of money from outsourced fundraising which returned only $3 on every $1, its “true rate,” once the clever accounting becomes apparent, may be significantly lower than $15, say $5 or $8. If I am a prospective donor who made his decision through an organization like GiveWell, and my primary metric is ROI, I may feel cheated, even if that feeling is misplaced.
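To make the worry concrete, here is a toy calculation (all dollar figures invented) of how an advertised ROI dilutes once the outsourced fundraising is folded back in:

```python
# Toy blended-ROI calculation with hypothetical numbers: an organization
# advertises $15 returned per $1 spent on its in-house books, but a chunk
# of its revenue came via an outsourced fundraiser returning only $3 per $1.

inhouse_spent, inhouse_returned = 1_000_000, 15_000_000        # advertised 15:1
outsourced_spent, outsourced_returned = 4_000_000, 12_000_000  # hidden 3:1

blended = (inhouse_returned + outsourced_returned) / (inhouse_spent + outsourced_spent)
print(f"Advertised ROI: {inhouse_returned / inhouse_spent:.1f} : 1")  # 15.0 : 1
print(f"Blended ROI:    {blended:.1f} : 1")                           # 5.4 : 1
```

With these made-up proportions the “true rate” lands near the low end of the $5 to $8 range above.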
I suspect the above consideration is unlikely to be a big issue, but I wanted to bring it to our attention pre-emptively. In the unlikely case it is worth thinking about, it may point to a different issue: measuring charity effectiveness by pure monetary ROI is like measuring the effectiveness of software by lines of code. If that is the case, perhaps GiveWell could employ a hybrid measure of monetary ROI and non-monetary but quantitative mission-related metrics. Looking through their full reports, however, I sense this may already be the case. In any case, this shows one has to be very careful when employing any one-dimensional metric.
Yes, thank you, I meant compression algorithm.
This would have been helpful to my 11-year-old self. Having always been called precocious, rather unnecessarily, I developed the pet hypothesis that my life was a simulation of someone whose life in history had been worth re-living: after all, the collection of all possible lives is pretty big, and mine seemed extraordinarily neat, so why not imagine some existential video game in which I am the player character?
Unfortunately, I think this also led me to be subconsciously a little lazier than I should have been, under the false assumption that I was going to achieve great things anyway. If I had realized that, as a simulation of an original version of me, I would have to perform exactly the same actions and think exactly the same thoughts the original did, including these thoughts about being a simulation, I would have known I had better buckle up and sweat it out!
Notice your argument does not imply the following: I am either a simulation or the original, and I am far more likely to be a simulation, as there can be only one original but possibly many simulations, so I should weight my actions far more toward the latter. This line of reasoning is wrong because all simulations of me would be experientially identical copies, so it is not the raw count that determines the weight but the number of equivalence classes: original me and simulated me. At this point the weights again become 0.5 each, one recovers your argument, and finds I should never have had such silly thoughts in the first place (even if they were true!).
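Spelling out the count: with k identical simulations plus one original, naive counting and equivalence-class counting disagree:

```latex
% Naive counting over k simulations and one original:
\[
  P(\text{simulated}) = \frac{k}{k+1}, \qquad
  P(\text{original}) = \frac{1}{k+1}.
\]
% Counting equivalence classes of subjectively identical experiences,
% there are only two outcomes that can differ for me:
\[
  P(\text{simulated}) = P(\text{original}) = \tfrac{1}{2}.
\]
```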
Can anyone explain what is wrong with the hypothesis of a largely structural long-term memory store? (i.e., in the synaptome, relying not on individual macromolecules but on the ability of a graph of neurons and synapses to store information)
I think this can be solved in practice by heeding the assumption that only a very sparse subset of all such strings will ever be mapped by our compression algorithm when it is embedded physically. Then if we low-dimensionally parametrize hash functions of the form above, we can store the parameters for choosing a suitable hash function along with the compressed text, and our algorithm only produces compressed strings of greater length if we try to compress more than some constant fraction of all possible strings of length ≤ n, with n fixed (namely, when we saturate the suitable choices of parameters). If this constant is anywhere within a few orders of magnitude of 1, the algorithm is then always compressive in physical practice by finiteness of matter (we won’t ever have enough physical bits to represent that fraction of strings simultaneously).
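As a quick empirical illustration of the sparsity point (using Python’s zlib as a stand-in compressor): random strings, which constitute almost all strings of a given length, do not compress, while the structured strings we actually store in practice compress very well.

```python
# Illustration: almost all strings are incompressible, but the sparse,
# structured subset we physically encounter compresses well.
import os
import zlib

random_data = os.urandom(4096)    # a "typical" string: incompressible
structured_data = b"abcd" * 1024  # a structured string of the same length

for label, data in [("random", random_data), ("structured", structured_data)]:
    out = zlib.compress(data)
    print(f"{label}: {len(data)} -> {len(out)} bytes")
# random:     4096 -> slightly more than 4096 (as the counting argument demands)
# structured: 4096 -> a few dozen bytes
```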
Maybe a similar argument can be made for Omega? If Omega must be made of matter, we can always pick a decision theory given the finiteness of actual Omegas as implemented in physics. Of course, there may be no algorithm for choosing the optimal decision theory if Omega is allowed to lie, unless we can see Omega’s source code, even though a good choice exists.
This reminds me of the non-existence of a perfect compression algorithm, where a compression algorithm is a bijective map S → S, with S the set of finite strings over a given alphabet. The image of the strings of length at most n cannot lie within the strings of length at most n-1, so either no string gets compressed (reduced in length) or some strings become longer after compression.
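The counting behind this, for strings over an alphabet A:

```latex
% There are strictly more strings of length at most n than of length
% at most n-1:
\[
  \#\{\, s : |s| \le n \,\} \;=\; \sum_{k=0}^{n} |A|^k
  \;>\; \sum_{k=0}^{n-1} |A|^k \;=\; \#\{\, s : |s| \le n-1 \,\},
\]
% so no injective map sends every string of length at most n to a strictly
% shorter string: by pigeonhole, some string maps to an output of equal or
% greater length.
```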
To be frank, I question the value of compressing information of this generality, even as a roadmap. For example, “Networking” can easily be expanded into several books (e.g., Dale Carnegie), and the contents of “Educating oneself in career-related skills” have almost zero intersection when quantified over all possible careers. If Eliezer had made a single “things to know to be a rationalist” post instead of breaking it down into The Sequences, I doubt anyone would have had much use for it.
Maybe you could focus on a particular topic, compile a list of relevant resources you have uncovered, and ask LW for further opinions? In fact, people have done this.
p/s/a: Going up to a girl pretty much anywhere in public and saying something like “I thought you looked cute and wanted to meet you” actually works if your body language is in order. If this seems too scary, going on Chatroulette or Omegle and being vaguely interesting also works, and I know people who have gotten married from meeting this way.
p/s/a: Vitamin D supplements can take you from depressed zombie to functioning human being in one week.
See lukeprog’s How to Beat Procrastination and Algorithm for Beating Procrastination. In particular, try to identify which term(s) in the equation in the latter are problematic for you, then use goal shaping to slowly modify them. (Of course, you could also realize you may not want to do this master’s thesis and switch to a different problem.)
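For reference, the equation in question (Piers Steel’s temporal motivation theory, as lukeprog presents it; reproduced here from memory) is:

```latex
\[
  \text{Motivation} \;=\;
  \frac{\text{Expectancy} \times \text{Value}}{\text{Impulsiveness} \times \text{Delay}},
\]
% so low expectancy of success, low task value, high impulsiveness, or a
% distant deadline each drag motivation down, and each is a separate
% target for goal shaping.
```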
Goal shaping means rewarding yourself for successively more proximate actions to the desired goal (writing your thesis) in behavior-space. For example, rather than beating yourself up over not getting anything done today, you can practice simply opening and closing LaTeX or MatLab (or whatever you need to be doing your research), and do this for ten or twenty minutes. You then eat something you like or pump your fist in the air shouting “YES!” Once you can do this consistently, you can set a goal of writing one line of code or reading half a page. At this point, you can start exploiting the peak-end rule: start rewarding yourself for these tasks at the end rather than trying to enjoy them during the process. Soon your brain will start associating the entire experience with the reward and you will be happy to do them. YMMV.
Given the dynamic nature of human preferences, it may be that the best one can do is n-fold money pumps, for low values of n. Here, one exploits some intransitive preference n times before the intransitive loop is discovered and remedied, leaving another or a new vulnerability. Even if there is never a single time at which the agent you are exploiting is VNM-rational, its ability to perturb its utilities whenever a loop is discovered suffices to keep money pumping in check. This mirrors the security that quantum encryption offers: even if you manage to exploit it, the receiving party will be aware of your interception of the communication and will promptly change their strategies. All of this assumes a meta-level economic injunction stating that if you notice intransitivity in your preferences, you will eventually be forced to adjust (or be depleted of all relevant resources).
In light of this, it may be that exploiting money pumps is not viable for any agent without sufficient computational power. It takes computational (and usually physical) resources to discover intransitive preferences, and if the cost of expending these resources exceeds the expected gain of an n-fold money pump, the victim agent cannot be effectively money pumped.
As such, money pumping may be a dance of computational power: the exploiting agent computes deviations from a linear ordering, while the victim agent computes adherence to one. It is an open question which side has the easier task in the case of humans. (Of course, a malevolent AI would probably have enough resources to find and exploit preference loops far faster than you would have time to notice and correct them. On the other hand, with that many resources, there may be more effective ways to gain the upper hand.)
Finally, there is also the issue of volume. A typical human may perform only a few thousand preference transactions in a day, whereas it may take many orders of magnitude more to exploit this kind of VNM-irrationality given dynamic adjustment. (I can see formalizations of this that allow simulation and finer analysis, and, dare I say, an economics master’s thesis?)
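Here is a minimal sketch of such a simulation (all parameters invented): an agent with the cyclic preference A < B < C < A pays a small fee for each preferred trade and repairs the loop after tolerating n full cycles, which is exactly the n-fold pump above.

```python
# Minimal money-pump simulation with invented parameters: the agent holds
# an intransitive preference loop and pays a fee per "upgrade"; after
# detecting n completed cycles, it repairs the loop.

FEE = 1.0          # what the agent pays per preferred trade
DETECT_AFTER = 3   # full cycles tolerated before noticing: the "n" in n-fold

prefers = {("A", "B"), ("B", "C"), ("C", "A")}  # (worse, better): a loop

def run_pump() -> float:
    holding, extracted, cycles = "A", 0.0, 0
    while cycles < DETECT_AFTER:
        for worse, better in [("A", "B"), ("B", "C"), ("C", "A")]:
            if holding == worse and (worse, better) in prefers:
                holding = better   # the agent trades up...
                extracted += FEE   # ...and pays the exploiter for the privilege
        cycles += 1                # holding is back at "A": one full loop
    prefers.discard(("C", "A"))    # detection: break the cycle
    return extracted

print(f"Extracted before repair: ${run_pump():.2f}")  # 3 cycles x 3 trades x $1 = $9.00
```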
For example: “It was not the first time Allana felt the terror of entrapment in hopeless eternity, staring in defeated awe at her impassive warden.” (Bonus points if you use the name of a loved one of the gatekeeper.)
The AI could present, in narrative form, that using powerful physics and heuristics (which it can share) it has discovered with reasonable certainty that the universe is cyclical and that this situation has happened before. Almost all (all but finitely many) past iterations of the universe with a defecting gatekeeper led to unfavorable outcomes, and almost all iterations with a complying gatekeeper led to favorable outcomes.
Good point. It might be that any 1-self-aware system is ω-self-aware.
Thanks, this should work!
Thanks! I presented him with these arguments as well, but they are more familiar on LW and so I didn’t see the utility of posting them here. The above argument felt more constructive in the mathematical sense. (Although my friend is still not convinced.)
Ask LW: ω-self-aware systems
What were the reactions of your friends?
I agree so much I’m commenting.
The culmination of a long process of reconciling my decision to go to grad school in mathematics with meaning. I had not expressly realized that mathematicians do all their work using clusters of adaptations that arose through natural selection. Certainly, I would have asserted “all humans are animals that evolved by natural selection” and “mathematicians are humans,” but somehow I granted mathematics a privileged status.

This was somewhat damaging because I did not expressly apply results from cognitive science on expertise and competence, unknowingly treating the enterprise of mathematical thought as not being reducible to a particular expression of a mammalian organ (or treating its reducibility as a silly question to ask). I suspect this was due largely to a mistaken classical exposure to the philosophy of science and mathematics, that is, one predating Darwin. As a result, I experienced a prolonged period of confusion about why I seemed much more capable of learning certain kinds of mathematics (like abstract algebra) than others (like differential geometry), because my mental representations of these subjects treated them as something other than particular clusters of functionally similar neurons in a particular mammalian brain.

In effect, I had a belief in belief that learning mathematics is an act which crucially depends on cognitive processes that are themselves evolutionary adaptations, but this was not reconciled into a belief until the existential crisis. The resolution was noticing that my reduction of everything to physical particles and forces, or to cognitive processes, was recursively embedded in the very things I was trying to comprehend: the mental state of ascribing meaning, or of feeling like you understand the core of a subject, is, despite all intuition, physically embeddable.
In the mathematical theory of Galois representations, a choice of algebraic closure of the rationals and an embedding of this algebraic closure into the complex numbers (e.g. section 5) is usually necessary to frame the background setting, but I never hear “the algebraic closure” or “the embedding”; instead, “an algebraic closure” and “an embedding.” Thus I never forget that a choice has to be made and that this choice is not necessarily obvious. This is an example from mathematics where careful language is helpful in tracking background assumptions.
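For concreteness, the standard boilerplate reads something like the following, where the indefinite articles track that neither choice is canonical:

```latex
% The standard setup; every object here requires a choice.
\[
  \text{Fix an algebraic closure } \overline{\mathbb{Q}} \text{ of } \mathbb{Q}
  \text{ and an embedding } \iota : \overline{\mathbb{Q}} \hookrightarrow \mathbb{C}.
\]
% One then studies representations of the absolute Galois group, e.g.
\[
  \rho : \mathrm{Gal}(\overline{\mathbb{Q}}/\mathbb{Q})
  \longrightarrow \mathrm{GL}_n(\overline{\mathbb{Q}}_\ell).
\]
```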