Thanks, fixed!
Academian
Willpower Depletion vs Willpower Distraction
CFAR is looking for a videographer for next Wednesday
you cannot apply the category of “quantum random” to an actual coin flip, because for an object to be truly so, it must be in a superposition of at least two different pure states, a situation that with a coin at room temperature has yet to be achieved (and will continue to be so for a very long time).
Given the level of subtlety in the question, which gets at the relative nature of superposition, this claim doesn’t quite make sense. If I am entangled with a state that you are not entangled with, it may “be superposed” from your perspective but not from any of my various perspectives.
For example: a projection of the universe can be in state
(you observe NULL)⊗(I observe UP)⊗(photon is spin UP) + (you observe NULL)⊗(I observe DOWN)⊗(photon is spin DOWN) = (you observe NULL)⊗((I observe UP)⊗(photon is spin UP) + (I observe DOWN)⊗(photon is spin DOWN))
The fact that your state factors out means you are disentangled from the joint state of me and the particle, and so together the particle and I are “in a superposed state” from “your perspective”. However, my state does not factor out here; there are (at least) two of me, each observing a different outcome and not a superposed photon.
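If you want to check that factoring claim concretely, here is a minimal numpy sketch; the encoding (each of “you”, “me”, and the photon as a single qubit, with NULL/UP as |0⟩ and DOWN as |1⟩) and the variable names are just illustrative assumptions. Tracing out the rest, “your” reduced state comes out pure (rank 1, it factors out), while “mine” comes out mixed (rank 2, entangled with the photon).

```
# Minimal sketch: check which subsystems factor out of the joint state above.
# Assumptions: "you", "me", and the photon are each a single qubit.
import numpy as np

ket0 = np.array([1.0, 0.0])   # |0>: "NULL" / "UP"
ket1 = np.array([0.0, 1.0])   # |1>: "DOWN"

def kron(*vs):
    out = np.array([1.0])
    for v in vs:
        out = np.kron(out, v)
    return out

# (you NULL)⊗(I UP)⊗(photon UP) + (you NULL)⊗(I DOWN)⊗(photon DOWN), normalized
state = (kron(ket0, ket0, ket0) + kron(ket0, ket1, ket1)) / np.sqrt(2)

rho = np.outer(state, state.conj()).reshape(2, 2, 2, 2, 2, 2)

# Reduced state of "you" (trace out me and the photon): rank 1 => pure,
# i.e. your state factors out and you are disentangled from the rest.
rho_you = np.einsum('ijkljk->il', rho)
# Reduced state of "me" (trace out you and the photon): rank 2 => mixed,
# i.e. my state does not factor out; I am entangled with the photon.
rho_me = np.einsum('ijkilk->jl', rho)

print(np.linalg.matrix_rank(rho_you))  # 1
print(np.linalg.matrix_rank(rho_me))   # 2
```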
Anyway, having cleared that up, I’m not convinced that there is enough mutual information connecting my frontal lobe and the coin for the state of the coin to be entangled with me (i.e. not “in a superposed state”) before I observe it. I realize this is testable, e.g., if the state amplitudes of the coin can be forced to have complex arguments differing in a predictable way so as to produce expected and measurable interference patterns. This is what we have failed to produce at a macroscopic level, and it is this failure that you are talking about when you say
a situation that with a coin at room temperature has yet to be achieved (and will continue to be so for a very long time).
I do not believe I have been shown a convincing empirical test ruling out the possibility that the state is, from my brain’s perspective, in a superposition of vastly many states with amplitudes whose complex arguments are difficult to predict or control well enough to produce clear interference patterns, and half of which are “heads” states and half of which are “tails” states. But I am very ready to be corrected on this, so if anyone can help me out, please do!
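To illustrate what I mean about the phases, here is a toy two-path sketch in numpy (the Gaussian amplitudes are made up, and nothing here models an actual coin): when the relative phase between branches is fixed and known, fringes appear; when it varies unpredictably from run to run, the averaged pattern is indistinguishable from a classical mixture, even though each individual run is a genuine superposition.

```
# Toy illustration: uncontrolled relative phases hide interference.
import numpy as np

x = np.linspace(-4, 4, 400)                          # detector positions
psi1 = np.exp(-(x - 1) ** 2) * np.exp(1j * 3 * x)    # amplitude via path 1
psi2 = np.exp(-(x + 1) ** 2) * np.exp(-1j * 3 * x)   # amplitude via path 2

# Fixed, known relative phase: the cross term survives and fringes appear.
coherent = np.abs(psi1 + psi2) ** 2

# Unpredictable relative phase: average over many random phases.  The cross
# term averages to ~0, so the averaged pattern looks like a classical mixture
# even though every single run is a superposition.
rng = np.random.default_rng(0)
phases = rng.uniform(0, 2 * np.pi, size=2000)
dephased = np.mean(
    [np.abs(psi1 + np.exp(1j * p) * psi2) ** 2 for p in phases], axis=0
)
mixture = np.abs(psi1) ** 2 + np.abs(psi2) ** 2

print(np.max(np.abs(coherent - mixture)))   # sizable: visible interference
print(np.max(np.abs(dephased - mixture)))   # tiny: interference washed out
```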
Are coin flips quantum random to my conscious brain-parts?
Not justify: instead, explain.
I disagree. Justification is the act of explaining something in a way that makes it seem less dirty.
If you’re curious about someone else’s emotions or perspective, first, remember that there are two ways to encode knowledge of how someone else feels: by having a description of their feelings, or by empathizing and actually feeling them yourself. It is more costly—in terms of emotional energy—to empathize with someone, but if you care enough about them to afford them that cost, I think it’s the way to go. You can ask them to help you understand how they feel, or help you to see things the way they do. If you succeed, they’ll appreciate having someone who can share their perspective.
[LINK] General-audience documentary on cosmology, anthropics, and superintelligence
My summary of this idea has been that life is a non-convex optimization problem. Hill-climbing will only get you to the top of the hill that you’re on; getting to other hills requires periodic re-initializing. Existing non-convex optimization techniques are often heuristic rather than provably optimal, and when they are provable, they’re slow.
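A toy sketch of the metaphor, with an invented two-hill objective and step rule: greedy hill-climbing stalls on whatever hill it starts near, while the same climber with periodic random re-initialization usually ends up on the taller one.

```
# Toy illustration: hill-climbing on a non-convex objective, with and without
# random restarts.  The objective and parameters are invented for illustration.
import random
import math

def objective(x):
    # Two hills: a small one near x=0 and a taller one near x=6.
    return math.exp(-(x - 0) ** 2) + 2 * math.exp(-(x - 6) ** 2)

def hill_climb(x, steps=2000, step_size=0.1):
    for _ in range(steps):
        candidate = x + random.uniform(-step_size, step_size)
        if objective(candidate) > objective(x):
            x = candidate
    return x

random.seed(0)

# Single climb starting near the small hill: stuck at its top (value ~1.0).
x_single = hill_climb(0.5)

# Same climber, periodically re-initialized: usually finds the taller hill (~2.0).
best = max((hill_climb(random.uniform(-10, 10)) for _ in range(20)), key=objective)

print(round(objective(x_single), 2), round(objective(best), 2))
```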
And the point of CFAR is to help people become better at filtering good ideas from bad. It is plainly not to produce people who automatically believe the best verbal argument anyone presents to them without regard for what filters that argument has been through, or what incentives the Skilled Arguer might have to utter the Very Convincing Argument for X instead of the Very Very Convincing Argument for Y. And certainly not to have people ignore their instincts; e.g. CFAR constantly recommends Thinking, Fast and Slow by Kahneman, and teaches exercises to extract more information from emotional and physical senses.
What if we also add a requirement that the FAI doesn’t make anyone worse off in expected utility compared to no FAI?
I don’t think that seems reasonable at all, especially when some agents want to engage in massively negative-sum games with others (like those you describe), or have massively discrete utility functions that prevent them from compromising with others (like those you describe). I’m okay with some agents being worse off with the FAI, if that’s the kind of agents they are.
Luckily, I think people, given time to reflect and grow and learn, are not like that, which is probably what made the idea seem reasonable to you.
Non-VNM agents satisfying only axiom 1 have coherent preferences… they just don’t mix well with probabilities.
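If “axiom 1” here is read as the ordering axiom (completeness plus transitivity), one standard illustration is lexicographic preferences: a perfectly coherent total order that admits no single real-valued utility function, so it can’t be combined with probabilities in the VNM expected-utility way. A minimal sketch, with invented outcomes and the obvious attribute-wise mixing rule:

```
# Lexicographic preferences over two-attribute outcomes: compare on the first
# attribute, break ties on the second.  This is a complete, transitive ordering
# (coherent preferences), but the continuity axiom fails: no probability mixture
# of the best and worst outcomes is ever indifferent to the middle one.
from fractions import Fraction

def prefers(a, b):
    """True if outcome a is strictly preferred to b (lexicographic order)."""
    return a > b  # tuple comparison in Python is already lexicographic

A = (1, 0)  # best
B = (0, 1)  # middle
C = (0, 0)  # worst
assert prefers(A, B) and prefers(B, C)

def mix(p, x, y):
    # Lottery "x with probability p, else y", evaluated attribute-wise in
    # expectation (the natural mixing rule for this toy model).
    return tuple(p * xi + (1 - p) * yi for xi, yi in zip(x, y))

# Continuity would require some p with mix(p, A, C) indifferent to B.
# Instead: any p > 0 beats B outright, and p = 0 is beaten by B.
for p in [Fraction(k, 100) for k in range(101)]:
    m = mix(p, A, C)
    assert prefers(m, B) if p > 0 else prefers(B, m)

print("no indifference point: continuity fails, so no expected-utility representation")
```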
Dumb solution: an FAI could have a sense of justice which downweights the utility function of people who are killing and/or procreating to game their representation in the AI’s utility function, or something like that, to disincentivize it. (It’s dumb because I don’t know how to operationalize justice; maybe enough people would not cheat, and would want to punish the cheaters, that the FAI would figure that out.)
Also, given what we mostly believe about moral progress, I think defining morality in terms of the CEV of all people who ever lived is probably okay… they’d probably learn to dislike slavery in the AI’s simulation of them.
Thanks for writing this up!
I don’t see how it could be true even in the sense described in the article without violating Well Foundation somehow
Here’s why I think you don’t get a violation of the axiom of well-foundation from Joel’s answer, starting from way-back-when-things-made-sense. If you want to skim and intuit the context, just read the bold parts.
1) Humans are born and see rocks and other objects. In their minds, a language forms for talking about objects, existence, and truth. When they say “rocks” in their head, sensory neurons associated with the presence of rocks fire. When they say “rocks exist”, sensory neurons associated with “true” fire.
2) Eventually the humans get really excited and invent a system of rules for making cave drawings like “∃” and “x” and “∈” which they call ZFC, which asserts the existence of infinite sets. In particular, many of the humans interpret the cave drawing “∃” to mean “there exists”. That is, many of the same neurons fire when they read “∃” as when they say “exists” to themselves. Some of the humans are careful not to necessarily believe the ZFC cave drawing, and imagine a guy named ZFC who is saying those things… “ZFC says there exists...”.
3) Some humans find ways to write a string of ZFC cave drawings which, when interpreted—when allowed to make human neurons fire—in the usual way, mean to the humans that ZFC is consistent. Instead of writing out that string, I’ll just write “<ZFC is consistent>” in place of it.
4) Some humans apply the ZFC rules to turn the ZFC axiom-cave-drawings and the “<ZFC is consistent>” cave drawing into a cave drawing that looks like this:
“∃ a set X and a relation e such that <(X,e) is a model of ZFC>”
where <(X,e) is a model of ZFC> is a string of ZFC cave drawings that means to the humans that (X,e) is a model of ZFC. That is, for each axiom A of ZFC, they produce another ZFC cave drawing A’ where “∃y” is always replaced by “∃y∈X”, and “∈” is always replaced by “e”, and then derive that cave drawing from the ZFC axiom-cave-drawings and the “<ZFC is consistent>” cave drawing according to the ZFC rules.
Some cautious humans try not to believe that X really exists… only that ZFC and the consistency of ZFC imply that X exists. In fact if X did exist and ZFC meant what it usually does, then X would be infinite.
5) The humans derive another cave drawing from ZFC + “<ZFC is consistent>”:
“∃Y∈X and f∈X such that <(Y,f) is a model of ZFC>”,
6) The humans derive yet another cave drawing,
“∃ZeY and geX such that <(Z,g) is a model of ZFC>”.
Some of the humans, like me, think for a moment that Z∈Y∈X, and that if ZFC can prove this pattern continues then ZFC will assert the existence of an infinite regress of sets violating the axiom of well-foundation… but actually, we only have “ZeY∈X” … ZFC only says that Z is related to Y by the extra-artificial e-relation that ZFC said existed on X.
I think that’s why you don’t get a contradiction of well-foundation.
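For anyone who prefers the same point compressed into symbols, here is a rough sketch of the chain (glossing over exactly which consistency hypotheses each step needs):

```
% Rough symbolic summary of the argument above (which consistency hypotheses
% each step actually needs is glossed over here).
\[
  \mathrm{ZFC} \;\vdash\; \mathrm{Con}(\mathrm{ZFC}) \;\rightarrow\;
  \exists X\, \exists e\; \bigl( (X,e) \models \mathrm{ZFC} \bigr)
\]
\[
  \text{and, iterating inside } (X,e):\quad
  \exists Y \in X\, \exists f\; \bigl( (Y,f) \models \mathrm{ZFC} \bigr), \qquad
  \exists Z \mathrel{e} Y\, \exists g\; \bigl( (Z,g) \models \mathrm{ZFC} \bigr)
\]
% The resulting "chain" is  Z e Y \in X,  not  Z \in Y \in X:  the last link
% uses the internal relation e rather than real membership, so no infinite
% descending \in-chain appears and the axiom of well-foundation is untouched.
```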
testing this symbol: ∃
That was imprecise, but I was trying to comment on this part of the dialogue using the language that it had established.
Ah, I was asking you because I thought using that language meant you’d made sense of it ;) The language of us “living in a (model of) set theory” is something I’ve heard before (not just from you and Eliezer), which made me think I was missing something. Us living in a dynamical system makes sense, and a dynamical system can contain a model of set theory, so at least we can “live with” models of set theory… we interact with (parts of) models of set theory when we play with collections of physical objects.
Models being static is a matter of interpretation.
Of course, time has been a fourth dimension for ages ;) My point is that set theory doesn’t seem to have a reasonable dynamical interpretation that we could live in, and I think I’ve concluded it’s confusing to talk like that. I can only make sense of “living with” or “believing in” models.
Help me out here…
One of the participants in this dialogue … seems too convinced he knows what model he’s in.
I can imagine living in a simulation… I just don’t understand yet what you mean by living in a model in the sense of logic and model theory, because a model is a static thing. I heard someone once before talk about “what are we in?”, as though the physical universe were a model, in the sense of model theory. He wasn’t able to operationalize what he meant by it, though. So, what do you mean when you say this? Are you considering the physical universe a first-order structure somehow? If so, how? And concerning its role as a model, what formal system are you considering it a model of?
Until I’m destroyed, of course!
… but since Qiaochu asked that we take ultrafinitism seriously, I’ll give a serious answer: something else will probably replace ultrafinitism as my preferred (maximum a posteriori) view of math and the world within 20 years or so. That is, I expect to determine that the question of whether ultrafinitism is true is not quite the right question to be asking, and have a better question by then, with a different best guess at the answer… just because similar changes of perspective have happened to me several times already in my life.
Great question! It was in the winter of 2013, about a year and a half ago.