I’ve been trying to get MIRI to stop calling this blackmail (extortion for information) and start calling it extortion (since it fits the definition of extortion). Can we use this opportunity to just make the switch?
I support this, whole-heartedly :) CFAR has already created a great deal of value without focusing specifically on AI x-risk, and I think it’s high time to start trading the breadth of perspective CFAR has gained from being fairly generalist for some more direct impact on saving the world.
“Brier scoring” is not a very natural scoring rule (log scoring is better; Jonah and Eliezer already covered the main reasons, and it’s what I used when designing the Credence Game for similar reasons). It also sets off a negative reaction in me when I see someone naming their world-changing strategy after it. It makes me think the people naming their strategy don’t have enough mathematician friends to advise them otherwise… which, as evidenced by these comments, is not the case for CFAR ;) Possible renaming options that contrast well with “signal boosting”:
Score boosting
Signal filtering
Signal vetting
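(On the Brier-vs-log point above: a minimal sketch in Python of the two scoring rules, using the standard textbook definitions rather than anything CFAR-specific, to show how the log score punishes confident errors much more harshly than the Brier score.)

```python
import math

# Illustrative only: standard definitions of the quadratic (Brier) and
# logarithmic scoring rules for a single binary forecast. Lower is better
# for both as written here.

def brier_score(p: float, outcome: int) -> float:
    """Squared error between the forecast probability p and the 0/1 outcome."""
    return (p - outcome) ** 2

def log_score(p: float, outcome: int) -> float:
    """Negative log of the probability assigned to what actually happened."""
    p_assigned = p if outcome == 1 else 1.0 - p
    return -math.log(p_assigned)

# A confidently wrong forecast (high p for an event that did not happen) is
# penalized far more severely by the log score than by the Brier score,
# which is one standard argument for preferring log scoring.
for p in (0.6, 0.9, 0.99, 0.999):
    print(f"p={p}: brier={brier_score(p, 0):.3f}, log={log_score(p, 0):.3f}")
```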
This is a cryonics-fails story, not a cryonics-works-and-is-bad story.
Seems not much worse than actual-death, given that in this scenario you could still choose to actually-die if you didn’t like your post-cryonics life.
Seems not much worse than actual-death, given that in this scenario you (or the person who replaces you) could still choose to actually-die if you didn’t like your post-cryonics life.
This is an example where cryonics fails, and so not the kind of example I’m looking for in this thread. Sorry if that wasn’t clear from the OP! I’m leaving this comment to hopefully prevent more such examples from distracting potential posters.
Hmm, this seems like it’s not a cryonics-works-for-you scenario, and I did mean to exclude this type of example, though maybe not super clearly:
OP: There’s a separate question of whether the outcome is positive enough to be worth the money, which I’d rather discuss in a different thread.
(1) A well-meaning but slightly-too-obsessed cryonics scientist wakes up some semblance of me in a semi-conscious virtual delirium for something like 1000 very unpleasant subjective years of tinkering as she tries to recover me. She eventually quits, and I never wake up again.
(2) A rich sadist finds it somehow legally or logistically easier to lay hands on the brains/minds of cryonics patients than of living people, and runs some virtual torture scenarios on me where I’m not allowed to die for thousands of subjective years or more.
See Nate’s comment above:
http://lesswrong.com/lw/n39/why_cfar_the_view_from_2015/cz99
And, FWIW, I would also consider anything that spends less than $100k causing a small number of top-caliber researchers to become full-time AI safety researchers to be extremely “effective”.
[This is in fact a surprisingly difficult problem to solve. Aside from personal experience seeing the difficulty of causing people to become safety researchers, I have also been told by some rich, successful AI companies earnestly trying to set up safety research divisions (yay!) that they are unable to hire appropriately skilled people to work full-time on safety.]
Just donated $500 and pledged $6500 more in matching funds (10% of my salary).
I would expect not for a paid workshop! Unlike CFAR’s core workshops, which are highly polished and get median 9/10 and 10/10 “are you glad you came” ratings, MSFP
was free and experimental,
produced two new top-notch AI x-risk researchers for MIRI (in my personal judgement as a mathematician, and excluding myself), and
produced several others who were willing hires by the end of the program and who I would totally vote to hire if there were more resources available (in the form of both funding and personnel) to hire them.
1) Logical depth seems super cool to me, and is perhaps the best way I’ve seen for quantifying “interestingness” without mistakenly equating it with “unlikeliness” or “incompressibility”.
2) Despite this, Manfred’s brain-encoding-halting-times example illustrates a way a D(u/h) / D(u) optimized future could be terrible… do you think this future would not obtain because, despite being human-brain-based, it would not in fact make much use of being on a human brain? That is, it would have extremely high D(u) and therefore be penalized?
I think it would be easy to rationalize/over-fit our intuitions about this formula to convince ourselves that it matches our intuitions about what is a good future. More realistically, I suspect that our favorite futures have relatively high D(u/h) / D(u) but not the highest value of D(u/h) / D(u).
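(For readers skimming: the objective under discussion, in my own notation rather than the OP’s exact formalism; D is Bennett’s logical depth, roughly the running time of a near-minimal-length program producing its argument, u is a candidate future, and h is the human-brain data it is conditioned on.)

```latex
% One reading of the proposal being discussed: prefer futures that are deep
% *given* humanity, normalized by how deep they are outright, so that sheer
% unconditional depth is not rewarded on its own.
\[
  u^{*} \;=\; \operatorname*{arg\,max}_{u} \; \frac{D(u \mid h)}{D(u)}
\]
```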
Great question! It was in the winter of 2013, about a year and a half ago.
Thanks, fixed!
you cannot use the category of “quantum random” to actual coin flip, because an object to be truly so it must be in a superposition of at least two different pure states, a situation that with a coin at room temperature has yet to be achieved (and will continue to be so for a very long time).
Given the level of subtlety in the question, which gets at the relative nature of superposition, this claim doesn’t quite make sense. If I am entangled with a state that you are not entangled with, it may “be superposed” from your perspective but not from any of my various perspectives.
For example: a projection of the universe can be in state
(you observe NULL) ⊗ (I observe UP) ⊗ (photon is spin UP)
+ (you observe NULL) ⊗ (I observe DOWN) ⊗ (photon is spin DOWN)
= (you observe NULL) ⊗ [ (I observe UP) ⊗ (photon is spin UP) + (I observe DOWN) ⊗ (photon is spin DOWN) ]
The fact that your state factors out means you are disentangled from the joint state of me and the particle, and so together the particle and I are “in a superposed state” from “your perspective”. However, my state does not factor out here; there are (at least) two of me, each observing a different outcome and not a superposed photon.
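(To make “factors out” concrete, here is a small numerical sketch with numpy; the basis labels are my own and nothing in it is specific to photons or observers. The reduced state of the “you” subsystem comes out pure, while the reduced state of the “me” subsystem comes out mixed, i.e. entangled with the photon.)

```python
import numpy as np

# Toy check of the factoring argument above (illustrative only).
# Subsystem order in the joint state: (you, me, photon), each 2-dimensional.
up, down = np.array([1.0, 0.0]), np.array([0.0, 1.0])
null = np.array([1.0, 0.0])  # "you observe NULL"

# (you NULL) ⊗ [ (me UP) ⊗ (photon UP) + (me DOWN) ⊗ (photon DOWN) ] / sqrt(2)
joint = np.kron(null, (np.kron(up, up) + np.kron(down, down)) / np.sqrt(2))

# Density matrix, reshaped so each subsystem gets its own pair of indices.
rho = np.outer(joint, joint.conj()).reshape(2, 2, 2, 2, 2, 2)
rho_you = np.einsum('ibcjbc->ij', rho)  # trace out me and the photon
rho_me = np.einsum('aicajc->ij', rho)   # trace out you and the photon

# "You" factor out: your reduced state is pure (rank 1).
# "I" do not: my reduced state is mixed (rank 2), i.e. I am entangled with
# the photon, so the photon is not "superposed from my perspective".
print(np.linalg.matrix_rank(rho_you, tol=1e-10))  # -> 1
print(np.linalg.matrix_rank(rho_me, tol=1e-10))   # -> 2
```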
Anyway, having cleared that up, I’m not convinced that there is enough mutual information connecting my frontal lobe and the coin for the state of the coin to be entangled with me (i.e. not “in a superposed state”) before I observe it. I realize this is testable, e.g., if the state amplitudes of the coin can be forced to have complex arguments differing in a predictable way so as to produce an expected and measurable interference pattern. This is what we have failed to produce at a macroscopic level, and it is this failure that you are talking about when you say
a situation that with a coin at room temperature has yet to be achieved (and will continue to be so for a very long time).
I do not believe I have been shown a convincing empirical test ruling out the possibility that the state is, from my brain’s perspective, in a superposition of vastly many states with amplitudes whose complex arguments are difficult to predict or control well enough to produce clear interference patterns, and half of which are “heads” states and half of which are “tails” states. But I am very ready to be corrected on this, so if anyone can help me out, please do!
Not justify: instead, explain.
I disagree. Justification is the act of explaining something in a way that makes it seem less dirty.
On reflection, I endorse the conclusion and arguments in this post. I also like that it’s short and direct. Stylistically, it argues for a behavior change among LessWrong readers who sometimes make surveys, rather than being targeted at general LessWrong readers. In particular, the post doesn’t spend much time or space building interest about surveys or taking a circumspect view of them. For this reason, I might suggest a change to the original post to add something to the top like “Target audience: LessWrong readers who often or occasionally make formal or informal surveys about the future of tech; Epistemic status: action-oriented; recommends behavior changes.” It might be nice to have a longer version of the post that takes a more circumspect view of surveys and coordination surveys, that is more optimized for interestingness to general LessWrong readers, and that is less focused on recommending a change of behavior to a specific subset of readers. I wouldn’t want this shorter, more direct version to be fully replaced by the longer, more broadly interesting version, though, because I’m still glad to have a short and sweet statement somewhere that just directly and publicly explains the recommended behavior change.