I’ve been trying to get MIRI to stop calling this blackmail (extortion for information) and start calling it extortion (because it fits the definition of extortion). Can we use this opportunity to just make the switch?
Academian
Coordination Surveys: why we should survey to organize responsibilities, not just predictions
Unrolling social metacognition: Three levels of meta are not enough.
I support this wholeheartedly :) CFAR has already created a great deal of value without focusing specifically on AI x-risk, and I think it’s high time to start trading the breadth of perspective CFAR has gained from being fairly generalist for some more direct impact on saving the world.
“Brier scoring” is not a very natural scoring rule (log scoring is better; Jonah and Eliezer already covered the main reasons, and it’s what I used when designing the Credence Game for similar reasons; see the scoring-rule sketch after the list below). It also sets off a negative reaction in me when I see someone naming their world-changing strategy after it. It makes me think the people naming their strategy don’t have enough mathematician friends to advise them otherwise… which, as evidenced by these comments, is not the case for CFAR ;) Possible re-naming options that contrast well with “signal boosting”:
Score boosting
Signal filtering
Signal vetting
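For reference, here is a minimal statement of the two scoring rules being contrasted above; these are the standard textbook definitions rather than anything from CFAR’s write-up. Here p is the forecast distribution over outcomes and j is the outcome that actually occurs.

```latex
% Brier (quadratic) score: depends on the entire forecast vector; lower is better.
\mathrm{Brier}(p, j) = \sum_{i} \bigl(p_i - \mathbf{1}[i = j]\bigr)^2

% Logarithmic score: depends only on the probability assigned to the realized
% outcome ("local"); higher is better.
\mathrm{Log}(p, j) = \log p_j
```

Both are proper scoring rules; the usual argument for calling the log score more natural is that it is local and adds up across independent questions (since log-probabilities add where probabilities multiply).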
This is a cryonics-fails story, not a cryonics-works-and-is-bad story.
Seems not much worse than actual-death, given that in this scenario you could still choose to actually-die if you didn’t like your post-cryonics life.
Seems not much worse than actual-death, given that in this scenario you (or the person who replaces you) could still choose to actually-die if you didn’t like your post-cryonics life.
This is an example where cryonics fails, and so not the kind of example I’m looking for in this thread. Sorry if that wasn’t clear from the OP! I’m leaving this comment to hopefully prevent more such examples from distracting potential posters.
Hmm, this seems like it’s not a cryonics-works-for-you scenario, and I did mean to exclude this type of example, though maybe not super clearly:
OP: There’s a separate question of whether the outcome is positive enough to be worth the money, which I’d rather discuss in a different thread.
(2) A rich sadist finds it somehow legally or logistically easier to lay hands on the brains/minds of cryonics patients than of living people, and runs some virtual torture scenarios on me where I’m not allowed to die for thousands of subjective years or more.
(1) A well-meaning but slightly-too-obsessed cryonics scientist wakes up some semblance of me in a semi-conscious virtual delirium for something like 1000 very unpleasant subjective years of tinkering to try recovering me. She eventually quits, and I never wake up again.
Survey: What’s the most negative*plausible cryonics-works story that you know?
See Nate’s comment above:
http://lesswrong.com/lw/n39/why_cfar_the_view_from_2015/cz99
And, FWIW, I would also consider anything that spends less than $100k to cause a small number of top-caliber researchers to become full-time AI safety researchers to be extremely “effective”.
[This is in fact a surprisingly difficult problem to solve. Aside from my personal experience of how difficult it is to cause people to become safety researchers, I have also been told by some rich, successful AI companies earnestly trying to set up safety research divisions (yay!) that they are unable to hire appropriately skilled people to work full-time on safety.]
Just donated $500 and pledged $6500 more in matching funds (10% of my salary).
I would expect not for a paid workshop! Unlike CFAR’s core workshops, which are highly polished and get median 9/10 and 10/10 “are you glad you came” ratings, MSFP:
was free and experimental,
produced two new top-notch AI x-risk researchers for MIRI (in my personal judgement as a mathematician, and excluding myself), and
produced several others who were willing hires by the end of the program and who I would totally vote to hire if there were more resources available (in the form of both funding and personnel) to hire them.
Deliberate Grad School
1) Logical depth seems super cool to me, and is perhaps the best way I’ve seen for quantifying “interestingness” without mistakenly equating it with “unlikeliness” or “incompressibility”.
2) Despite this, Manfred’s brain-encoding-halting-times example illustrates a way a D(u/h) / D(u)-optimized future could be terrible… do you think this future would not obtain because, despite being human-brain-based, it would not in fact make much use of being on a human brain? That is, it would have extremely high D(u) and therefore be penalized?
I think it would be easy to rationalize/over-fit our reading of this formula to convince ourselves that it matches our intuitions about what a good future looks like. More realistically, I suspect that our favorite futures have relatively high D(u/h) / D(u), but not the highest possible value of D(u/h) / D(u).
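For readers unfamiliar with the notation: D here is Bennett’s logical depth. Below is a hedged sketch of the quantities under discussion, reconstructed from Bennett’s standard definition; the exact formalization in the original post may differ. T(p) is the running time of program p on the universal machine U, K is Kolmogorov complexity, and s is a significance parameter.

```latex
% Logical depth of the universe-history u at significance level s:
% the fastest running time among near-minimal programs that output u.
D_s(u) = \min\{\, T(p) : U(p) = u,\ |p| \le K(u) + s \,\}

% Conditional variant (written D(u/h) in the comment above): programs
% additionally receive h (humanity, or a human brain) as auxiliary input.
D_s(u \mid h) = \min\{\, T(p) : U(p, h) = u,\ |p| \le K(u \mid h) + s \,\}
```

The proposal under discussion scores a future u by the ratio D(u/h) / D(u), so a future with extremely high unconditional depth D(u) is penalized through the denominator, which is what the question in 2) above is getting at.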
On reflection, I endorse the conclusion and arguments in this post. I also like that it’s short and direct. Stylistically, it argues for a behavior change among LessWrong readers who sometimes make surveys, rather than being targeted at general LessWrong readers. In particular, the post doesn’t spend much time or space building interest about surveys or taking a circumspect view of them. For this reason, I might suggest adding something to the top of the original post like “Target audience: LessWrong readers who often or occasionally make formal or informal surveys about the future of tech; Epistemic status: action-oriented; recommends behavior changes.”

It might be nice to have a longer version of the post that takes a more circumspect view of surveys and coordination surveys, that is more optimized for interestingness to general LessWrong readers, and that is less focused on recommending a change of behavior to a specific subset of readers. I wouldn’t want this shorter, more direct version to be fully replaced by the longer, more broadly interesting version, though, because I’m still glad to have a short and sweet statement somewhere that just directly and publicly explains the recommended behavior change.