CFAR’s new mission statement (on our website)
I don’t know who the intended audience for this is, but I think it’s worth flagging that it seemed extremely jargon-heavy to me. I expect this to be off-putting to at least some people you actually want to attract (if it were one of my first interactions with CFAR I would be less inclined to engage again). In several cases you link to explanations of the jargon. This helps, but doesn’t really solve the problem that you’re asking the reader to do a large amount of work.
Some examples from the first few paragraphs:
clear and unhidden
original seeing
original making
existential risk
informational content [non-standard use]
thinker/doer
know the right passwords
double crux
outreach efforts
I got the same feeling, and I would add “inside view” to the list.
I also think that ‘inside view’ might be a bit of an overloaded term. However, I think the meaning CFAR intended was ‘gears-based models’, and that’s even worse CFAR jargon.
Thanks for posting this, I think it’s good to make these things explicit even if it requires effort. One piece of feedback: I think someone who reads this without already knowing what “existential risk” and “AI safety” are will be confused (they suddenly show up in the second bullet point without being defined, though it’s possible I’m missing some context here).
Thanks; good point; will add links.
I found this document kind of interesting, but it felt less like what I normally understand as a mission statement, and more like “Anna’s thoughts on CFAR’s identity”. I think there’s a place for the latter, but I’d be really interested in seeing (a concise version of) the former, too.
If I had to guess right now I’d expect it to say something like:
… but I kind of expect you to think I have the emphasis there wrong in some way.
I like this and the overall website redesign.
A few notes on design (slightly off-topic but potentially valuable):
The pale gray, text-heavy but readable layout, and new, more angular “brain” images suggest seriousness and mental incisiveness, which I think is in keeping with the new mission.
I like the touches of orange. It’s a nice change from the overly blue themes of tech-related images, it’s cheerful and high-contrast, and it has nice Whiggish connotations. It suggests a certain healthy fighting spirit.
Maybe you’ll cover this in a future post, but I’m curious about the outcomes of CFAR’s past AI-specific workshops, especially “CFAR for ML Researchers” and the “Workshop on AI Safety Strategy”.
In case there are folks following Discussion but not Main: this mission statement was released along with:
CFAR’s new focus, and AI Safety
Further discussion of CFAR’s focus on AI safety, and the good things folks wanted from “cause neutrality”
The mission says: “we provide high-quality training to a small number of people we think are in an unusually good position to help the world”.
Is there evidence that the people you are training actually ARE in an unusually good position to help the world? Compared to the baseline of same-IQ people from the first world, of course.
Depending on which parts of physics one has in mind, this seems possibly almost exactly backwards (!!). Quoting from Vladimir_M’s post Some Heuristics for Evaluating the Soundness of the Academic Mainstream in Unfamiliar Fields:
The reference to Smolin is presumably to The Trouble With Physics: The Rise of String Theory, the Fall of a Science, and What Comes Next. Penrose’s recent book Fashion, Faith, and Fantasy in the New Physics of the Universe also seems relevant.
This is fair; I had in mind basic high school / Newtonian physics of everyday objects. (E.g., “If I drop this penny off this building, how long will it take to hit the ground?”, or, more messily, “If I drive twice as fast, what impact would that have on the kinetic energy with which I would crash into a tree / what impact would that have on how badly deformed my car and I would be if I crash into a tree?”).
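For concreteness, here is a rough sketch of the kind of back-of-the-envelope calculation I have in mind (the building height and car mass below are made-up illustrative numbers, not part of the original example):

```python
# A back-of-the-envelope Newtonian sketch (ignoring air resistance).
# The building height and car mass are made-up illustrative numbers.

g = 9.81           # gravitational acceleration, m/s^2

# "If I drop this penny off this building, how long will it take to hit the ground?"
h = 20.0           # assumed building height, metres
t = (2 * h / g) ** 0.5
print(f"Fall time from {h} m: {t:.2f} s")         # roughly 2.0 s

# "If I drive twice as fast, what happens to the kinetic energy of the crash?"
m = 1500.0         # assumed car mass, kg
for v in (15.0, 30.0):                             # m/s; doubling the speed
    ke = 0.5 * m * v ** 2
    print(f"Kinetic energy at {v} m/s: {ke / 1000:.0f} kJ")
# Doubling the speed quadruples the kinetic energy (KE is proportional to v^2).
```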
This is tricky: basic high school physics lies to you all the time. Example: it says that a penny and a large paper airplane weighing the same as the penny will hit the ground at the same time.
In general, getting the right answers from physics involves knowing the assumptions of the models used and the points at which they break down. Physics will tell you, but not at the high school level, and you have to remember to ask.
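As a rough illustration of that breakdown (my numbers are invented, and a real paper airplane glides rather than simply falls), here is what happens to the “same fall time” prediction once you add even a simple quadratic drag term:

```python
# Rough illustration: fall time in vacuum vs. with quadratic air drag.
# Masses and drag coefficients are made-up values chosen only to show the effect.

def fall_time(height, mass, drag, dt=1e-4, g=9.81):
    """Time to fall `height` metres when drag force = drag * v**2 (Euler integration)."""
    y, v, t = 0.0, 0.0, 0.0
    while y < height:
        a = g - drag * v * v / mass
        v += a * dt
        y += v * dt
        t += dt
    return t

h = 20.0
print("high-school model (vacuum):", round((2 * h / 9.81) ** 0.5, 2), "s")
print("penny (low drag):          ", round(fall_time(h, mass=0.0025, drag=1e-5), 2), "s")
print("paper airplane (high drag):", round(fall_time(h, mass=0.0025, drag=1e-3), 2), "s")
# Same weight, very different fall times -- the vacuum model silently assumed
# drag is negligible, which holds for the penny but not for the airplane.
```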
I don’t believe that you actually have any intention of “reducing existential risk”. Or rather, if you do, you don’t seem to be placing much focus on it.
This statement demonstrates a really poor understanding of basic (random) processes and analogies. You are absolutely right that a person driving a car who has decided to drive a certain distance before turning around should not let uncertainty of direction lead to a reduction of speed. You are absolutely wrong in suggesting that the analogy has any place here.
The conclusion works in the car scenario because the driver cannot take multiple options simultaneously. If he could, say by going at half speed in both directions, that would almost certainly be the best option. CFAR can go in at least nine directions at once if it wants to.
In fact, there’s math behind this.
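Here is a toy version of that math (my own illustration, with simplifying assumptions I am choosing for convenience): suppose exactly one of n candidate directions is correct, each equally likely, and progress in the correct direction has diminishing returns, say utility = sqrt(effort).

```python
# Toy model (my own illustration, with simplifying assumptions): exactly one of
# n candidate directions is "correct", each with probability 1/n, and progress
# in the correct direction has diminishing returns, utility = sqrt(effort).

from math import sqrt

def all_in(n):
    # Spend the whole effort budget (1 unit) on a single guessed direction.
    return (1 / n) * sqrt(1.0)

def split_evenly(n):
    # Spread the budget over all n directions; the correct one gets 1/n effort.
    return sqrt(1 / n)

for n in (2, 4, 9):
    print(f"n={n}: all-in {all_in(n):.3f}  vs  split {split_evenly(n):.3f}")
# For every n > 1, splitting wins (1/sqrt(n) > 1/n). That is exactly the option
# the lone driver does not have, which is why the analogy does not transfer.
```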
That’s not the only point I take issue with, but your statement is so poorly grounded and adamant that I don’t think it would be worthwhile to poke at it piecemeal. If you think I’m wrong, you can start by telling us the model (or models) within which your mission statement helps resolve existential risk.