Jargon/concepts, for one: there are a lot of Less Wrong-specific terms and concept clusters that aren’t found in the cognitive science literature. There’s also, to a degree, an association of rationality with existential risk, AI, cryonics, and so on; not everyone on LW endorses these topics, but they are talked about.
I would hope that they’re not going to focus on X-Risk, AI, and cryonics much at all, given that those topics aren’t mentioned in the schedule and don’t seem to fit with the material they are promising to deliver.
No, and they weren’t in the curriculum of the spring and summer minicamps either, which I think is a good thing, since those topics tend to be polarizing. However, there was a fair amount of casual discussion among the participants on these topics. This workshop is targeting a different subset of the LW population (at the very least, it’s definitely not targeting me), so I don’t know whether that would still be the case.