People are right/just to fear you developing political language if you appear to be actively
The end of this sentence appears to be missing.
More generally, I appreciate this post, and I think it’s a good distillation—as someone who can’t read what it’s a distillation of.
I also think that evaluating distillation quality well is easier with access to the conversation/data being distilled.
Absent any examples of conversations becoming public, it looks like distillation is the way things are going. While I don’t have any reason to suspect there are one or more conspiracies, given this:
It’s *also* the case that private channels enable collusion
was brought up, I am curious how robust distillations are (intended to be) against such things, as well as how one goes about incentivizing “publishing”. For example, I have a model where pre-registered results are better* because they limit certain things like publication bias. I don’t have such a model for “conversations”, which, while valuable, are a different research paradigm. (I don’t have as much of a model for, in general, how to figure out the best thing to do, absent experiments.)
*“Better” in terms of result strength, and not necessarily the best thing (in a utilitarian sense).
btw, full sentence here was supposed to be something like:
People are right/just to fear you developing political language if you appear to be actively trying to wield political weapons against people while you develop it
The key thing I’d want (and do encourage) from Benquo and Jessicata and others is to flag where the distillation seems to be missing important things or mischaracterizing things. (A key property of a good conversation-distillation is that all parties agree that it represents them well)
That said, in this case, I’m mostly just directly using everyone’s words as they originally said them. Distortions might come from my selection process – it so happened that me/Benquo/Jessica wrote comments that seemed like fairly comprehensive takes on our worldviews so hopefully that’s not an issue here.
But I could imagine it being an issue if/when I try to summarize the 8-hour in-person conversation, which didn’t leave as much written record. (My plan is to write it up in Google Doc form and give everyone who participated in the conversation an opportunity to comment on it before posting it publicly.)
Collusion
“Collusion” was something that Benquo had specifically mentioned as a concern.
(Early on, I had sent him an email that was sort of weird, where I was doing a combination of “speaking privately” but not really speaking any more frankly than I would have in public. I think it made sense at the time for me to do this, because I didn’t have a clear sense of how much trust there was between us. But I think it also made sense for that to be a red flag for Benquo.)
I agree that if you’re worried about Benquo/me colluding, there’s not a great way to assuage your concerns fully. But I’m hoping the general practice of doing public distillations that aim to be as clear/honest as possible is at least a step in the right direction.
(My first stab at an additional step is to have common practices of signaling meta-trust, such as flagging places where some kind of collusion was at least plausibly suspicious. This is already fairly common in the form of declaring conflicts of interest, although I have some separate concerns about how that allocates attention that I’ll try to write up later.)