As I’ve object-level argued above, I believe these summaries fall into the category of misrepresentation, not just interpretation. And I don’t believe an author should maintain such misrepresentations in their text in light of evidence about them.
In any event, certainly a link to my comment is better than nothing. At this point I’m just looking for any gesture in the direction of keeping my name from being associated with positions I do not hold.
On general principle, I think embedding links to people who think they’ve been mischaracterized in a post is fair and good. And if you had crisp pointers to where I mischaracterized you, I would absolutely link to it.
But what you have is a long comment thread asserting that I mischaracterized you and that the mischaracterizations are obvious, and multiple people telling you they’re not, plus (as of this writing) 14 disagreement karma on 15 votes. Linking people to this isn’t informative, it’s a tax, and I view it as a continuation of the pattern of making vegan nutrition costly to talk about.
But of course it’s a terrible norm to trust authors to decide if someone’s challenge to their work is good enough. My ideal solution is something like a bet: I include your disclaimer, we run a poll, and if people say reading your clarifications was uninformative, it costs you something meaningful. And if the votes say I was inaccurate, I’ll at a minimum remove your section and put up a big disclaimer, and rethink the post as a whole, since it rests on my ability to fairly summarize people.
Right now I don’t know of a way to do this that isn’t too vulnerable to carpetbagging. I also don’t know what stakes would be meaningful from you. But conceptually I think this is the right thing to do.
FWIW I just spent a lot of time reading all of the comments (in the original thread and this one) and my position is that Martín Soto’s criticism of his representation is valid, and that he has been obviously mischaracterized.
Can you say how?
(I don’t think you’re obligated to or anything, seems good for you to just note your experience, but hearing more details would be helpful)
I think Martín Soto explained it pretty well in his comments here, but I can try to explain it myself. (I can’t guarantee I will do it well; I originally commented because the stated opinions of commenters voicing an opposing view seemed to be used as evidence.)
The post directly represents Soto as thinking that naive veganism is a made-up problem, despite him not saying that or giving any indication that he thought the problem was fabricated (he literally states that he doesn’t doubt the anecdotes of the commenter speaking about the college group). He just shared that, in his experience, knowledge about vegan supplementation needs was extremely widespread and the norm.
The post also represents Soto as desiring a “policy of suppressing public discussion of nutrition issues with plant-exclusive diets”.
That’s not what he said, and it’s not an accurate interpretation.
He initially commented thanking them for the post in question, providing some criticism and questions. Elizabeth later asked about whether he thinks vegan nutrition issues should be discussed, and for his thoughts on the right way to discuss vegan nutrition issues.
He seems to agree they should be discussed, but he offers a lot of thoughts about how the framing of those discussions is important, along with some other considerations.
He says that in his opinion the consequences of pushing the line she was pushing in the way she was pushing it were probably net negative, but that’s very different from advocating a policy of suppressing public discussion about a topic.
Saying something along the lines of: “this speech about this topic framed in this way probably does more harm than good in my opinion”
Is very different from saying something like: “there should be a policy of suppressing speech about this topic”
Advocating generalized norms around suppressing speech about a broad topic is not the same as stating an opinion that certain speech falling under that topic and framed a certain way might do more harm than good.
I’d like to ask your opinion on something.
Tristan wrote a beautiful comment about prioritizing creating a culture of reverence for life/against suffering, and how that wasn’t very amenable to compromise.
My guess is that if you took someone with Tristan’s beliefs and put them in more challenging discussion circumstances - less writing skill, less clarity, more emotional activation, trying to operate in an incompatible frame rather than answering a direct question about their own values - you might get something that looked a lot like the way you describe Martin’s comments. And what looked like blatant contradictions to me may be a result of carving reality at different joints.
I don’t want to ask you to speak for Martin in particular, but does that model of friction in communication on this issue in general feel plausible to you?
When trying to model your disagreement with Martin and his position, the best analogy I can think of is that of tobacco companies employing ‘fear, uncertainty, and doubt’ tactics in order to prevent people from seriously considering quitting smoking.
Smokers experience cognitive dissonance when they have strong desires to smoke, coupled with knowledge that smoking is likely not in their best interest. They can suppress this cognitive dissonance by changing their behaviour and quitting smoking, or by finding something that introduces sufficient doubt about whether that behaviour is in their self-interest, the latter being much easier. They only need a marginal amount of uncertainty and doubt in order to suppress the dissonance, because their reasoning is heavily motivated, and that’s all tobacco companies needed to offer.
I think Martin is essentially trying to make a case that your post(s) about veganism are functionally providing sufficient marginal ‘uncertainty and doubt’ for non-vegans to suppress any inclination that they ought to reconsider their behaviour. Even if that isn’t at all the intention of the post(s), or a reasonable takeaway (for meat eaters).
I think this explains much or most of the confusing friction which came up around your posts involving veganism. Vegans have certain intuitions regarding the kinds of things that non-vegans will use to maintain the sufficient ‘uncertainty and doubt’ required to suppress the mental toll of their cognitive dissonance. So even though it was hard to find explicit disagreement, it also felt clear to a lot of people that the framing, rhetorical approach, and data selection entailed in the post(s) would mostly have the effect of readers permitting themselves license to forgo reckoning with the case for veganism.
So I think it’s relevant whether one affords animals a higher magnitude of moral consideration, or has internalized an attitude which places animals in the in-group. However, I don’t think that accounts for everything here.
Some public endeavors in truth seeking can satisfy the motivated anti-truth seeking of the people encountering them. I interpret the top comment of this post as evidence of that.
I’m not sure if I conveyed everything I meant to here, but I think I should make sure the main point here makes sense before expanding.
This seems like an inside view of the feelings that lead to using arguments as soldiers. The motivation is sympathetic and the reasoning is solid enough to weather low-effort attacks, but at the end of the day it is treating arguments as means to ends rather than attempts to discover ground level truth. And Effective Altruism and LessWrong have defined themselves as places where we operate on the object level and evaluate each argument on its own merit, not as a pawn in a war.
The systems can tolerate a certain amount of failure (which is good, because it’s going to happen). But the more people treat arguments as soldiers, the weaker the norm and aspiration of collaborative truthseeking, even when it’s inconvenient, become. Do it too much, and the norm will go away entirely.
You might argue that it’s good to destroy high-decoupling norms, because they’re innately bad or because animal welfare is so important it is worth ruining any institution that gets in its way. But AFAICT, the truthseeking norms of EA and LW have been extremely hospitable environments for animal welfare advocates[1], specifically because of the high decoupling. High decoupling is what let people consider the argument that factory farming was a moral atrocity even though it was very inconvenient for them.
So when vegan advocates operate using arguments as soldiers they are not only destroying truthseeking infrastructure that was valuable to many causes. They are destroying infrastructure that has already done a great deal of good for their cause in particular. They are using arguments as soldiers to destroy their own buildings.
[1] relative to baseline. Evidence off the top of my head:
* EAs go vegan, vegetarian, and reducetarian at much higher than baseline rates. This is less true of rationalists, but I believe is still above baseline. I know many people who loathe most vegan advocacy and nonetheless reduce meat, or in rare cases go all the way to vegan, because they could decouple the suffering arguments from the people making them.
* EA money has transformed farmed animal welfare and AFAIK is the ~only source of funding for things like insect suffering (couldn’t immediately find numbers, source is “a friend in EA animal welfare told me so”)
* AFAIK, veganism’s biggest antagonist on LW and EAF over the last year has been me. And I’ve expressed that antagonism by… let me check my notes… getting dozens of vegans nutrition tested and on (AFAIK) vegan supplements. That project would have gone bigger if I’d been able to find a vegan collaborator, but I couldn’t find one (and I did actively look, although exhausted my options pretty quickly). My main posts in this sequence go out of their way to express deep respect for vegans’ moral convictions and recognize animal suffering as morally relevant.
Maybe there’s a poster who’s actively hostile to animal welfare that I didn’t notice, but if I didn’t hear about them they can’t possibly have done that much.
That’s a good question. I have many thoughts about this and I’m working on a more thorough response.
My very simple answer is that I do think that’s generally plausible (or at least that you’re getting at something significant).
(See also this new comment.)
First off, thanks for including that edit (which is certainly better than nothing), although that still doesn’t change the fact that (given the public status of the post) your summaries will be the only thing almost everyone sees (however much you link to these comments or my original text), and that in this thread I have simply been trying to get my positions not misrepresented (so I find it completely false that I’m purposefully imposing an unnecessary tax, even if it’s true that engaging with this misrepresentation debate takes some effort, like any epistemic endeavor).
Here are the two main reasons why I wouldn’t find your proposal above fair:
I expect most of the people who will ever see this post / read your summaries of my position to have already seen it (although someone correct me if I’m wrong about viewership dynamics on LessWrong). As a consequence, I’d gain much less from such a disclaimer / rethinking of the post being incorporated now (although of course it would be positive for me / something I could point people towards).
Of course, this is not solely a consequence of your actions, but also of my delayed response times (as I had already anticipated in our clarifications thread).
A second-order effect is that most people who have seen the post up until now will have been “skimmers” (because it was just on the frontpage, just released, etc.), while probably more of the people who read the post in the future will be more thorough readers (because they “went digging for it”, etc.). As I’ve tried to make explicit in the past, my worry is more about the social dynamics consequences of having such a post (with such a framing) receive a lot of public attention than about any scientific inquiry into nutrition, or any emphasis on public health. Thus, I perceive most of the disvalue coming from the skimmers’ reactions to such a public signal. More on this below.
My worry is exactly that such a post (with such a framing) will not be correctly processed by too many readers (and more concretely, the “skimmers”, or the median upvoter/downvoter), in the sense that they will take away (mostly emotionally / at a gut level) the wrong update (especially action-wise) from the actual information in the post (and previous posts).
Yes: I am claiming that I cannot assume perfect epistemics from LessWrong readers. More concretely, I am claiming that there is a predictable bias in one of two emotional / ethical directions, which exists mainly due to the broader ethical / cultural context we experience (from which LessWrong is not insulated).
Even if we want LessWrong to become a transparent hub of information sharing (in which indeed epistemic virtue is correctly assumed of the other), I claim that the best way to get there is not through completely implementing this transparent information sharing immediately in the hopes that individuals / groups will respond correctly. This would amount to ignoring a part of reality that steers our behavior too much to be neglected: social dynamics and culturally inherited biases. I claim the best way to get there is by implementing this transparency wherever it’s clearly granted, but necessarily being strategic in situations when some unwanted dynamics and biases are at play. The alternative, being completely transparent (“hands off the simulacrum levels”), amounts to leaving a lot of instrumental free energy on the table for these already existing dynamics and biases to hoard (as they have always done). It amounts to having a dualistic (as opposed to embedded) picture of reality, in which epistemics cannot be affected by the contingent or instrumental. And furthermore, I claim this topic (public health related to animal ethics) is unfortunately one of the tricky situations in which such strategicness (as opposed to naive transparency) is the best approach (even if it requires some more efforts on our part).
Of course, you can disagree with these claims, but I hope it’s clear why I don’t think a public jury is to be trusted on this matter.
You might respond “huh, but we’re not talking about deciding things about animal ethics here. We’re talking about deciding rationally whether some comments were or weren’t useful. We certainly should be able to at least trust the crowd on that?” I don’t think that’s the case for this topic, given how strong the “vegans bad” / “vegans annoying” immune reaction is for most people generally (that is, the background bias present in our culture / the internet).
As an example, in this thread there are some people (like you and Jim) who have engaged with my responses / position fairly deeply, and for now disagreed. I don’t expect the bulk of the upvotes / downvotes in this thread (or in such a public vote, if we were to carry one out) to come from this camp, but more from “skimmers” and first reactions (that wouldn’t enter into the nuance of my position, which is, granted, slightly complex). Indeed (and of course based on my anecdotal experience on the internet and in different circles, including EA circles), I expect way too many anonymous readers/voters, upon seeing something like human health and animal ethics weighed against each other in this way, to just jump on the bandwagon of punishing the veganism meme for the hell of it.
And let me also note that, while further engagement and explicit reasoning should help with recognizing those nuances (although you have reached a different conclusion), I don’t expect this to eliminate some strong emotional reactions to this topic, which drive our rational points (“we are not immune to propaganda”). And again, given the cultural background, I expect these to go more in one direction than the other.
So, what shall we do? The only thing that seems viable close to your proposal would be having the voters be “a selected crowd”, but I don’t know how to select it (if we had half and half this could look too much like a culture war, although probably that’d be even better than the random crowd due to explicitly engaging deeply with the text). Although maybe we could agree on 2-3 people. To be honest, that’s sounding like a lot of work, and as I mentioned I don’t think there’s that much more in this debate for me. But I truly think I have been strongly misrepresented, so if we did find 2-3 people who seemed impartial and epistemically virtuous I’d deem it positive to have them look at my newest, overly explicit explanation and express opinions.
So, since your main worry was that I hadn’t made my explanation of misrepresentation explicit enough (and indeed, I agree that I hadn’t yet written it out in completely explicit detail, simply because I knew that would require a lot of time), I have in this new comment provided the most explicit version I can bring myself to compose. I have made it explicit (and as a consequence long) enough that I don’t think I have many more thoughts to add, and it is a faithful representation of my opinions about how I’ve been misrepresented.
I think having that out there, for you (and Jim, etc.) to be able to completely read my thoughts and re-consider whether I was misrepresented, and for any passer-by who wants to stop by to see, is the best I can do for now. In fact, I would recommend (provided you don’t change your mind more strongly upon reading that) that your edit link to this new, completely explicit version, instead of my original comment written in 10 minutes.
I will also note (since you seemed to care about people’s public opinions on the misrepresentation issue) that 3 people (not counting Slapstick here; only one of them vegan) have privately reached out to me to say they agree that I have been strongly misrepresented. Maybe there’s a dynamic here in which some people agree more with my points but stay silent due to being on the periphery of the community (maybe because of perceived wrong epistemics in exchanges like this one, or having different standards for information-sharing / what constitutes misrepresentation, etc.).