Hmm… if you genuinely meant to say “Have you stopped to consider to what extent my opinion counts as evidence or not, including possibly deciding that it’s neutral or anti-evidence?” then I just want to say “No.” and I claim this is the correct thing to do. I genuinely think that social Bayes/Aumanning is a bad idea. To capture what I expect is a 4,000-word post in a catchy sentence: if I don’t understand something, the fact that Conor believes it’s true doesn’t cause me to understand it any better.
As I say, I do take your claim that MTG-colours are useful as sufficient evidence for me to try it (conditional on me having the sort of life where I have the time and mental habit to try rationality techniques I get recommended—I still haven’t practiced the things Anna recommended to me at my CFAR workshop). I don’t even need reminding of that, it’s just true. If that’s not what you meant though, I do have a disagreement with you.
Added: I also do think that social aumanning is, in general, motivated by status, and is not helpful to truth-seeking (but that this is non-obvious and that many good rationalists do it). I do feel worried to say this because I feel you might decide that I have said the Worst Thing In The World (TM).
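To make the “counts as evidence or not, including possibly neutral or anti-evidence” framing concrete, here is a toy Bayesian update; the numbers are purely illustrative assumptions (not anyone’s actual estimates), and the only point is that an endorsement’s direction depends on the likelihood ratio.

```python
# Toy Bayesian update (illustrative numbers only): an endorsement is positive
# evidence, neutral, or anti-evidence depending on the likelihood ratio
# P(endorsement | useful) / P(endorsement | not useful).

def posterior(prior, p_endorse_if_useful, p_endorse_if_not_useful):
    """P(useful | endorsement) via Bayes' rule."""
    joint_useful = prior * p_endorse_if_useful
    joint_not_useful = (1 - prior) * p_endorse_if_not_useful
    return joint_useful / (joint_useful + joint_not_useful)

prior = 0.3  # assumed prior that the technique is useful

print(posterior(prior, 0.8, 0.4))  # ratio > 1: posterior rises to ~0.46
print(posterior(prior, 0.5, 0.5))  # ratio = 1: endorsement is neutral, stays 0.30
print(posterior(prior, 0.2, 0.6))  # ratio < 1: posterior falls to ~0.13
```

Nothing in that calculation touches the separate point above: whatever its evidential weight, an endorsement doesn’t by itself transfer understanding.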
Look, fuckers. Coming out against “social Bayesianism” is like a communist trying to ban money because everyone should just get what they need automatically.
Except it’s not ‘LIKE’ that, it IS that. Awarding arguments credit based on who says them *is a thing we do as humans*. You can drive it underground where you can’t regulate it, or you can acknowledge it explicitly and try to craft it into something that fucking *works* in the direction you want it to (say, epistemic truth, if you’re into that). But you can’t just wish it away.
I love y’all but sweet baby Jesus.
My impression is that there’s a minimum, inevitable amount of it, but that it’s possible to have systems/situations that make it even stronger, and there’s an opportunity to think about that and alleviate it.
Facebook makes it really obvious if certain people like things. (It usually shows me if Eliezer ‘liked’ a thing, presumably because he has a high/dense network. This means I don’t even have the opportunity to form an opinion on it before deciding with my social-brain what it means that Eliezer liked it.)
You can curtail that by… just not making that information prominent. There are similar choices available on LW with regards to whether to show how much karma something has. (You could potentially hide people’s usernames too, but that a) comes with weird complications and b) seems more like something that’d drive stuff underground rather than be helpful)
I think Ben’s argument was something like “Conor’s original comment was explicitly saying ‘you should value my opinion because of my expertise’”, and that this is something that inflates social Bayesianism beyond its default levels.
I think Conor’s argument (and I can imagine your argument at least in some similar conditions) is that being able to evaluate expertise and incorporate expertise (and keep it distinct from halo effects) is in fact an important skill to cultivate, which comes with its own set of “good norms to cultivate.” Which does seem true to me, although it’s unclear to me if this particular instance actually was a good exemplar of that.
(It usually shows me if Eliezer ‘liked’ a thing, presumably because he has a high/dense network. This means I don’t even have the opportunity to form an opinion on it before deciding with my social-brain what it means that Eliezer liked it.)
Presumably this is also useful information for the rest of your brain, though, if Eliezer-likes are entangled with evidence about other things. FB seems to be doing this particular thing, in the particular case, approximately right: it doesn’t usually overtly display who liked what until I go check; and in the cases where it does display that, it’s generally because it’s correctly sending me things Eliezer liked, and being transparent that that’s what it’s filtering on. Ideally FB would make it trivial for me to subscribe/unsubscribe from particular users’ “likes”, though, and fiddle with personalized settings re who can like what, when likes are viewable at all, etc.
So, my current belief is that the right way to do this is to *not* be blatant about how you’re doing the filtering. Yes, Eliezer liking something is evidence (to me) that it’s a better-than-average thing. But a better approach, on LW, seems like it would be:
a) posts/comments are shown initially via filtering that takes in a lot of inputs (some combination of recency, how much karma it has (which takes as an input who liked it))
Therefore, I can trust that information coming to me is important enough to be worth my time. BUT, I can still form a first impression of it based on my own judgment (the ‘it’s worth your time’ information has enough inputs that my brain isn’t driven to try and derive anything from it)
b) then I can read comments by people that give me further information like “this person who is a trained economist liked it, this person whose judgment I generally trust disliked it, etc.”
Facebook is an adversarial algorithm I *don’t* trust to show me relevant things in the first place, and it shows me the “who liked a thing” first. I think there’s a number of things going on, some good, some bad. But I have a suspicion that this has trained my “social bayesian system” to be weighted more heavily relative to my “think things through without social info” system.
For LessWrong, we have a number of options on what information to highlight and what incentives to output. We could choose to show upvote/downvote information publicly. We could choose to enable “quick response” or “FB React” style comments (that makes it easier to see if Eliezer liked a thing but didn’t have time to leave an explicit written-out comment saying so). If we went that route, we could choose to make those React-Style-Comments prominent, or always sort them to the bottom so you first have to wade through more information-dense comments.
I can imagine it turning out to be best to have FB-React style comments or similar things, but my intuition is it’s better for LW in general to force people to pause and think whenever possible.
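As a concrete sketch of option (a) above: one way the initial ordering could blend recency and total karma into a single opaque score (so the reader gets a “worth your time” signal without a cue about who voted), with React-style comments sorted below written-out ones. The field names, weights, and decay constant here are illustrative assumptions, not an actual LessWrong implementation.

```python
import math
import time

def ranking_score(item, now=None, karma_weight=1.0, decay_hours=24.0):
    """Blend recency and total karma into one opaque score.

    Who voted feeds into item["karma"] upstream but is never exposed here,
    so the initial view says "worth your time" without naming names.
    """
    now = time.time() if now is None else now
    age_hours = (now - item["posted_at"]) / 3600.0
    recency = math.exp(-age_hours / decay_hours)  # decays toward 0 over ~a day
    return recency + karma_weight * math.log1p(max(item["karma"], 0))

def order_comments(comments, now=None):
    """Written-out comments first by score; React-style comments sort to the bottom."""
    return sorted(comments, key=lambda c: (c["is_react"], -ranking_score(c, now)))

# Illustrative usage with made-up items
comments = [
    {"posted_at": time.time() - 7200, "karma": 12, "is_react": False},
    {"posted_at": time.time() - 600,  "karma": 3,  "is_react": True},
    {"posted_at": time.time() - 3600, "karma": 40, "is_react": False},
]
for c in order_comments(comments):
    print(c["karma"], c["is_react"])
```

Point (b) would then just be the comment section itself: the identity-level information only shows up once you start reading.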
How hard is it to get one other human to do that? Not very hard, I think. Here, I’ll do it: I don’t think Conor was quite claiming that we should value his opinion because of his expertise, although he was saying something (a) readily mistaken for that and (b) not entirely unlike it.
But that’s not the same question as “How hard is it to be sure that one other human will do that without being asked?”. Lots of mistakes go uncorrected, here and everywhere else. Most of the time, people (even smart and observant people) don’t notice mistakes. Most of the time, people (even honest and helpful people) who notice mistakes don’t point them out.
In this case, it’s not like you (initially) made it perfectly clear and explicit what exact claim you were making, and it seems to me as if you’re expecting more mind-reading from your audience than it’s reasonable to expect. Even in a community of smart truth-seeking people.
Let’s just recall what you actually said at first:
The context that’s being neglected in your comment is “Conor’s clearly put a lot of thought and cycles into rationality, and staked a specific claim that this one’s better/more useful in practice than all the other wrong-but-usefuls.”
You’re saying “it doesn’t appear much different,” which is a fine hypothesis to have, but it doesn’t engage with whether or not my voucher provides useful Bayesian evidence.
This seems to me like exactly what you would have written if you had been making the claim that we should value your opinion because of your expertise. (Well, not exactly expertise, but something like it, and I don’t think that distinction is the one you’re trying to draw here.) And, for the avoidance of doubt, I don’t in fact think there’s anything wrong with saying (something like) that we should value your opinion because of your expertise.
I’ll go further: what you actually, originally, wrote makes much more sense as “you should value my opinion because I’ve thought about this a lot and am worth listening to” than as “you should consider the fact that I’ve thought about this a lot and the other stuff I’ve written, and then decide whether that’s evidence for or against my opinion”, which is IIUC what you are now saying you meant.
I think it is very, very understandable and not at all a sign that we are living in some Orwellian world of history-revision that this discussion is not full of people who are not you protesting at your being misunderstood. Because (1) the misunderstanding is a perfectly reasonable one, given what you actually wrote, and (2) you’re right here in this discussion to defend yourself. In a situation like this one, where it’s less than perfectly clear exactly what you meant, what business is it of anyone else to dive in and try to not-Conor-splain what you meant, when you can give a much more authoritative answer to that question any time you want to?
I have (honestly, I assure you) failed to see where you “did ask, more than once” for others to endorse your account of what you were saying. I just took another (admittedly cursory, because I need to be somewhere else in five minutes) look over the thread and still can’t see it.
Let me say for the avoidance of doubt that I do not begrudge the time I took to write the above, or the fact of my having written it, and that I am not irritated, and that I neither had nor have any interest in lowering your status.
As for “genuinely meant” versus “actually said”, I stand by what I wrote above: when I read what you actually originally wrote, I cannot see how it says what you now say it said. It rather conspicuously avoids making any very precise claim, so I won’t say it definitely says what Ben says it does—but his reading seems a more natural one than yours.
I am very happy to accept that what you are now saying is what you always meant, and I am not for an instant suggesting that there’s anything dishonest or insincere in what you’re now saying, but after reading and re-reading those words I cannot see how it says what-you-say-it-said rather than what-Ben-said-it-said, and in particular I cannot see how it is reasonable to complain that Ben was “uncharitable and inaccurate”.
(This is not any kind of recantation of my earlier “I don’t think Conor was quite claiming …”, precisely because I do accept what you say about what you meant by those words. But when you’re judging other people’s reactions to those words, what those words actually say at face value is really important.)
I’m sort of confused by this comment. Conor’s comment doesn’t actually (to my eyes) say what you said it says.
If Conor had wanted to say “You should try this because I said it’s good” then there’s a lot of comments he could’ve written that would be more explicit than this one. He could’ve said
That’s an interesting argument you raise. However in my experience, and given my expertise in this domain, trust me when I say this works.
However, what Conor said was
[Your comment] doesn’t engage with whether or not my voucher provides useful Bayesian evidence.
Not “your comment doesn’t engage with the fact that my voucher provides useful Bayesian evidence”. The explicit meaning is “You didn’t use social evidence—have you considered doing so?” while being agnostic about the outcome of such a reasoning step.
In general there are a lot of ways someone could write a comment that explicitly states what the outcome of such reasoning could be, and the fact that Conor wrote it in one of the few ways that specifically doesn’t say his voucher should be trusted is sort of surprising, and thus evidence that he didn’t want to say his voucher should be trusted. I don’t think Conor intended to say anything more than exactly what he said, and his motivation was not status-based.
Analogously, if a friend says “This coffee shop I went to is great, you should try it!” and you say “You’ve given me no argument about why this coffee shop should reliably produce better products than the dozens of other coffee shops in the area”, they may reply “You’re right, I only wanted to let you know that I thought it was great, and if you think I’ve generally got good judgement about things like this you might find value in trying it”, and that’s generally fine.
Note: I keep being quoted as saying Conor “genuinely meant” something he didn’t say. I didn’t say that. (Wow, this is a fun game of he-said she-said.) I said “You said X, but I think you meant to say Y, but if you genuinely meant X, then I disagree.” I’m not denying that he said X, and I think he said X.
Might have something to do with people coming to the same line with different priors? E.g., based on coming from different points on the ask-guess spectrum, or from different varieties of ask/guess. For a combination of reasons—such as “it’s rude to outright assert that you’re an authority, so people regularly have to imply it and talk around it,” and “it’s just not that common for people to have zero interest/stake in a conversation, or to deliberately avoid pushing for their interest”—it’s not surprising that some people’s prior is skewed toward other interpretations, such that you need to very heavy-handedly and explicitly clarify what you mean (possibly even explicitly disavowing the wrong interpretation) before you can shift those people away from their prior.
Priors just feel like how the world is, though; it’s not natural (and often not possible) to distinguish the “plain” or “surface” meaning of the text from your assumptions about what people would most often mean by that text.
I think I took it as read that Conor was saying his voucher provides useful (and positive) evidence because that seemed the only way to make sense of his saying what he said in response to what he said it to. I mean, you can’t tell whether CoolShirtMcPants had considered whether Conor’s testimony was evidence from what he wrote; all you can tell is that apparently he didn’t think it was.
In any case, I’m now definitely confused about who has at what times thought Conor meant what by what. What I remain confident of is that Conor did not so clearly not say that we should take his endorsement as evidence as to make it unreasonable to say he did; and I think your comments above should give Conor good reason to reconsider his characterization of what you said before as “uncharitable” (given that strictly only people, not words, can be uncharitable, and that it’s hard to see why someone uncharitably disposed would write what you did above).
And I think there are too many levels of he-said-she-said going on here...
Personal support:
I do not think that comment should have been negative. I upvoted to counteract. I take you at your word that you meant what you say. I see similar problems you do with the R-community and their trendsetters/decision makers.
Status-raising/Compliments:
I loved your articles. In fact, they turned out to be the only thing that was making LesserWrong interesting to me as opposed to just a bunch of AI/Machine Learning stuff I totally don’t care about (I am not the target audience of this site). If you leave, I probably will too, by which I mean going back to checking it once a month-ish to see if anything particularly interesting has been written. Without your articles there really isn’t much here for me, that I can’t get by checking individual blogs, which it turns out I have to do anyways since not everything is crossposted to frontpage.
Advice/Uncompliment:
The Green in me doesn’t like that you aren’t just letting this go. The problematic signs you see are, I think, real, but you aren’t going to be able to change them by having never-ending debates. Accept the community as-is, or move on (but tell me where you’re going, so I can read you elsewhere).
I did mention this to a few people in private who seemed to misunderstand you in this respect.
I think, from the discussion that is currently available, it’s forgivable that the people who did not chat with you in private have this misunderstanding of the situation. I think your original sentence was easy to interpret that way.
I apologize for not correcting people on this in public. We have a bunch of major feature-launches upcoming, and I currently don’t have the capacity to follow all the recent discussion closely, and be a super productive participant. I don’t think that anyone outside of me, Ben Pace and maybe Vaniver really had the context to correct people on this confidently.
(I spent the last 10 minutes reading through your past comments, but didn’t find any clarification that felt to me like it was clear enough that someone without a large amount of context would have confidently come to a correct model of what you intended to say, so I don’t think this is really a failing of any of the people who are passively reading the site.)
Note: I have not engaged directly with your points since posting a few days ago “this is what I currently understand your point to be. If this is your point, then I am pretty confused about what it is you think we’re disagreeing about. I will not be able to usefully engage further until you clarify that.”
(I don’t think you’re obligated to have responded, but it is a brute fact about the world that Ray is not able to productively engage with this further until you’ve done so. We’ve chatted in private channels about discussing things elsewhere/elsewhen and that is still my preference)
https://www.lesserwrong.com/posts/ZdMnP77yEE3wWPoXZ/continuing-the-discussion-thread-from-the-mtg-post/hgfgRFwuGsCpknCSr
This seems like a VERY important point to Double Crux on! I’m excited to see it come up.
Would love to read about a Double Crux on this point. (Perhaps you two could email back and forth and then compile the resulting text, with some minor edits, and then publish on LW2?)
Personally, I agree with Ben Pace, and the fact that it ‘might be able to be done right’ is not a crux. But I could see changing my mind.
I’m still into the idea of reading a transcript after-the-fact. Or at least a summary.
Do you believe the situation above RE: the MTG Color Wheel is an example of a time “when you have to take action and can’t figure it out yourself”?
I think I rate “strength of confidence in a person” low when trying to decide whether to really engage with a model. Other factors like “tractability of a problem area to modelling” or “importance of problem area” are much more important. “Ease of engagement” is probably why I engaged with the MTG post as much as I did, but my low expectation of the problem area’s tractability means I probably won’t try it out for very long.