This is subjective and all, but I met Geoff Anders at our 2012 CFAR workshop, and I absolutely had the “this person wants to be a cult leader” vibe from him then, and I’ve been telling people as much for the entire time since. (To the extent of hurting my previously good friendships with two increasingly-Leverage-enmeshed people, in the mid-2010s.)
I don’t know why other people’s cult-leader-wannabe-detectors are set so differently from mine, but it’s a similar (though less deadly) version of how I quickly felt about a certain person [don’t name him, don’t summon him] who’s been booted from the Berkeley community for good reason.
He’s also told me, deadpan, that he would like to be starting a cult if he wasn’t running Leverage.
I’ve read this comment several times, and it seems open to interpretation whether RyanCarey is mocking orthonormal for presenting weak evidence by presenting further obviously weak evidence, or whether RyanCarey is presenting weak evidence believing it to be strong.
Just to lean on the scales a little here, towards readers taking from these two comments (Ryan’s and orthonormal’s) what I think could (should?) be taken from them…
An available interpretation of orthonormal’s comment is that orthonormal:
had a first impression of Geoff that was negative,
then backed that first impression so hard that they “[hurt their] previously good friendships with two increasingly-Leverage-enmeshed people” (which seems to imply: backed that first impression against the contrary opinions of two friends who, by being in a position to closely observe Geoff and his practices, could gather overwhelmingly more information),
while telling people of their first impression “for the entire time since” (for which, absent other information about orthonormal, an available interpretation is that orthonormal engaged in hostile gossip based on very little information, and in the face of an increasing amount of evidence from their two friends (assuming those friends were themselves reasonable people) that their first impression was false).
(In this later comment) orthonormal then reports interacting with Geoff “a few times since 2012” (and reports specific memory of one conversation, I infer with someone other than Geoff, about orthonormal’s distrust of Leverage) (for which it is an available interpretation that orthonormal gathered much less information than their “Leverage-enmeshed” friends would have gathered over the same period, stuck to their first impression, and continued to engage in hostile gossip).
Those who know orthonormal may know that this interpretation is unreasonable given their knowledge of orthonormal, or out of character given other information about orthonormal, or may know orthonormal’s first impressions to be unusually (spectacularly?) accurate. (I think that I often have a pretty good early read on folks I meet, but having as much confidence in my early reads as I infer, from this comment, that orthonormal has in theirs would seem to require pretty spectacular evidence.) I hope that readers will use whatever information they have available to draw their own conclusions, but I note that the information presented in orthonormal’s comment, taken on its own, seems to me much more damning of orthonormal than of Geoff.
(And I note that orthonormal has accumulated >15k karma on this site… which… I don’t quite know how to marry to this comment, but it seems to me might cause a reasonable person to assume that orthonormal was better than what I have suggested might be inferred from their comment… or, noting that at the time I write this orthonormal has accumulated 30 points of karma from what seems to me to be… unimpressive as presented?… that there may be something going on in the way this community allocates karma to comments (comments that do not seem to me to be very good).)
Then, RyanCarey’s comment specifically uses “deadpan”, a term strongly associated with intentionally expressionless comedy, to describe Geoff saying something that sounds like what a reasonable person might infer was intentional comedy if said by another reasonable person. So… the reasonable inference, only from what RyanCarey has said, seems to me to be that Geoff was making a deadpan joke.
I think I met Geoff at the same 2012 CFAR workshop that orthonormal did, and I have spent at least hundreds of hours since in direct conversation with Geoff, and in direct conversation with Geoff’s close associates. It seems worth saying that I have overwhelmingly more direct eyewitness evidence than orthonormal reports in their comment, and that on that evidence Geoff does not seem to me to be someone who wants to be a cult leader. I note further that several comments have been published to this thread by people I know to have had even closer contact over the years with Geoff than I have, and those comments seem to me to be reporting that Geoff does not seem to them to be someone who wants to be a cult leader either. I wonder whether orthonormal has other evidence, or whether orthonormal will take this opportunity to reduce their confidence in their first impression… or whether orthonormal will continue to be spectacularly confident that they’ve been right all along.
And given my close contact with Geoff, I note that it seems only a little out of character for Geoff to respond to the very persistent accusations he has fielded (on evidence that seems to me to be of similar quality to the evidence that orthonormal presents here) that he is, or is felt to be, tending towards or reminiscent of a cult leader, with a deadpan joke “that he would like to be starting a cult if he wasn’t running Leverage”. RyanCarey doesn’t report their confidence in the accuracy of their memory of this conversation, but given what I know, and what RyanCarey and orthonormal report in these comments alone, I invite readers to be both unimpressed and unconvinced by this presentation of evidence that Geoff is a “cult-leader-wannabe”.
(I want to note that while readers may react negatively to me characterising orthonormal’s behaviour as “hostile gossip”, I am in the process of drafting a more comprehensive discussion of the OP and other comments here, in which I will try to make a clear case that my use of that term is justified. If you are, based on the information you currently have, highly confident that I am being inappropriately rude in my responses here, to a post that I will attempt to demonstrate is exceedingly rude, exceedingly poorly researched and exceedingly misleading, then you are, of course, welcome to downvote this comment. If you do, I invite you to share feedback for me, so that I can better learn the standards and practices that have evolved on this site since my team first launched it.)
(If your criticism is that I did not take the time to write a shorter letter… then I’ll take those downvotes on the chin. 😁)
Rather than continuing with whatever we might call the activity we’re all engaging in here, I wonder whether we might ask Geoff directly? @Geoff_Anders, with my explicit apology for this situation, and with the recognition that (given the quality of discourse exhibited here) it would be quite reasonable for you to ignore this and carry on with your life, would you care to comment?
(A disclosure that some readers may conclude is evidence of collusion or conspiracy, and others might conclude is merely the bare minimum amount of research required before accusing someone of activities such as those this post accuses Geoff of (not denotationally, but very obviously connotationally): in the time between the OP being posted and this comment, I have communicated with Geoff and several ex-Leverage staff and contributors.)
As in, 5+ years ago, around when I’d first visited the Bay, I remember meeting up 1:1 with Geoff in a cafe. One of the things I asked, in order to understand how he thought about EA strategy, was what he would do if he wasn’t busy starting Leverage. He said he’d probably start a cult, and I don’t remember any indication that he was joking whatsoever. I’d initially drafted my comment as “he told me, unjokingly”, except that it’s a long time ago, so I don’t want to give the impression that I’m quite that certain.
“accumulated 30 points of karma from what seems to me to be… unimpressive as presented?”

I upvoted on the value of the comment as additional source data (IIRC when the comment had much lower karma). This value shouldn’t be diminished by the questionable interpretation/attitude bundled with it, since the interpretation can be discarded, but the data can’t be magicked up.
This is a general consideration that applies to communications that provoke a much stronger urge to mute them, for example those that defend detestable positions. If such communications bring you new relevant data, even data that doesn’t significantly change your understanding of the situation, they are still precious; the effects of processing them rather than ignoring them add up over all such instances. (I think the comment to this post most rich in relevant data is prevlev-anon’s, which I strong-upvoted.)
This makes sense to me in my first pass of thinking about it, and I agree.
There’s something subtle and extremely hard to pull off (perhaps impossible) in: “in the wishing world, what do we think a shared voting policy should be, such that the aggregate of everyone voting consistently according to that policy leaves all comments in approximately the same order that a single extremely perceptive and high-quality reasoner would rank them?”
As opposed to comments just trending toward infinities.
This works out for the earlier top-level comments (those that see similar voter turnout); the absolute numbers just scale with the popularity of the post. If something is not in its place in your ideal ranking, it’s possible to use your vote to move it that way. Vote weights do a little to try to improve the quality (or value lock-in) of the ranking.
One issue with the system is the zero equilibrium on controversial things, with the last voters randomly winning irrespective of the actual distribution of opinion. It’s unclear how to get something more informative for such situations, but this should be kept in mind as a use case for any reform.
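(A toy illustration of that zero-equilibrium dynamic, in case it helps: the model below is my own sketch, not LessWrong’s actual voting code, and the 70% approval figure is purely made up. Voters who think a comment is under-rated upvote it and voters who think it is over-rated downvote it, so on a controversial comment the score oscillates around zero and its final sign is largely an accident of who happened to vote last, rather than a reflection of how opinion is actually distributed.)

```python
# Toy model (not LessWrong's real voting system) of the "zero equilibrium"
# on controversial comments: each voter corrects the score toward the sign
# they think it deserves, so the score hovers near zero and the last
# effective vote decides its sign.
import random

def final_score(p_approve: float, n_voters: int, rng: random.Random) -> int:
    score = 0
    for _ in range(n_voters):
        approves = rng.random() < p_approve
        if approves and score <= 0:
            score += 1      # "this deserves to be positive"
        elif not approves and score >= 0:
            score -= 1      # "this deserves to be negative"
    return score

if __name__ == "__main__":
    rng = random.Random(0)
    # Even with 70% of voters approving, the final score stays near zero
    # and its sign varies from run to run.
    print([final_score(0.7, 500, rng) for _ in range(10)])
```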
I’m trying to apply the ITT (Ideological Turing Test) to your position, and I’m pretty sure I’m failing (and, for the avoidance of doubt, I believe that you are generally very well informed and capable and are engaging here in good faith, so I anticipate that the failing is mine, not yours). I hope that you can help me better understand your position:
My background assumptions (not stated or endorsed by you):
Conditional on a contribution (a post, a comment) being all of:
(a) subject to a reasonably clear interpretation (for the reader alone, if that is the only value the reader is optimising for, or otherwise for some (weighted?) significant portion of the reader community),
(b) relevant and important to a question that the reader considers important (most usually the question under discussion), and
(c) substantially true, with that truth evident from the content as it is presented (for the reader alone, or the reader community), then…
My agreement with the value that I think you’re chasing:
… I agree that there is at least an important value at stake here, and the reader upvoting a contribution that meets those conditions may serve that important value.
Further elaboration of my background assumptions:
If (a) (clear interpretation) is missing, then the reader won’t know there’s value there to reward, or must (should?) at least weigh the harms (which I think are clear) of the reader or others misinterpreting the data offered.
If (b) (content is relevant) is missing, then… perhaps you like rewarding random facts? I didn’t eat breakfast this morning. This is clear and true, but I really don’t expect to be rewarded for sharing it.
If (c) (evident truth) is missing, then either (not evident) you don’t know whether to reward the contribution or not, or (not true) surely the value is negative?
My statement of my confusion:
Now, you didn’t state these three conditions, so you obviously get to reject my claim of their importance… yet I’ve pretty roundly convinced myself that they’re important, and that (absent some very clever but probably nit-picky edge case, which I’ve been around LessWrong long enough to know is quite likely to show up) you’re likely to agree (other readers should note just how wildly I’m inferring here, and if Vladimir_Nesov doesn’t respond please don’t assume that they actually implied any of this). You also report that you upvoted orthonormal’s comment (I infer orthonormal’s comment rather than RyanCarey’s, because you quoted “30 points of karma”, which didn’t apply to RyanCarey’s comment). So I’m trying to work out: what interpretation you took from orthonormal’s comment (the clearest interpretation I managed to find is the one I detailed in my earlier comment: that orthonormal based their opinion overwhelmingly on their first impression and didn’t update on subsequent data); whether you think the comment shared relevant data (did you think orthonormal’s first impression was valuable data pertaining to whether Leverage and Geoff were bad? did you think the data relevant to some other valuable thing you were tracking, that might not be what other readers would take from the comment?); and whether you think that orthonormal’s data was self-evidently true (do you have other reason to believe that orthonormal’s first impressions are spectacular? did you see some other flaw in the reasoning in my earlier comment?).
So, I’m confused. What were you rewarding with your upvote? Were you rewarding (orthonormal’s) behaviour that you expect will be useful to you but misleading for others, or rewarding behaviour that you expect would be useful on balance to your comment’s readers (if so, what and how)? If my model is just so wildly wrong that none of these questions make sense to answer, can you help me understand where I fell over?
(To the inevitable commenter who would, absent this addition, jump in and tell me that I clearly don’t know what an ITT is: I know that what I have written here is not what it looks like to try to pass an ITT — I did try, internally, to see whether I could convince myself that I could pass Vladimir_Nesov’s ITT, and it was clear to me that I could not. This is me identifying where I failed — highlighting my confusion — not trying to show you what I did.)
Edit 6hrs after posting: formatting only (I keep expecting Github Flavoured Markdown, instead of vanilla Markdown).
There is an important class of claims detailed enough to be either largely accurate or intentional lies; their distortion can’t be achieved with mere lack of understanding or motivated cognition. These can be found even in very strange places, and still be informative when taken out of context.
The claim I see here is that orthonormal used a test for dicey character with reasonable precision. The described collateral damage of just one positive reading signals that it doesn’t trigger all the time, and there was at least one solid true positive. The wording also vaguely suggests that there aren’t too many other positive readings, in which case the precision is even higher than the collateral damage signals.
Since the base rate is lower than the implied precision, a positive reading works as evidence. For the opposite claim, that someone has an OK character, evidence of this form can’t have similar strength, since the base rate is already high and there is no room for precision to get significantly higher.
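(To make the precision-versus-base-rate point concrete, here is a minimal numeric sketch with purely illustrative, made-up numbers; neither figure is a claim about orthonormal’s actual detector.)

```python
# Minimal numeric sketch of the argument above, with made-up numbers.
# If the detector's precision P(dicey | positive reading) exceeds the base
# rate P(dicey), a positive reading raises the probability, i.e. it works
# as evidence; its strength is the ratio of posterior odds to prior odds.
base_rate = 0.05   # assumed prior probability that a given person is "dicey"
precision = 0.20   # assumed P(dicey | positive reading), i.e. positive predictive value

prior_odds = base_rate / (1 - base_rate)
posterior_odds = precision / (1 - precision)
likelihood_ratio = posterior_odds / prior_odds
print(f"implied likelihood ratio of a positive reading: {likelihood_ratio:.1f}")  # ~4.8

# The mirror-image claim ("this person is OK") can't gain comparable support
# this way: the base rate of "OK" is already near 1, so a reading's precision
# about "OK" has little room to exceed it.
```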
It’s still not strong evidence, and directly it’s only about character in the sense of low-level intuitive and emotional inclinations. This is in turn only weak evidence of actual behavior, since people often live their lives “out of character”; it’s the deliberative reasoning that matters for who someone actually is as a person. Internal urges are only a risk factor and a psychological inconvenience for someone who disagrees with their own urges and can’t or won’t retrain them; they are not an important defining characteristic, and not relevant in most contexts. This must even be purposefully disregarded in some contexts to prevent discrimination.
Edit: I managed to fumble terminology in the original version of this comment and said “specificity” instead of “precision” or “positive predictive value”, which is what I actually meant. It’s true that specificity of the test is also not low (much higher even), and for basically the same reasons, but high specificity doesn’t make a positive reading positive evidence.
The culture of Homo Sabiens often clashes pretty hard with the culture of LessWrong, so I can’t speak to how this will shake out overall.
But in the culture of Homo Sabiens, and in the-version-of-LessWrong-built-and-populated-by-Duncans, this is an outstanding comment, exhibiting several virtues, and also explicitly prosocial in its treatment of orthonormal and RyanCarey in the process of disagreement (being careful and explicit, providing handholds, preregistering places where you might be wrong, distinguishing between claims about the comments and claims about the overall people, being honest about hypotheses and willing to accept social disapproval for them, etc.).
I have strong-upvoted and hope further interaction with RyanCarey and orthonormal and other commenters both a) happens, and b) goes well for all involved. I would try to engage more substantively, but I’m currently trying to kill a motte-and-bailey elsewhere.
What an incredibly rude thing to say about someone. I hope no one ever posts their initial negative impressions upon meeting you online for everyone to see.
Geoff Anders is a real person. Stop treating him like he’s not.
Added: This comment was too harsh given the circumstance. My apologies to orthonormal for overreacting.
Real people can be, and often are, extremely dangerous, and it is not rude to describe dangerous people as acting in dangerous ways; or if it is, then it is a valuable form of rudeness.
I have a sincere question for you, Kerry, because you seem to be upset by the approach commenters here are taking to talking about this issue and the people involved, and people here are openly discussing the character of your employer, which I can imagine to be really painful.
If your sister or brother or your significant other had become enmeshed in a controlling group and you believed the group and in particular its leader had done them serious psychological harm, how would you want people to talk about the group and its leader in public, after the fact? What sorts of discussions, comments or questions would you consider reasonable or necessary under such circumstances, and what would you consider off the table?
(Specifically, I’m not focused on whether you believe Leverage 1.0 had those characteristics, but how you would respond towards a group and its leader that you personally believed -did- have these characteristics)
Assuming something like this represents your views, Freyja, I think you’ve handled the situation quite well.
I hope you can see how that is quite different from the comment I was replying to, which came from someone who appears to have met Geoff once. I’m sure you can similarly imagine how you would feel if people made comments like orthonormal’s about friends of yours without knowing them.
Thank you for scaling back your initial response.
I’ve interacted with Geoff a few times since 2012, and continued to have that bad feeling about him.
I wanted to let people know that these impressions started even prior to Leverage, and that I know I’m not retconning my memory, because I remember a specific conversation in summer 2014 about my distrust of Leverage (and I believe that wasn’t the first such conversation). This post would not have surprised 2012!me; the signs may have been subjective but they were there.
Without getting to the object level, it’s very fair to discuss the personality of someone who wields power and authority over people, especially if one mechanism of influence is telling those people that the world is at stake.
The rationalist community did in fact have to have such conversations about Eliezer over the years, and (IMO) mostly concluded that he actively wants to just sit in a comfortable cave and produce FAI progress with his team, and so he delegates any social authority/power he gains to trusted others, making him a safer weirdo leader figure than most.
Was this conversation held publicly on a non-Eliezer-influenced online forum?
I think there’s a pretty big difference—from accounts I’ve read about Leverage, the “Leverage community” had non-public conversations about Geoff as well, and they concluded he was a great guy.
He said that he had significant discussions about Geoff with people near Leverage afterwards, discussions that damaged those relationships. That suggests that the sense was very strong, and that he had talked about it with people who actually knew Geoff more deeply.
This is a good point. I think I reacted too harshly. I’ve added an apology to orthonormal to the original comment.