On reflection, it seems right to me that there may not be a contradiction here. I’ll post something later if I conclude otherwise.
(I think I got a bit too excited about a chance to use the old philosopher’s move of “what about that claim itself.”)
It’s not clear what “I” means here . . .
Oh, sorry, this was a quote from Descartes. It’s the closest thing that actually appears in his writing to “I think, therefore I am” (a phrase that doesn’t expressly appear in the Meditations).
Descartes’s idea doesn’t rely on any claims about persistent psychological entities (that would require the supposition of memory, which Descartes isn’t ready to accept yet!). Instead, he postulates an all-powerful entity that is specifically designed to deceive him and tries to determine whether anything at all can be known given that circumstance. He concludes that he can know that he exists because something has to do the thinking. Here is the relevant quote from the Second Meditation:
I have convinced myself that there is absolutely nothing in the world, no sky, no earth, no minds, no bodies. Does it now follow that I too do not exist? No: if I convinced myself of something then I certainly existed. But there is a deceiver of supreme power and cunning who is deliberately and constantly deceiving me. In that case I too undoubtedly exist, if he is deceiving me; and let him deceive me as much as he can, he will never bring it about that I am nothing so long as I think that I am something. So after considering everything very thoroughly, I must finally conclude that this proposition, I am, I exist, is necessarily true whenever it is put forward by me or conceived in my mind.
I find this pretty convincing personally. I’m interested in whether you think Descartes gets it wrong even here or whether you think his philosophical system gains its flaws later.
More generally, I’m still not quite sure what precise claims or what type of claim you predict you and Geoff would disagree about. My-model-of-Geoff suggests that he would agree that “it seems fine to say that there’s some persistent psychological entity roughly corresponding to the phrase ‘Rob Bensinger’” and that “thinking”, “experience”, etc. pick out “real” things (depending on what we mean by “real”).
Can you identify a specific claim type where you predict Geoff would think that the claim can be known with certainty and you would think otherwise?
I don’t think people should be certain of anything
What about this claim itself?
This comment is excellent. I really appreciate it.
I probably share some of your views on the “no no no no (yes), no no no no (yes), no no no no (yes)” thing, and we don’t want to go too far with it, but I’ve come to like it more over time.
(Semi-relatedly: I think I rejected the sequences unfairly when I first encountered them because of something like this kind of stylistic objection. Coming from a philosophical background I was like “Where are the premises? What is the argument? Why isn’t this stated more precisely?” Over time I’ve come to appreciate the psychological effect of these kinds of writing styles and value that more than raw precision.)
It seems to me that you’re arguing against a view in the family of claims that includes “It seems like the one thing I can know for sure is that I’m having these experiences,” but I’m having trouble determining the precise claim you are refuting. I think this is because I’m not sure which claims are meant precisely and which are meant rhetorically or directionally.
Since this is a complex topic with lots of potential distinctions to be made, it might be useful to determine your views on a few different claims in the family of “It seems like the one thing I can know for sure is that I’m having these experiences” to see where the disagreement lies.
Below are some claims in this family. Can you pinpoint which you think are fallible and which you think are infallible (if any)? Assuming that many or most of them are fallible, can you give me a sense of something like “how susceptible to fallibility” you think they are? (Also, if you don’t mind, it might be useful to distinguish your views from what your-model-of-Geoff thinks to help pinpoint disagreements.) Feel free to add additional claims if they seem like they would do a better job of pinpointing the disagreement.
I am, I exist (i.e., the Cartesian cogito).
I am thinking.
I am having an experience.
I am experiencing X.
I experienced X.
I am experiencing X because there is an X-producing thing in the world.
I believe X.
I am having the experience of believing X.
Edit: Wrote this before seeing this comment, so apologies if this doesn’t interact with the content there.
Rob: Where does the reasoning chain from 1 to 3a/3b go wrong in your view? I get that you think it goes wrong in that the conclusions aren’t true, but what is your view about which premise is wrong or why the conclusion doesn’t follow from the premises?
In particular, I’d be really interested in an argument against the claim “It seems like the one thing I can know for sure is that I’m having these experiences.”
OK, excellent this is also quite helpful.
For both my own thought and in high-trust conversations I have a norm that’s something like “idea generation before content filter” which is designed to allow one to think uncomfortable thoughts (and sometimes say them) before filtering things out. I don’t have this norm for “things I say on the public internet” (or any equivalent norm). I’ll have to think a bit about what norms actually seem good to me here.
I think I can be on board with a norm where one is willing to say rude or uncomfortable things provided that (1) they’re valuable to communicate and (2) one makes reasonable efforts to nevertheless protect the social fabric and render the statement receivable to the person to whom it is directed. My vague sense of comments of the form “I know this is uncharitable/rude, but [uncharitable/rude thing]” is that more than half of the time the caveat insulates the poster from criticism and does not meaningfully protect the social fabric or help the person to whom the comments are directed, but I haven’t read such comments carefully.
In any case, I now think there is at least a good and valid version of this norm that should be distinguished from abuses of the norm.
That seems basically fair.
An unendorsed part of my intention is to complain about the comment since I found it annoying. Depending on how loudly that reads as being my goal, my comment might deserve to be downvoted to discourage focusing the conversation on complaints of this type.
The endorsed part of my intention is that the LW conversations about Leverage 1.0 would likely benefit from commentary by people who know what actually went on in Leverage 1.0. Unfortunately, the set of “people who have knowledge of Leverage 1.0 and are also comfortable on LW” is really small. I’m trying to see if I am in this set by trying to understand LW norms more explicitly. This is admittedly a rather personal goal, and perhaps it ought to be discouraged for that reason, but I think indulging me a little bit is consonant with the goals of the community as I understand them.
Also, to render an implicit thing I’m doing explicit, I think I keep identifying myself as an outsider to LW as a request for something like hospitality. It occurs to me that this might not be a social form that LW endorses! If so, then my comment probably deserves to be downvoted from the LW perspective.
Thanks a lot for taking the time to write this. The revised version makes it clearer to me what I disagree with and how I might go about responding.
One area of overlap that I notice between Duncan-norms and LW norms is sentences like this:
(This is not me being super charitable, but: it seems to me that the whole demons-and-crystals thing, which so far has not been refuted, to my knowledge, is also a start. /snark)
Where the pattern is something like: “I know this is uncharitable/rude, but [uncharitable/rude thing].” Where I come from, the caveat isn’t understood to do any work. If I say “I know this is rude, but [rude thing],” I expect the recipient to take offense to roughly the same degree as if there were no caveat at all, and I expect the rudeness to derail the recipient’s ability to think about the topic to roughly the same degree.
If you’re interested, I’d appreciate the brief argument for thinking that it’s better to have norms that allow for saying the rude/uncharitable thing with a caveat instead of having norms that encourage making a similar point with non-rude/charitable comments instead.
This is really helpful. Thanks!
As of writing (November 9, 2021) this comment has 6 Karma across 11 votes. As a newbie to LessWrong with only a general understanding of LessWrong norms, I find it surprising that the comment is positive. I was wondering if those who voted on this comment (or who have an opinion on it) would be interested in explaining what Karma score this comment should have and why.
My view based on my own models of good discussion norms is that the comment is mildly toxic and should be hovering around zero karma or in slightly negative territory for the following reasons:
I would describe the tone as “sarcastic” in a way that makes it hard for me to distinguish between what the OP actually thinks and what they are saying or implying for effect.
The post doesn’t seem to engage with Geoff’s perspective in any serious way. Instead, I would describe it as casting aspersions on a straw model of Geoff.
The post seems more focused on generating applause lights via condemnation of Geoff than on trying to explain why Geoff is part of the Rationality community despite his protestations to the contrary. (I could imagine a comment that tries to weigh the evidence about whether Geoff ought to be considered part of the Rationality community even today, but this comment isn’t it.)
The comment repeatedly implies that Leverage was devoted to activities like “fighting evil spirits,” “using touch healing,” “exorcising demons,” etc., even though the post where those activities are described (1) only covers 2017–2019; (2) doesn’t specify that this kind of activity was common or typical even of the author’s sub-group or of her overall experience; and (3) specifically notes that most people at Leverage didn’t have this experience.
I don’t think the comment is more than mildly toxic because it does raise the valid consideration that Geoff does appear to have positioned himself as at least Rationalist-adjacent early on and because none of the offenses listed above are particularly heinous. I’m sure others disagree with my assessment and I’d be interested in understanding why.
[Context: I work at Leverage now, but didn’t during Leverage 1.0 although I knew many of the people involved. I haven’t been engaging with LessWrong recently because the discussion has seemed quite toxic to me, but Speaking of Stag Hunts and in particular this comment made me a little bit more optimistic so I thought I’d try to get a clearer picture of LessWrong’s norms.]
The most directly ‘damning’ thing, as far as I can tell, is Geoff pressuring people to sign NDAs.
I received an email from a Paradigm board member on behalf of Paradigm and Leverage that aims to provide some additional clarity on the information-sharing situation here. Since the email specifies that it can be shared, I’ve uploaded it to my Google Drive (with some names and email addresses redacted). You can view it here.
The email also links to the text of the information-sharing agreement in question with some additional annotations.
[Disclosure: I work at Leverage, but did not work at Leverage during Leverage 1.0. I’m sharing this email in a personal rather than a professional capacity.]
Instead, what I’d be curious to know is whether they have the integrity to be proactively transparent about past mistakes, to radically change course when it comes to potentially harmful practices, and to refrain from using any potentially harmful practices in cases where doing so might be advantageous on a Machiavellian-consequentialist assessment.
I think skepticism about nice words without difficult-to-fake evidence is warranted, but I also think some of this evidence is already available.
For example, I think it’s relatively easy to verify that Leverage is a radically different organization today. The costly investments we’ve made in history of science research provide the clearest example, as does the fact that we’re no longer pursuing any new psychological research.
This is a good point. I think I reacted too harshly. I’ve added an apology to orthonormal to the original comment.
Assuming something like this represents your views Freyja, then I think you’ve handled the situation quite well.
I hope you can see how that is quite different from the comment I was replying to, which was written by someone who appears to have met Geoff once. I’m sure you can similarly imagine how you would feel if people made comments like the one from orthonormal about friends of yours without knowing them.
What an incredibly rude thing to say about someone. I hope no one ever posts their initial negative impressions upon meeting you online for everyone to see.
Geoff Anders is a real person. Stop treating him like he’s not.
Added: This comment was too harsh given the circumstance. My apologies to orthonormal for overreacting.
Leverage keeps coming up because Geoff Anders (and associates) emit something epistemically and morally corrosive and are gaslighting the commons about it. And Geoff keeps trying to disingenuously hit the reset button and hide it, to exploit new groups of people. That’s what people are responding to and trying to counteract in posts like the OP.
This seems pretty unfair to me, and I believe we’re trying quite hard not to hide the legacy of Leverage 1.0. For example, (1) we specifically chose to keep the Leverage name; (2) we are transparent about our intention to stand up for Leverage 1.0; and (3) Geoff’s association with Leverage 1.0 is quite clear from his personal website. Additionally, given the state of Leverage’s PR after Leverage 1.0 ended, the decision to keep the name was quite costly and stemmed from a desire to preserve the legacy of Leverage 1.0.
I want to draw attention to the fact that “Kerry Vaughan” is a brand new account that has made exactly three comments, all of them on this thread. “Kerry Vaughan” is associated with Leverage. “Kerry Vaughan”’s use of “they” to describe Leverage is deliberately misleading.
I’m not hiding my connection to Leverage which is why I used my real name, mentioned that I work at Leverage in other comments, and used “we” in connection with a link to Leverage’s case studies. I used “they” to refer to Leverage 1.0 since I didn’t work at Leverage during that time.
I don’t think that’s my account actually. It’s entirely possible that I never created a LW account before now.
Sorry if this comes off as pedantic, but I don’t know what this means. The philosopher in me keeps saying “I think we’re playing a language game,” so I’d like to get as precise as we can. Is there a paper or SEP article or blog post or something that I could read which defines the meaning of this claim or the individual terms precisely?
I don’t know Geoff’s view, but Descartes thinks he can be deceived about mathematical truths (I can dig up the relevant sections from the Meditations if helpful). That’s not the same as “treating them as probabilistic statements,” but I think it’s functionally the same from your perspective.
The project of the Meditations is that Descartes starts by refusing to accept anything that can be doubted and then tries to build a system of knowledge from there anyway. I don’t think Descartes would assign infinite certainty to any claim except, perhaps, the cogito.