I mean it depends a lot on what we mean by “smart people”. I’m thinking of theists like a bright philosophy student on the dumber end of smart, C. S. Lewis in the middle, and geniuses like Leibniz on the smarter end. People whose ideas might actually be worth engaging with. E.g. if your friend or someone at a party is a bright philosophy student, it might be worth engaging with them, or if you have some free time it might be a good idea to check out the ideas of some smart Christians like C. S. Lewis, and everyone in the world should take the time to check out the genius of Leibniz considering he was a theist and also the father of computer science. Their ideas are often decently sophisticated, not just something that can be described and discarded as “ontologically fundamental mental states”, and it’s worth translating their ideas into a decent language where you can understand them a little better. And if it happens to give you okay ideas while doing so, all the better, but that’s not really the point.
Who is “we”? It’s your claim. Tell me what you mean or I will think you are equivocating: in one sense at least hundreds of millions of believers are smart, while in another, restricted to those within the top 1% of the top 1% of the top 1% of humans, only a handful may qualify; most senses in between might mean something like what you said.
some smart Christians like C. S. Lewis
Your philosophy has just been downchecked in my mind. I read much of his stuff before I could have been biased against him for being Christian; even The Screwtape Letters would have been a worthwhile exercise for an atheist writer, and I didn’t know he was Christian when I read even those.
Their ideas are often decently sophisticated
The number of parts you have to add to a perpetual motion machine to hide from yourself the fact that it doesn’t work is proportional to your intelligence.
The following sentences are meant to be maximally informative given that I am unwilling to put in the necessary effort to actually respond. I apologize that I am unwilling to actually respond.
The general skill that I think is important is the skill you’re failing to demonstrate in your comment. It is a skill that I know you have, and would use if you had a confident model of me as a careful thinker. My suggestion is to just use that skill more often, for your sake and my sake and for the sake of group epistemology at all levels of organization. Just charity.
It is a skill that I know you have, and would use if you had a confident model of me as a careful thinker.
I have a confident model that you are a better thinker than posts like these suggest. But as Wei Dai says, that’s not enough: I don’t want to see posts that are unpleasant to read (not only for the cryptic obscurity, but also for excessive length and lack of paragraphing), don’t have enough valuable content to justify wading through, and turn people off of Less Wrong. Worse, since I know you can do better, these flaws feel like intentional defection with respect to Less Wrong norms of clarity in communication.
In order to be perceived as being a careful thinker by others, you have to send credible signals of being a careful thinker, and avoid sending any contrary signals. You’ve failed to do so on several recent occasions. How come you don’t consider that to be a very important skill?
Do you suggest that people should be epistemically charitable even towards others (and you specifically) who they don’t think are careful thinkers? You gave a number of reasons why people might want to do that, but as you admitted, the analysis omits opportunity costs.
Think about it this way: everything you write on LW will probably be read by at least 20 people, and many more for posts. Why should 20+ people spend the effort of deciphering your cryptic thoughts, when you could do it ahead of time or upon request but implicitly or explicitly decide not to? Just for practice? What about those who don’t think this particular occasion is the best one for such practice? Notice that this applies even when you are already perceived as a careful thinker. If you’re not, then they have even less reason to spend all that effort.
Do you suggest that people should be epistemically charitable even towards others (and you specifically) who they don’t think are careful thinkers?
Not in general, no. It’s pretty context-sensitive. I think they should do so on Less Wrong where we should aim to have insanely exceptionally high standards of group epistemology. I do think that applies doubly for folk like me who have a decent chunk of karma and have spent a lot of time with a lot of very smart people, but I am not sure how many such people contribute to LW, so it’s probably not a worthwhile norm to promote. If LW was somewhat saner perhaps they would, though, so it’s unclear.
I am a significantly better rationalist than the LW average and I’m on the verge of leaving, which says a whole bunch about my lack of ability to communicate, but also some non-negligible amount about LW’s ability to understand humans who don’t want to engage in the negative sum signalling game of kowtowing to largely-unreflected-upon local norms. (I’m kind of ranting here and maybe even trolling slightly; it’s very possible that my evaluations aren’t themselves stable under reflection. (But at least I can recognize that...))
How come you don’t consider that to be a very important skill?
Right, so your comment unfortunately assumes something incorrect about my psychology, i.e. that it is motivationally possible for me to make my contributions to LW clearer. I once put a passive-aggressive apology at the bottom of one of my comments; perhaps if I continue to contribute to LW I’ll clean it up and put it at the bottom of every comment.
Point being, this isn’t the should world, and I do not have the necessary energy (or writing skills) to pull an Eliezer and communicate across years’ worth of inferential distance. Other humans who could teach what I would teach are busy saving the world, as I try to be. That said, I’m 19 years old and am learning skills at a pretty fast rate. A few years from now I’ll definitely have a solid grasp of a lot of the technical knowledge that I currently only informally (if mildly skillfully despite that) know how to play with, and I will also have put a lot more effort into learning to write (or learning to bother to want to communicate effectively). If the rationalist community hasn’t entirely disintegrated by then, then perhaps I’ll be able to actually explain things for once. That’d be nice.
Back to the question: I consider signalling credibility to be an important skill. I also try to be principled. If I did have the necessary motivation I would probably just pull an Eliezer and painstakingly explain every little detail with its own 15-paragraph post. But there is also some chance that I would just say “I refuse to kowtow to people who are unwilling to put the necessary effort into understanding the subtleties of what I am trying to say, and I doubly refuse to kowtow to people who assume I am being irrational in completely obvious ways simply because I am saying something that sounds unreasonable without filling in all of the gaps for them”. But not if I’d spent a lot of time really hammering into my head that this isn’t the should world, or if I’d learned to truly empathize with the psychology of the kind of human that thinks that way, which is pretty much every human ever.
(Not having done these things might be the source of my inability to feel motivated to explain things. Despair at how everyone including LW is batshit insane and because of that everyone I love is going to die, maybe? And there’s nothing I can do to change that? That sounds vaguely plausible. Hard to motivate oneself in that kind of situation, hard to expect that anything can actually have a substantial impact. Generalized frustration. I just have to remember, this isn’t the should world, it is only delusion that would cause me to expect anything else but this, people do what they have incentive and affordance to do, there is no such thing as magical free will, I am surely contemptible in a thousand similar ways, I implicitly endorse a thousand negative sum games because I’ve implicitly chosen to not reflect on whether or not they’re justified, if anyone can be seen as evil then surely I can, because I actually do have the necessary knowledge to do better, if I am to optimize anyone I may as well start with myself… ad nauseam.)
There’s some counterfactual world where I could have written this comment so as to be in less violation of local norms of epistemology and communication, and it is expected of me that I acknowledge that a tradeoff has been made which keeps this world from looking like that slightly-more-optimized world, and feel sorry about that necessity, or something, so I do. I consequently apologize.
But there is also some chance that I would just say “I refuse to kowtow to people who are unwilling to put the necessary effort into understanding the subtleties of what I am trying to say, and I doubly refuse to kowtow to people who assume I am being irrational in completely obvious ways simply because I am saying something that sounds unreasonable without filling in all of the gaps for them”.
I don’t think it’s possible to understand what you are trying to say: even assuming there is indeed something to understand, you don’t give enough information to arrive at a clear interpretation. It’s not a matter of unwillingness. And the hypothesis that someone is insane (at least in one compartment) is more plausible than that they are systematically unable/unwilling to communicate clearly insights of unreachable depth, and so only leave cryptic remarks indistinguishable from those generated by the insane. (This remains a possibility, but needs evidence to become more than that. Hindsight or private knowledge don’t justify demanding prior beliefs that overly favor the truth.)
There are people who know me in person and thus share background knowledge with me, who are able to understand what I am saying. They are the thinkers I admire most and the people I care most about influencing. I have a hypothesis, which may just be wrong, that people who are particularly good thinkers would notice that I wasn’t just insane-in-a-relevant-way and be able to fill in the gaps that would let them understand what I am saying. I have this hypothesis because I think that I have that skill to a large extent, as I believe do others like Michael Vassar or Peter de Blanc or Steve Rayhawk or generally people who bother to train that skill.
I notice that some people who I think are good thinkers, such as yourself, seem to have a low overall estimate of the worthwhileness of my words. However I have accumulated a fair amount of evidence that you do not have the skill of reading (or choose not to exercise the skill of reading), that is, that you err on the side of calling bullshit when I know for certain that something is not bullshit, and rarely err in the opposite direction. If you had to choose a side to be biased towards then that would of course be the correct one, but it isn’t clear that such a choice is necessary to be a strong rationalist, as I think is evidenced by Steve Rayhawk, Peter de Blanc, and Michael Vassar (three major influences on my thinking, in descending order of influence). Thus I do not consider your low estimate of my rationality to be overwhelming evidence that it is in fact impossible to understand what I am trying to say even without sharing much background knowledge with me. I suspect that e.g. Wei Dai has a lowish estimate of my rationality w.r.t. things he is interested in; my model of Wei Dai has him as less curious than you are about things that I yammer about, so my wild guess at his thoughts on the matter is particularly little evidence compared to your thoughts. I plan on getting more information about this in time.
my model of Wei Dai has him as less curious than you are about things that I yammer about
If you mean the nature of superintelligence, I’m extremely curious about that, but I think the way you’re going about trying to find out is unlikely to lead to progress. To quote Eric Drexler, “most new ideas are wrong or inadequate.” The only way I can see how humans can make progress, when we’re running on such faulty hardware and software, is to be very careful, to subject our own ideas to constant self-scrutiny for possible errors, and to be as precise as possible in our communications, and to lay down all the steps of our reasoning, so others can understand what we mean and how exactly we arrived at our conclusions, so they can help find our errors for us.
Now sometimes one could have a flash of inspiration—an idea that might be true or an approach that seems worth pursuing—but not know how to justify that intuition. It’s fine to try to communicate such potential insights, but this can’t be all that you do. Most of your time still has to be spent trying to figure out whether these seeming inspirations actually amount to anything, whether there are arguments that can back up your intuitions, and whether these arguments stand up to scrutiny. If you are not willing to put a substantial amount of effort into doing this yourself, then you shouldn’t be surprised that few others are willing to do it for you (i.e., take you seriously), especially when you do not even make a strong effort to use language that they can easily understand.
There are people who know me in person and thus share background knowledge with me, who are able to understand what I am saying. They are the thinkers I admire most and the people I care most about influencing.
I would be interested to know if any of your intuitive leaps have led any of those people to make any progress beyond “a new idea that’s almost certain to be wrong even if we’re not sure why” to “something that seems likely to be an improvement over the previous state of the art”. (It’s possible that you have a comparative advantage in making such leaps of intuition, even though a priori that seems unlikely.)
you [Nesov] err on the side of calling bullshit when I know for certain that something is not bullshit
Do you have any examples? (This is unrelated to my points above. I’m just curious.)
(Warning, long comment; it stays mostly on track but is embarrassingly mostly self-centered.) I think I must have been being imprecise when I said you were “less curious” about the things I yammer about, and honestly I don’t remember what I was thinking at the time and won’t try to rationalize it. (I wasn’t on adderall then but am on adderall now; there may be state-dependent memory effects.) I thus unendorse at least that part of the grandparent.
I think that everything you’re saying is correct, and note the interesting similarities between my case and Nesov’s two years ago—except Nesov had actual formal technical understanding and results and still couldn’t easily communicate his insights, whereas my intuition is not at all grounded in formality. I definitely won’t be contributing to decision theory progress any time soon, and probably never will—I can get excited about certain philosophical themes or aesthetics and stamp things with my mark of intuitive approval, but there is very little value in that unless I’m for some reason in a situation where people with actual skills can bounce ideas off of me. (I am trying to set up that situation currently but I’m trying not to put too much weight on it.)
I am still very confused about how actual progress in decision theory-like fields works, though, insofar as the things I see on the mailing list, e.g. the discussion of Loebian blindspots, look like resolving side issues where the foundations are weak. I don’t see how getting the proofs right helps much, whereas I was very excited by Nesov’s focus on e.g. reversibility and semantics; much of this comes from being happy that Nesov has certain conceptual aesthetics which I endorse. You could perhaps characterize this as not understanding Slepnev’s style of research. I see your style as somewhere between Nesov’s and Slepnev’s. Perhaps research styles or methodology would make for a useful LW discussion post, or a decision theory list email? Or is my notion of “style” just off? I have never been involved in mathematical-esque research, nor have I read about how it works besides Polya’s How to Solve It and brief accounts of e.g. quantum mechanics research.
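(For readers who don’t follow the decision theory list: the jargon above traces back to Löb’s theorem, which in provability-logic notation says that if a theory proves that provability of P implies P, then it already proves P outright:

\[ \Box(\Box P \rightarrow P) \rightarrow \Box P \]

The “blindspot” discussions are, as I understand them, about agents whose trust in their own proofs runs into exactly this limit.)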
Anyway. Currently there is only one actual-decisions-relevant scenario where I see the sort of thinking I do being useful, and in that sense I sort of think of it as my scenario of comparative advantage. But unfortunately I’ve yet to talk to people who either have thought very deeply about very similar issues or have relevant technical knowledge, those people being Shulman and Nesov. The scenario I’m thinking of is where we have a non-provably-Friendly AI or a uFAI but there are other existential risks to worry about. (I think this scenario may be the default, though—it seems somewhat likely to me that AGI is within reach of this generation of humans, whereas it is unclear if something-like-provably Friendly AI is possible, or what value there is in somewhat-more-stable-than-hacked-together AI.) It would be useful to understand what sorts of attractors there are for a self-modifying AI to fall into for either its decision theory or utility function, what the implications of our decision to run a uFAI would be in terms of either causal or acausal game theory, and generally what the heck we’d be knowingly inflicting on the multiverse if we decided to hit the big red button.
These questions and questions like them lend themselves to thorough models and rely on precise technical knowledge but aren’t obviously questions that can be formalized. Such questions are in the grey area between the answerable-technical and the unanswerable-philosophical, with a focus on the nature of intelligence: precisely where Less Wrong-style rationality skills are most necessary and most useful. Likewise questions about “morality”, which are nestled between formal utility theory and decision theory on one side, highly qualitative “naturalistic meta-ethics” on another, and informal but technical and foundational questions about computation on a third under-explored side. Better understanding these questions has a low but non-negligible chance at affecting either singularity-focused game theory or the design choices guiding the development of FAI or somewhat-Friendly AI.
I think about things at about that level of technicality seeing as I have an automatic disposition to obsess about such questions and may or may not have a knack for doing so in a useful manner. My ability to excel at such thinking is hard to analyze; I think playing with models of complex systems, like multilevel selection, and seeing to what extent my intuitions are verified or not by the systems, would be one way to both check and train relevant intuitions. Another relevant field is probably psychology, where I have a few ideas which I think could be tested. Computational cognitive science is a relevant intuition-testing and intuition-building field and I’ve managed to nab myself a girlfriend who is going into it. Rayhawk wants to build a suite of games that train low-level probabilistic reasoning which I think would also help. He’s written up one very small one thus far and it would be excellent if Less Wrong could start a project to bring the idea to life. But that’s a story for another day.
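(To make “playing with models of multilevel selection” concrete, here is a minimal sketch of the kind of toy I have in mind; the setup and every parameter in it are made up purely for illustration:

```haskell
-- Toy two-level selection model: groups containing cooperators and defectors.
type Group = (Double, Double)  -- (group size, fraction of cooperators)

b, c :: Double
b = 0.5   -- group-level growth benefit from cooperators (between-group selection)
c = 0.1   -- within-group cost of cooperating (within-group selection)

step :: [Group] -> [Group]
step = map update
  where
    update (n, p) =
      ( n * (1 + b * p)          -- more cooperative groups grow faster
      , p * (1 - c * (1 - p)) )  -- but defectors gain ground inside every group

-- Population-wide fraction of cooperators, pooled across groups.
globalFraction :: [Group] -> Double
globalFraction gs = sum [n * p | (n, p) <- gs] / sum [n | (n, _) <- gs]

main :: IO ()
main = mapM_ (print . globalFraction)
             (take 10 (iterate step [(100, 0.9), (100, 0.1)]))
```

Within every single group the cooperator fraction falls at each step, yet the pooled fraction rises for a while because the more cooperative groups outgrow the rest; checking whether my intuitions predict that sort of Simpson’s-paradox behavior before running the numbers is exactly the kind of exercise I mean.)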
I consider it somewhat likely that in 6 months I will look back and think myself an utter fool to expect to make any useful progress on thinking about such things. In the meantime I don’t expect LW folk to bother to try to understand my cryptic thoughts, especially not when everyone has so many of their own to worry about worrying about.
I would be interested to know if any of your intuitive leaps have led any of those people to make any progress beyond “a new idea that’s almost certain to be wrong even if we’re not sure why” to “something that seems likely to be an improvement over the previous state of the art”.
I think the intuitive leaps I’m most proud of are in just-maybe-sort-of-almost understanding some of Rayhawk’s ideas and maybe provoking him to develop them slightly further or recall them after a few months or years of rust. I don’t have a very good idea of how useful all of my philosophicalish conversation with him has been. His ideas are uniformly a lot better than mine. If for some reason I can convince both him and SingInst that he should be doing FAI work then perhaps I’ll have a much better model of how useful my philosophical aesthetics are, or how useful they might be if I supplemented them with deep technical-formal knowledge-understanding. I currently model myself as being somewhat useful to bounce ideas off of but not yet a, ya know, real FAI researcher, not by a longshot. My aim is to become a truly useful research assistant in the next few years while realizing my apparent cognitive comparative advantage.
Do you have any examples?
The combination of social awkwardness and non-trivial difficulty of tracking down examples makes me rather averse to doing so; on the other hand I think Nesov would probably like to see such examples and I have something of a moral obligation to substantiate the claim. The realistic model of my behavior says I won’t end up providing the examples. However the realistic model of my behavior says that in the future if I come across such examples I will PM Nesov. I think however that I’d rather not list such gripes in public; I feel like it sets a bad precedent or something. (Interestingly Yudkowsky is a celebrity and thus such moral qualms have never applied to him in my head. I do regret being harsher on Eliezer than was called for; it’s way too easy to forget he’s a person as well as a meme and meme-generator.)
My aim is to become a truly useful research assistant in the next few years while realizing my apparent cognitive comparative advantage.
Are you working on training yourself to understand graduate-level logic, set theory and category theory? That’s my current best guess at an actionable thing an aspiring FAI researcher should do, no matter what else is on your plate (and it’s been a stable conclusion for over a year).
Not yet, but very soon now. (The plan for category theory is to get proficient with Haskell and maybe kill two birds with one stone by playing with functional inductive programming (which uses category theory). I do not yet have plans for set theory or logic; I don’t really understand what they’re trying to do very well. Or like, my brain hasn’t categorized them as “cool”, whereas my brain has categorized category theory as “cool”, and I think that if I better understood what was cool about them then I’d have a better idea of where to start. I was sort of hoping I could somehow learn all my math in terms of categories, which is still technically a possibility I guess but not at all something I can do on my own.)
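(A hint at what the first rung of “category theory via Haskell” might look like; the Opt type below is just a hand-rolled stand-in for the standard Maybe:

```haskell
-- Hask, with types as objects and functions as morphisms, is the ambient
-- category; a Functor instance is (roughly) an endofunctor on it.
data Opt a = None | Some a
  deriving Show

instance Functor Opt where
  fmap _ None     = None
  fmap f (Some x) = Some (f x)

-- The categorical content lives in the functor laws, which Haskell
-- does not check for you:
--   fmap id      == id
--   fmap (g . f) == fmap g . fmap f

main :: IO ()
main = print (fmap (+ 1) (Some 41))   -- Some 42
```

The syntax is the easy part; the content is in the laws and in recognizing the same pattern across many examples, which is presumably why the usual advice is not to lean on Haskell alone.)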
I don’t recommend studying category theory at any depth before at least some logic, abstract algebra and topology. It can feel overly empty of substance without a wealth of examples already in place; it’s not called “abstract nonsense” for nothing. Follow my reading list if you don’t have any better ideas or background, and maybe ask someone else for advice. I don’t like some of this stuff either; I just study it because I must.
(The plan for category theory is to get proficient with Haskell and maybe kill two birds with one stone by playing with functional inductive programming (which uses category theory)
I’ve known many people who have tried to walk down this path and failed. The successful ones I know, knew one before the other.
The scenario I’m thinking of is where we have a non-provably-Friendly AI or a uFAI but there are other existential risks to worry about. (I think this scenario may be the default, though—it seems somewhat likely to me that AGI is within reach of this generation of humans, whereas it is unclear if something-like-provably Friendly AI is possible, or what value there is in somewhat-more-stable-than-hacked-together AI.) It would be useful to understand what sorts of attractors there are for a self-modifying AI to fall into for either its decision theory or utility function, what the implications of our decision to run a uFAI would be in terms of either causal or acausal game theory, and generally what the heck we’d be knowingly inflicting on the multiverse if we decided to hit the big red button.
you err on the side of calling bullshit when I know for certain that something is not bullshit, and rarely err in the opposite direction
It’s quite possible, since originally, before retreating to this mode 1.5-2 years ago, I was suffering from mulling over external confusing ideas while failing to accumulate useful stuff among all that noise (the last idea on this list was Ludics; now most of the noise I have to deal with is what I generate myself, but I seem to be able to slowly distill useful things from that, and I got into a habit of working on building up well-understood technical skills).
I guess I should allocate a new category for things that I won’t accept into my mind, as a matter of personal epistemic hygiene, but that I still won’t get too confident are nonsense. I would still disapprove of these things for not being useful for many, or even for being damaging to people like me-3-years-ago.
You stopped obsessing about things like ludics? Game semantics-like-stuff sounded so promising as a perspective on timeless interaction. Are you building fine-tuned decision theoretic versions of similar ideas from scratch?
Game semantics etc. were part of a search that was answered by ADT (alternatively, by finally understanding UDT); they fail to answer this question in the sense that they explore explicit counterfactuals rather than explaining where counterfactuals come from.
After that, I tried building on ADT, didn’t get very far, then tried figuring out the epistemic role of observations (which UDT/ADT deny), and I think I was successful (the answer being a kind of “universal” platonism where physical facts are seen as non-special, logical theories as machines for perceiving abstract facts normally external to themselves, and processes as relating facts along their way, which generalize to ways of knowing physical facts, as in causality; this ontological stance seems very robust and describes all sorts of situations satisfactorily). This as yet needs better toy models as examples, or a better-defined connection to standard math, which I’m currently trying to find.
I think they should do so on Less Wrong where we should aim to have insanely exceptionally high standards of group epistemology.
One of the ways we do this is by telling people when they are writing things that are batshit insane. Because you were. It wasn’t deep. It was obfuscated, scattered and generally poor-quality thought. You may happen to be personally awesome. Your recent comments, however, sucked. Not “were truly enlightened but the readers were not able to appreciate it”. They just sucked.
Sorry, which comments sucked? The majority of my recent comments have been upvoted, and very few were particularly obfuscated. I had one post that was largely intended to troll people and another comment that was intended to be for the lulz and which I obviously don’t think people should be mining for gold. (Which is why I said many times in the comment that it was poor quality syncretism and also bolded that it was just for fun.)
(Tangential: Is “batshit insane” Nesov’s vocabulary? It’s been mine for awhile.)
(Sorry for that, I usually need some time to debug a thought into a form I actually endorse. Don’t believe all things I say in real time, I disagree with some of them too, wait for a day to make sure. The comment was fixed before I read this echo.)
(Sorry for that, I usually need some time to debug a thought into a form I actually endorse. Don’t believe all things I say in real time, I disagree with some of them too, wait for a day to make sure.)
(The phrase was Will’s, which you adopted in your reply and I in turn used in mine. Origins traced.)
Interesting. So I was primed, generated the same phrase without realizing it was the priming, the phrase was sufficiently unfamiliar that I made a google search to more accurately see its connotations, used and posted it anyway, but then recognized that it didn’t paint an adequate picture. The process of debugging the details is such a bore, but the only way that works.
The ones referred to by Wei_Dai in the comment you were refuting/dismissing.
(Tangential: Is “batshit insane” Nesov’s vocabulary? It’s been mine for awhile.)
Yes, reading your comment in more detail I found that you used it yourself, so I removed the disclaimer. I didn’t want to introduce the term without it being clear to observers that I was just adopting the style from the context.
Can you please explain a bit more what the point is? I’m having trouble figuring out why I would want to try to understand something, if not to get “okay” ideas.
There are many, but unfortunately I only have enough motivation to list a few:
If talking to someone with strange beliefs in person, legitimately trying to engage with their ideas is an easy way to signal all kinds of positive things. (Maturity, charity, epistemic seriousness, openness to new experiences or ideas, and things like that, as opposed to common alternatives like abrasiveness, superficiality, pedantry, and the like.)
Reading things by smart folk who believe things that at least initially appear to be obviously false is a way to understand how exactly humans tend to fail at epistemic reasoning. For example, when I read Surprised by Joy by C. S. Lewis—not to learn about his religion, but to read about sehnsucht, something I often experience—it was very revealing how he described his conversion from unreflective atheism to idealist monadology-esque-ness/deism-ness to theism to Christianity. Basically, he did some basically sound metaphysical reasoning—though of course not the kind that constrains anticipations—which led him all the way to nigh-deism. ‘We are all part of a unified universe, our responsibility is to experience as much of the universe as possible so it can understand itself’ or something like that. All of a sudden he’s thinking ‘Well I already believe in this vague abstract force thingy, and the philosophers who talk about that are obviously getting their memes from earlier philosophers who said the same thing about God, and this force thingy is kinda like God in some ways, so I might as well consider myself a theist.’ Then, in an off-the-cuff conversation with an atheist friend and scholar, he learns that Jesus Christ probably actually existed, and then he gets very vague and talks about how he suddenly doesn’t remember much and oh yeah all of a sudden he’s on his way to the zoo and realizes he’s a Christian. It’s not really clear what this entails in terms of anticipations, though he might’ve talked about his argument from sehnsucht for the existence of heaven. Anyway, it’s clear from what he wrote that he just felt uncomfortable and somewhere along the line stopped caring as much about reasons, and started just, ya know, going with what seemed to be the trend of his philosophical speculations, which might I remind you never paid rent in anticipated experience up until that very last, very vague step. I found it to be a memorable cautionary tale, reading the guy’s own words about his fall into the entropy of insanity. Whether or not Christianity is correct, whatever that means, it is clear that he had stopped caring about reasons, and it is clear that this was natural and easy and non-extraordinary. As someone who does a fair bit of metaphysical reasoning that doesn’t quite pay rent in anticipated experience, or doesn’t pay very much rent anyway, I think it is good to have Lewis’s example in mind.
Building the skill of actually paying attention to what people actually say. This is perhaps the most important benefit. Less Wrong folk are much better at this than most persons, and this skill itself goes a long, long way. The default for humans is of course to figure out which side the other person is arguing for and then either spout a plausibly-related counterargument for your chosen side if it is the opposite, or nod in agreement or the like if they’re on your team. Despite doing it much less than most humans, it still appears to be par for the course for aspiring rationalists. (But there may be some personal selection bias ‘cuz people pattern match what I (Will_Newsome) say to some other stupid thing and address the stupid generator of that stupid thing while bypassing whatever I actually said, either because I am bad at communication or because I’ve been justifiably classified as a person who is a priori likely to be stupid.) It is worth noting that sometimes this is a well-intentioned strategy to help resolve others’ confusions by jumping immediately to suggesting fixes for the confusion-generator, but most often it’s the result of sloppy reading. Anyway, by looking carefully at what smart people say that disagrees with what you believe or value, you train yourself to generally not throw away possibly countervailing evidence. It may be that what was written was complete tosh, but you won’t know unless you actually check from time to time, and even if it’s all tosh it’s still excellent training material.
Practice learning new concepts and languages. This is a minor benefit as generally it would be best to learn a directly useful new conceptual language, e.g. category theory.
Cultural sophistication, being able to signal cultural sophistication. Though this can easily implicitly endorse negative sum signalling games and I personally don’t see it as a good reason if done for signalling. That said, human culture is rich and complex, and I personally am afraid of being held in contempt as unsophisticated by someone like Douglas Hofstadter for not having read enough Dostoyevsky or listened to enough Chopin, so I read Dostoyevsky and listen to Chopin (and generally try to be perfect, whatever that means). Truly understanding spirituality and to a lesser extent religion is basically a large part of understanding humans and human culture. Though this is best done experientially, just like reading and listening to music, it really helps, especially for nerds, to have a decent theoretical understanding of what spiritualists and religionists might or might not be actually talking about.
Related to the above, a whole bunch of people assert that various seemingly-absurd ideas are incredibly important for some reason. I find this an object of intrinsic curiosity and perhaps others would too. In order to learn more it is really quite important to figure out what those various seemingly-absurd ideas actually are.
I could probably go on for a while. I would estimate that I missed one or two big reasons, five mildly persuasive reasons, and a whole bunch of ‘considerations’. Opportunity costs are of course not taken into account in this analysis.
Let me rephrase my question. You decided, on this particular occasion, taking into account opportunity costs, that it was worth trying to understand somebody, for a reason other than to get “okay” ideas. What was that reason?
You mean my original “let’s talk about Jesus!” comment? I think I bolded the answer in my original comment: having fun. (If I’d known LW was going to interpret what I wrote as somehow representative of my beliefs then I wouldn’t have written it. But I figured it’d just get downvoted to −5 with little controversy, like most of my previous similar posts were.)
Why is it fun? (That is, can you take a guess at why your brain’s decided it should be fun? This way of posing the question was also the primary intended meaning for my assertion about countersignaling, although it assumed more introspective access. You gave what looked like an excuse/justification on how in addition to being fun it’s also an exercise of a valuable skill, which is a sign of not knowing why you really do stuff.)
Bleh, I think there may be too much equivocation going on, even though your comment is basically correct. My original “insane” comment is not representative of my comments, nor is it a good example of the skill of charitable interpretation.
When I give justifications they do tend to be pretty related to the causes of my actions, though often in weird double-negative ways. Sometimes I do something because I am afraid of the consequences of doing something, in a self-defeating manner. I think a lot of my trying to appear discreditable is a defense mechanism put up because I am afraid of what would happen if I let myself flinch away from the prospect of appearing discreditable, like, afraid of the typical default failure mode where people get an identity as someone who is “reasonable” and then stops signalling and thus stops thinking thoughts that are “unreasonable”, where “reason” is only a very loose correlate of sanity. My favorite LW article ever is “Cached Selves”, and that has been true for two years now. Also one of my closest friends co-wrote that article, and his thinking has had a huge effect on mine.
I think saying it was “fun” is actually the rationalization, and I knew it was a rationalization, and so I was lying. It’s a lot more complex than that. I wrote it more because I was feeling frustrated at what I perceived to be an unjustified level of contempt in the Less Wrong community. (/does more reflection to make sure I’m not making things up.) Okay. Also, relatedly, part of it was wanting to signal insanity for the reasons outlined above, or reasons similar to the ones outlined above in the sense of being afraid of some consequence of not doing something that I feel is principled, or something that I feel would make me a bad person if I didn’t attempt to do. Part of it was wanting to signal something like cleverness, which is maybe where some of the “fun” happens to be, though I can only have so much fun when I’m forced to type very quickly. Part of it was trolling for its own sake on top of the aforementioned anti-anti-virtuous rationale, though where the motivation for “trolling for its own sake” came from might be the same as that anti-anti-virtuous rationale but stemming from a more fundamental principle. I would be suspicious if any of these reasons claimed to be the real reason. Actions tend to follow many reasons in conjunction. (/avoids going off on a tangent about the principle of sufficient reason and Leibniz’s theodicy for irony’s sake.)
It’s interesting because others seem to be much more attached to certain kinds of language than I am, and so when they model me they model me as being unhealthily attached to the language of religion or spirituality or something for its own sake, and think that this is dangerous. I think this may be at least partially the typical mind fallacy. I am interested in these languages because I like trolling people (and I like trolling people for many reasons as outlined above), but personally much prefer the language of algorithmic probability and generally computationalism, which can actually be used precisely to talk about well-defined things. I only talk in terms of theism when I’m upset at people for being contemptuous of theism. Again there are many reasons for these things, often at different levels of abstraction, and it’s all mashed together.
What do you think the chances are that the above describes reality better than the OP implicitly does?
Can you quantify that? Approximately how many people are we talking about here? A thousand? A million? A billion?
I mean it depends a lot on what we mean by “smart people”. I’m thinking of theists like a bright philosophy student on the dumber end of smart, C. S. Lewis in the middle, and geniuses like Leibniz on the smarter end. People whose ideas might actually be worth engaging with. E.g. if your friend or someone at a party is a bright philosophy student, it might be worth engaging with them, or if you have some free time it might be a good idea to check out the ideas of some smart Christians like C. S. Lewis, and everyone in the world should take the time to check out the genius of Leibniz considering he was a theist and also the father of computer science. Their ideas are often decently sophisticated, not just something that can be described and discarded as “ontologically fundamental mental states”, and it’s worth translating their ideas into a decent language where you can understand them a little better. And if it happens to give you okay ideas while doing so, all the better, but that’s not really the point.
Who is “we”? It’s your claim. Tell me what you mean or I will think you are equivocating, as at least hundreds of millions of believers are smart in a sense, and in another, those within the top 1% of the top 1% of the top 1% of humans, only a handful may qualify, the majority of which might mean something like what you said.
Your philosophy has just been downchecked in my mind. I read much of his stuff before I could have been biased against him for being Christian, even the Screwtape Letters would have been a worthwhile exercise for an atheist writer, I didn’t know he was Christian when I read even those.
The number of parts you have to add to a perpetual motion machine to hide from yourself the fact that it doesn’t work is proportional to your intelligence.
The following sentences are meant to be maximally informative given that I am unwilling to put in the necessary effort to actually respond. I apologize that I am unwilling to actually respond.
The general skill that I think is important is the skill you’re failing to demonstrate in your comment. It is a skill that I know you have, and would use if you had a confident model of me as a careful thinker. My suggestion is to just use that skill more often, for your sake and my sake and for the sake of group epistemology at all levels of organization. Just charity.
I have a confident model that you are a better thinker than posts like these suggest. But as Wei Dai says, that’s not enough: I don’t want to see posts that are unpleasant to read (not only for the cryptic obscurity, but also for excessive length and lack of paragraphing), don’t have enough valuable content to justify wading through, and turn people off of Less Wrong. Worse, since I know you can do better, these flaws feel like intentional defection with respect to Less Wrong norms of clarity in communication.
In order to be perceived as being a careful thinker by others, you have to send credible signals of being a careful thinker, and avoid sending any contrary signals. You’ve failed to do so on several recent occasions. How come you don’t consider that to be a very important skill?
Do you suggest that people should be epistemically charitable even towards others (and you specifically) who they don’t think are careful thinkers? You gave a number of reasons why people might want to do that, but as you admitted, the analysis omits opportunity costs.
Think about it this way: everything you write on LW will probably be read by at least 20 people, and many more for posts. Why should 20+ people spend the effort of deciphering your cryptic thoughts, when you could do it ahead of time or upon request but implicitly or explicitly decide not to? Just for practice? What about those who don’t think this particular occasion is the best one for such practice? Notice that this applies even when you are already perceived as a careful thinker. If you’re not, then they have even less reason to spend all that effort.
Not in general, no. It’s pretty context-sensitive. I think they should do so on Less Wrong where we should aim to have insanely exceptionally high standards of group epistemology. I do think that applies doubly for folk like me who have a decent chunk of karma and have spent a lot of time with a lot of very smart people, but I am not sure how many such people contribute to LW, so it’s probably not a worthwhile norm to promote. If LW was somewhat saner perhaps they would, though, so it’s unclear.
I am a significantly better rationalist than the LW average and I’m on the verge of leaving which says a whole bunch about my lack of ability to communicate, but also some non-negligible amount about LW’s ability to understand humans who don’t want to engage in the negative sum signalling game of kow-towing to largely-unreflected-upon local norms. (I’m kind of ranting here and maybe even trolling slightly, it’s very possible that my evaluations aren’t themselves stable under reflection. (But at least I can recognize that...))
Right, so your comment unfortunately assumes something incorrect about my psychology, i.e. that it is motivationally possible for me to make my contributions to LW clearer. I once put a passive-aggressive apology at the bottom of one of my comments; perhaps if I continue to contribute to LW I’ll clean it up and put it at the bottom of every comment.
Point being, this isn’t the should world, and I do not have the necessary energy (or writing skills) to pull an Eliezer and communicate across years’ worth of inferential distance. Other humans who could teach what I would teach are busy saving the world, as I try to be. That said, I’m 19 years old and am learning skills at a pretty fast rate. A few years from now I’ll definitely have a solid grasp of a lot of the technical knowledge that I currently only informally (if mildly skillfully despite that) know how to play with, and I will also have put a lot more effort into learning to write (or learning to bother to want to communicate effectively). If the rationalist community hasn’t entirely disintegrated by then, then perhaps I’ll be able to actually explain things for once. That’d be nice.
Back to the question: I consider signalling credibility to be an important skill. I also try to be principled. If I did have the necessary motivation I would probably just pull an Eliezer and painstakingly explain every little detail with its own 15 paragraph post. But there is also some chance that I would just say “I refuse to kow tow to people who are unwilling to put the necessary effort into understanding the subtleties of what I am trying to say, and I doubly refuse to kow tow to people who assume I am being irrational in completely obvious ways simply because I am saying something that sounds unreasonable without filling in all of the gaps for them”. But not if I’d spent a lot of time really hammering into my head that this isn’t the should world, or if I learned to truly empathize with the psychology of the kind of human that thinks that way, which is pretty much every human ever.
(Not having done these things might be the source of my inability to feel motivated to explain things. Despair at how everyone including LW is batshit insane and because of that everyone I love is going to die, maybe? And there’s nothing I can do to change that? That sounds vaguely plausible. Hard to motivate oneself in that kind of situation, hard to expect that anything can actually have a substantial impact. Generalized frustration. I just have to remember, this isn’t the should world, it is only delusion that would cause me to expect anything else but this, people do what they have incentive and affordance to do, there is no such thing as magical free will, I am surely contemptible in a thousand similar ways, I implicitly endorse a thousand negative sum games because I’ve implicitly chosen to not reflect on whether or not they’re justified, if anyone can be seen as evil then surely I can, because I actually do have the necessary knowledge to do better, if I am to optimize anyone I may as well start with myself… ad nauseum.)
There’s some counterfactual world where I could have written this comment so as to be in less violation of local norms of epistemology and communication, and it is expected of me that I acknowledge that a tradeoff has been made which keeps this world from looking like that slightly-more-optimized world, and feel sorry about that necessity, or something, so I do. I consequently apologize.
I don’t think it’s possible to understand what you are trying to say, even assuming there is indeed something to understand, you don’t give enough information to arrive at a clear interpretation. It’s not a matter of unwillingness. And the hypothesis that someone is insane (at least in one compartment) is more plausible than that they are systematically unable/unwilling to communicate clearly insights of unreachable depth, and so only leave cryptic remarks indistinguishable from those generated by the insane. (This remains a possibility, but needs evidence to become more than that. Hindsight or private knowledge don’t justify demanding prior beliefs that overly favor the truth.)
There are people who know me in person and thus share background knowledge with me, who are able to understand what I am saying. They are the thinkers I admire most and the people I care most about influencing. I have a hypothesis which may just be wrong that people who are particularly good thinkers would notice that I wasn’t just insane-in-a-relevant-way and be able to fill in the gaps that would let them understand what I am saying. I have this hypothesis because I think that I have that skill to a large extent, as I believe do others like Michael Vassar or Peter de Blanc or Steve Rayhawk or generally people who bother to train that skill.
I notice that some people who I think are good thinkers, such as yourself, seem to have a low overall estimate of the worthwhileness of my words. However I have accumulated a fair amount of evidence that you do not have the skill of reading (or choose not to exercise the skill of reading), that is, that you err on the side of calling bullshit when I know for certain that something is not bullshit, and rarely err in the opposite direction. If you had to choose a side to be biased towards then that would of course be the correct one, but it isn’t clear that such a choice is necessary to be a strong rationalist, as I think is evidenced by Steve Rayhawk, Peter de Blanc, and Michael Vassar (three major influences on my thinking, in descending order of influence.) Thus I do not consider your low estimate of my rationality to be overwhelming evidence that it is in fact impossible to understand what I am trying to say even without sharing much background knowledge with me. I suspect that e.g. Wei Dai has a lowish estimate of my rationality w.r.t. things he is interested in; my model of Wei Dai has him as less curious than you are about things that I yammer about, so my wild guess at his thoughts on the matter are particularly little evidence compared to your thoughts. I plan on getting more information about this in time.
If you mean the nature of superintelligence, I’m extremely curious about that, but I think the way you’re going about trying to find out is unlikely to lead to progress. To quote Eric Drexler, “most new ideas are wrong or inadequate.” The only way I can see how humans can make progress, when we’re running on such faulty hardware and software, is to be very careful, to subject our own ideas to constant self-scrutiny for possible errors, and to be as precise as possible in our communications, and to lay down all the steps of our reasoning, so others can understand what we mean and how exactly we arrived at our conclusions, so they can help find our errors for us.
Now sometimes one could have a flash of inspiration—an idea that might be true or an approach that seems worth pursing—but don’t know how to justify that intuition. It’s fine to try to communicate such potential insights, but this can’t be all that you do. Most of your time still has to be spent trying to figure out whether these seeming inspirations actually amount to anything, whether there are arguments that can back up your intuitions, and whether these arguments stand up to scrutiny. If you are not willing to put a substantial amount of effort into doing this yourself, then you shouldn’t be surprised that few others are willing to do it for you (i.e., take you seriously), especially when you do not even make a strong effort to use language that they can easily understand.
I would be interested to know if any of your intuitive leaps have lead any of those people to make any progress beyond “a new idea that’s almost certain to be wrong even if we’re not sure why” to “something that seems likely to be an improvement over the previous state of the art”. (It’s possible that you have a comparative advantage in making such leaps of intuition, even though a priori that seems unlikely.)
Do you have any examples? (This is unrelated to my points above. I’m just curious.)
(Warning, long comment; it stays mostly on track but is embarrassingly mostly self-centered.) I think I must have been being imprecise when I said you were “less curious” about the things I yammer about, and honestly I don’t remember what I was thinking at the time and won’t try to rationalize it. (I wasn’t on adderall then but am on adderall now; there may be state-dependent memory effects.) I thus unendorse at least that part of the grandparent.
I think that everything you’re saying is correct, and note the interesting similarities between my case and Nesov’s two years ago—except Nesov had actual formal technical understanding and results and still couldn’t easily communicate his insights, whereas my intuition is not at all grounded in formality. I definitely won’t be contributing to decision theory progress any time soon, and probably never will—I can get excited about certain philosophical themes or aesthetics and stamp things with my mark of intuitive approval, but there is very little value in that unless I’m for some reason in a situation where people with actual skills can bounce ideas off of me. (I am trying to set up that situation currently but I’m trying not to put too much weight on it.)
I am still very confused about how actual progress in decision theory-like fields works, though, insofar as the things I see on the mailing list, e.g. the discussion of Loebian blindspots, look like resolving side issues where the foundations are weak. I don’t see how getting the proof proof proofs right helps much, whereas I was very excited by Nesov’s focus on e.g. reversibility and semantics; much of this comes from being happy that Nesov has certain conceptual aesthetics which I endorse. You could perhaps characterize this as not understanding Slepnev’s style of research. I see your style as somewhere between Nesov’s and Slepnev’s. Perhaps research styles or methodology would make for a useful LW discussion post, or a decision theory list email? Or is my notion of “style” just off? I have never been involved in mathematical-esque research, nor have I read about how it works besides Polya’s How to Solve It and brief accounts of e.g. quantum mechanics research.
Anyway. Currently there is only one actual-decisions-relevant scenario where I see the sort of thinking I do being useful, and in that sense I sort of think of it as my scenario of comparative advantage. But unfortunately I’ve yet to talk to people who either have thought very deeply about very similar issues or have relevant technical knowledge, those people being Shulman and Nesov. The scenario I’m thinking of is where we have a non-provably-Friendly AI or a uFAI but there are other existential risks to worry about. (I think this scenario may be the default, though—it seems somewhat likely to me that AGI is within reach of this generation of humans, whereas it is unclear if something-like-provably Friendly AI is possible, or what value there is in somewhat-more-stable-than-hacked-together AI.) It would be useful to understand what sorts of attractors there are for a self-modifying AI to fall into for either its decision theory or utility function, what the implications of our decision to run a uFAI would be in terms of either causal or acausal game theory, and generally what the heck we’d be knowingly inflicting on the multiverse if we decided to hit the big red button.
These questions and questions like them lend themselves to thorough models and rely on precise technical knowledge but aren’t obviously questions that can be formalized. Such questions are in the grey area between the answerable-technical and the unanswerable-philosophical, with a focus on the nature of intelligence: precisely where Less Wrong-style rationality skills are most necessary and most useful. Likewise questions about “morality”, which are nestled between formal utility theory and decision theory on one side, highly qualitative “naturalistic meta-ethics” on another, and informal but technical and foundational questions about computation on a third under-explored side. Better understanding these questions has a low but non-negligible chance at affecting either singularity-focused game theory or the design choices guiding the development of FAI or somewhat-Friendly AI.
I think about things at about that level of technicality seeing as I have an automatic disposition to obsess about such questions and may or may not have a knack for doing so in a useful manner. My ability to excel at such thinking is hard to analyze; I think playing with models of complex systems, like multilevel selection, and seeing to what extent my intuitions are verified or not by the systems, would be one way to both check and train relevant intuitions. Another relevant field is probably psychology, where I have a few ideas which I think could be tested. Computational cognitive science is a relevant intuition-testing and intuition-building field and I’ve managed to nab myself a girlfriend who is going into it. Rayhawk wants to build a suite of games that train low-level probabilistic reasoning which I think would also help. He’s written up one very small one thus far and it would be excellent if Less Wrong could start a project to bring the idea to life. But that’s a story for another day.
I consider it somewhat likely that in 6 months I will look back and think myself an utter fool to expect to make any useful progress on thinking about such things. In the meantime I don’t expect LW folk to bother to try to understand my cryptic thoughts, especially not when everyone has so many of their own to worry about worrying about.
I think the intuitive leaps I’m most proud of are in just-maybe-sort-of-almost understanding some of Rayhawk’s ideas and maybe provoking him to develop them slightly further or recall them after a few months or years of rust. I don’t have a very good idea of how useful all of my philosophicalish conversation with him has been. His ideas are uniformly a lot better than mine. If for some reason I can convince both him and SingInst that he should be doing FAI work, then perhaps I’ll have a much better model of how useful my philosophical aesthetics are, or how useful they might be if I supplemented them with deep technical-formal knowledge-understanding. I currently model myself as being somewhat useful to bounce ideas off of but not yet a, ya know, real FAI researcher, not by a long shot. My aim is to become a truly useful research assistant in the next few years while realizing my apparent cognitive comparative advantage.
The combination of social awkwardness and non-trivial difficulty of tracking down examples makes me rather averse to doing so; on the other hand I think Nesov would probably like to see such examples and I have something of a moral obligation to substantiate the claim. The realistic model of my behavior says I won’t end up providing the examples. However the realistic model of my behavior says that in the future if I come across such examples I will PM Nesov. I think however that I’d rather not list such gripes in public; I feel like it sets a bad precedent or something. (Interestingly Yudkowsky is a celebrity and thus such moral qualms have never applied to him in my head. I do regret being harsher on Eliezer than was called for; it’s way too easy to forget he’s a person as well as a meme and meme-generator.)
Are you working on training yourself to understand graduate-level logic, set theory and category theory? That’s my current best guess at an actionable thing an aspiring FAI researcher should do, no matter what else is on your plate (and it’s been a stable conclusion for over a year).
Not yet, but very soon now. (The plan for category theory is to get proficient with Haskell and maybe kill two birds with one stone by playing with functional inductive programming (which uses category theory). I do not yet have plans for set theory or logic; I don’t really understand what they’re trying to do very well. Or like, my brain hasn’t categorized them as “cool”, whereas my brain has categorized category theory as “cool”, and I think that if I better understood what was cool about them then I’d have a better idea of where to start. I was sort of hoping I could somehow learn all my math in terms of categories, which is still technically a possibility I guess but not at all something I can do on my own.)
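To gesture at what I mean by “learning math in terms of categories”, here is a minimal, purely illustrative Haskell sketch; the Pair type and the printed values are invented for the example, and the functor laws in the comments are just the standard textbook ones, not something I’ve checked mechanically:

    -- A Functor is (roughly) a structure-preserving map; the laws below are
    -- exactly the category-theoretic ones, just not enforced by the compiler:
    --   fmap id      == id
    --   fmap (g . f) == fmap g . fmap f
    data Pair a = Pair a a deriving Show

    instance Functor Pair where
      fmap f (Pair x y) = Pair (f x) (f y)

    main :: IO ()
    main = do
      print (fmap (show . (+ 1)) (Pair 1 2))       -- Pair "2" "3"
      print ((fmap show . fmap (+ 1)) (Pair 1 2))  -- Pair "2" "3", same thing

Whether that counts as learning the math or just playing with the notation is exactly the thing I’m unsure about.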
I don’t recommend studying category theory at any depth before at least some logic, abstract algebra and topology. It can feel overly empty of substance without a wealth of examples already in place; it’s not called “abstract nonsense” for nothing. Follow my reading list if you don’t have any better ideas or background, and maybe ask someone else for advice. I don’t like some of this stuff either; I just study it because I must.
I’ve known many people who have tried to walk down this path and failed. The successful ones I know learned the foundations before the category theory.
This.
It’s quite possible, since originally, before retreating to this mode 1.5-2 years ago, I was suffering from mulling over external confusing ideas while failing to accumulate useful stuff among all that noise (the last idea on this list was Ludics; now most of the noise I have to deal with is what I generate myself, but I seem to be able to slowly distill useful things from that, and I’ve gotten into the habit of working on building up well-understood technical skills).
I guess I should allocate a new category for things that I won’t accept into my mind, as a matter of personal epistemic hygiene, but that I still won’t get too confident are nonsense. I would still disapprove of these things for not being useful to many people, or even for being damaging to people like me-3-years-ago.
You stopped obsessing about things like ludics? Game semantics-like-stuff sounded so promising as a perspective on timeless interaction. Are you building fine-tuned decision theoretic versions of similar ideas from scratch?
Game semantics etc. were part of a search that was answered by ADT (alternatively, by finally understanding UDT); they fail to answer this question in the sense that they explore explicit counterfactuals rather than explaining where counterfactuals come from.
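(To make “explicit counterfactuals” concrete: below is a toy sketch, entirely my own simplification and not ADT or UDT proper, in which the agent is simply handed an action-to-outcome map and maximizes over it. The names and payoffs are invented for the illustration; the point is that the origin of that map is exactly what goes unexplained.)

    import Data.List (maximumBy)
    import Data.Ord (comparing)

    data Action = OneBox | TwoBox deriving (Show, Eq, Enum, Bounded)

    -- The "explicit counterfactual" handed to the agent: assumed action -> payoff,
    -- here a Newcomb-like payoff table with the prediction already baked in.
    payoff :: Action -> Double
    payoff OneBox = 1000000
    payoff TwoBox = 1000

    -- Take an argmax over assumed actions; nothing here says where payoff comes from.
    decide :: (Action -> Double) -> Action
    decide u = maximumBy (comparing u) [minBound .. maxBound]

    main :: IO ()
    main = print (decide payoff)   -- OneBox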
After that, I tried building on ADT and didn’t get very far, then tried figuring out the epistemic role of observations (which UDT/ADT deny), and I think I was successful. The answer is a kind of “universal” platonism in which physical facts are seen as non-special, logical theories act as machines for perceiving abstract facts normally external to themselves, and processes relate facts along their way, which generalizes to ways of knowing physical facts, as in causality; this ontological stance seems very robust and describes all sorts of situations satisfactorily. It still needs better toy models as examples, or a better-defined connection to standard math, which I’m currently trying to find.
One of the ways we do this is by telling people when they are writing things that are batshit insane. Because you were. It wasn’t deep. It was obfuscated, scattered, and generally poor-quality thought. You may happen to be personally awesome. Your recent comments, however, sucked. Not “were truly enlightened but the readers were not able to appreciate it”. They just sucked.
Sorry, which comments sucked? The majority of my recent comments have been upvoted, and very few were particularly obfuscated. I had one post that was largely intended to troll people and another comment that was intended to be for the lulz and which I obviously don’t think people should be mining for gold. (Which is why I said many times in the comment that it was poor quality syncretism and also bolded that it was just for fun.)
(Tangential: Is “batshit insane” Nesov’s vocabulary? It’s been mine for a while.)
(Sorry for that; I usually need some time to debug a thought into a form I actually endorse. Don’t believe everything I say in real time, since I disagree with some of it too; wait a day to make sure. The comment was fixed before I read this echo.)
(The phrase was Will’s, which you adopted in your reply and I in turn used in mine. Origins traced.)
Interesting. So I was primed, generated the same phrase without realizing it came from the priming; the phrase was sufficiently unfamiliar that I did a Google search to see its connotations more accurately, used and posted it anyway, and then recognized that it didn’t paint an adequate picture. The process of debugging the details is such a bore, but it’s the only way that works.
Fascinating. Now I have to look up the phrase to see what the precise meaning of “batshit insane” is too, just in case I am using it wrong. :)
The ones referred to by Wei_Dai in the comment you were refuting/dismissing.
Yes, reading your comment in more detail I found that you had used it yourself, so I removed the disclaimer. I didn’t want to introduce the term without it being clear to observers that I was just adopting the style from the context.
Can you please explain a bit more what the point is? I’m having trouble figuring out why I would want to try to understand something, if not to get “okay” ideas.
There are many, but unfortunately I only have enough motivation to list a few:
If talking to someone with strange beliefs in person, legitimately trying to engage with their ideas is an easy way to signal all kinds of positive things. (Maturity, charity, epistemic seriousness, openness to new experiences or ideas, and things like that, as opposed to common alternatives like abrasiveness, superficiality, pedantry, and the like.)
Reading things by smart folk who believe things that at least initially appear to be obviously false is a way to understand how exactly humans tend to fail at epistemic reasoning. For example, when I read Surprised by Joy by C. S. Lewis—not to learn about his religion, but to read about sehnsucht, something I often experience—it was very revealing how he described his conversion from unreflective atheism to idealist monadology-esque-ness/deism-ness to theism to Christianity. He did some basically sound metaphysical reasoning—though of course not the kind that constrains anticipations—which led him all the way to nigh-deism: ‘We are all part of a unified universe, our responsibility is to experience as much of the universe as possible so it can understand itself’, or something like that. All of a sudden he’s thinking ‘Well, I already believe in this vague abstract force thingy, and the philosophers who talk about that are obviously getting their memes from earlier philosophers who said the same thing about God, and this force thingy is kinda like God in some ways, so I might as well consider myself a theist.’ Then, in an off-the-cuff conversation with an atheist friend and scholar, he learns that Jesus Christ probably actually existed, and then he gets very vague and talks about how he suddenly doesn’t remember much, and oh yeah, all of a sudden he’s on his way to the zoo and realizes he’s a Christian. It’s not really clear what this entails in terms of anticipations, though he might’ve talked about his argument from sehnsucht for the existence of heaven. Anyway, it’s clear from what he wrote that he just felt uncomfortable and somewhere along the line stopped caring as much about reasons, and started just, ya know, going with what seemed to be the trend of his philosophical speculations, which, might I remind you, never paid rent in anticipated experience up until that very last, very vague step. I found it a memorable cautionary tale, reading the guy’s own words about his fall into the entropy of insanity. Whether or not Christianity is correct, whatever that means, it is clear that he had stopped caring about reasons, and it is clear that this was natural and easy and non-extraordinary. As someone who does a fair bit of metaphysical reasoning that doesn’t quite pay rent in anticipated experience, or doesn’t pay very much rent anyway, I think it is good to have Lewis’s example in mind.
Building the skill of actually paying attention to what people actually say. This is perhaps the most important benefit. Less Wrong folk are much better at this than most people, and this skill by itself goes a long, long way. The default for humans is of course to figure out which side the other person is arguing for and then either spout a plausibly-related counterargument for your chosen side if it is the opposite, or nod in agreement or the like if they’re on your team. Even though aspiring rationalists do this much less than most humans, it still appears to be par for the course among them. (But there may be some personal selection bias ‘cuz people pattern-match what I (Will_Newsome) say to some other stupid thing and address the stupid generator of that stupid thing while bypassing whatever I actually said, either because I am bad at communication or because I’ve been justifiably classified as a person who is a priori likely to be stupid.) It is worth noting that sometimes this is a well-intentioned strategy to help resolve others’ confusions by jumping immediately to suggesting fixes for the confusion-generator, but most often it’s the result of sloppy reading. Anyway, by looking carefully at what smart people say that disagrees with what you believe or value, you train yourself to generally not throw away possibly countervailing evidence. It may be that what was written was complete tosh, but you won’t know unless you actually check from time to time, and even if it’s all tosh it’s still excellent training material.
Practice learning new concepts and languages. This is a minor benefit as generally it would be best to learn a directly useful new conceptual language, e.g. category theory.
Cultural sophistication, being able to signal cultural sophistication. Though this can easily implicitly endorse negative sum signalling games and I personally don’t see it as a good reason if done for signalling. That said, human culture is rich and complex, and I personally am afraid of being held in contempt as unsophisticated by someone like Douglas Hofstadter for not having read enough Dostoyevsky or listened to enough Chopin, so I read Dostoyevsky and listen to Chopin (and generally try to be perfect, whatever that means). Truly understanding spirituality and to a lesser extent religion is basically a large part of understanding humans and human culture. Though this is best done experientially, just like reading and listening to music, it really helps, especially for nerds, to have a decent theoretical understanding of what spiritualists and religionists might or might not be actually talking about.
Related to the above, a whole bunch of people assert that various seemingly-absurd ideas are incredibly important for some reason. I find this an object of intrinsic curiosity and perhaps others would too. In order to learn more it is really quite important to figure out what those various seemingly-absurd ideas actually are.
I could probably go on for a while. I would estimate that I missed one or two big reasons, five mildly persuasive reasons, and a whole bunch of ‘considerations’. Opportunity costs are of course not taken into account in this analysis.
Let me rephrase my question. You decided, on this particular occasion, taking into account opportunity costs, that it was worth trying to understand somebody, for a reason other than to get “okay” ideas. What was that reason?
You mean my original “let’s talk about Jesus!” comment? I think I bolded the answer in my original comment: having fun. (If I’d known LW was going to interpret what I wrote as somehow representative of my beliefs then I wouldn’t have written it. But I figured it’d just get downvoted to −5 with little controversy, like most of my previous similar posts were.)
Why is it fun? (That is, can you take a guess at why your brain’s decided it should be fun? This way of posing the question was also the primary intended meaning for my assertion about countersignaling, although it assumed more introspective access. You gave what looked like an excuse/justification on how in addition to being fun it’s also an exercise of a valuable skill, which is a sign of not knowing why you really do stuff.)
Bleh, I think there may be too much equivocation going on, even though your comment is basically correct. My original “insane” comment is not representative of my comments, nor is it a good example of the skill of charitable interpretation.
When I give justifications they do tend to be pretty closely related to the causes of my actions, though often in weird double-negative ways. Sometimes I do something, in a self-defeating manner, because I am afraid of the consequences of not doing it. I think a lot of my trying to appear discreditable is a defense mechanism put up because I am afraid of what would happen if I let myself flinch away from the prospect of appearing discreditable: afraid of the typical default failure mode where people get an identity as someone who is “reasonable” and then stop signalling and thus stop thinking thoughts that are “unreasonable”, where “reason” is only a very loose correlate of sanity. My favorite LW article ever is “Cached Selves”, and that has been true for two years now. Also, one of my closest friends co-wrote that article, and his thinking has had a huge effect on mine.
I think saying it was “fun” is actually the rationalization, and I knew it was a rationalization, and so I was lying. It’s a lot more complex than that. I wrote it more because I was feeling frustrated at what I perceived to be an unjustified level of contempt in the Less Wrong community. (/does more reflection to make sure I’m not making things up.) Okay. Also, relatedly, part of it was wanting to signal insanity for the reasons outlined above, or reasons similar to the ones outlined above in the sense of being afraid of some consequence of not doing something that I feel is principled, or something that I feel would make me a bad person if I didn’t attempt to do. Part of it was wanting to signal something like cleverness, which is maybe where some of the “fun” happens to be, though I can only have so much fun when I’m forced to type very quickly. Part of it was trolling for its own sake on top of the aforementioned anti-anti-virtuous rationale, though the motivation for “trolling for its own sake” might come from the same place as that anti-anti-virtuous rationale while stemming from a more fundamental principle. I would be suspicious if any of these reasons claimed to be the real reason. Actions tend to follow from many reasons in conjunction. (/avoids going off on a tangent about the principle of sufficient reason and Leibniz’s theodicy, for irony’s sake.)
It’s interesting because others seem to be much more attached to certain kinds of language than I am, and so when they model me they model me as being unhealthily attached to the language of religion or spirituality or something for its own sake, and they think that this is dangerous. I think this may be at least partially typical-mind fallacy. I am interested in these languages because I like trolling people (and I like trolling people for many reasons, as outlined above), but I personally much prefer the language of algorithmic probability and computationalism generally, which can actually be used to talk precisely about well-defined things. I only talk in terms of theism when I’m upset at people for being contemptuous of theism. Again, there are many reasons for these things, often at different levels of abstraction, and it’s all mashed together.
I’m still not clear on what makes it unjustified.
Okay.