If you want to discuss or debate an issue to resolution/conclusion with me, explicitly ask for that. I’m open, by request, to putting major effort into resolving disagreements.
curi
[Question] Why would code/English or low-abstraction/high-abstraction simplicity or brevity correspond?
Anecdote time: after a long discussion about the existence of any form of induction, on a CR forum, someone eventually popped up who had asked KRP the very question, after bumping into him at a conference many years ago, and his reply was that it existed, but wasn’t suitable for science.
Source?
What does anyone else think? I am very familiar with popular CR since I used to hang out in the same forums as Curi. I’ve also read some of the great man’s works.
Which forums? Under what name?
Li and Vitanyi write:
Can a thing be simple under one definition of simplicity and not simple under another? The contemporary philosopher Karl R. Popper (1902–1994) has said that Occam’s razor is without sense, since there is no objective criterion for simplicity. Popper states that every such proposed criterion will necessarily be biased and subjective.
There’s no citation. There’s one Popper book in the references section, LScD, but it doesn’t contain the string “occam” (case-insensitive search).
I also searched a whole folder of many Popper books and found nothing mentioning Occam (except it’s mentioned by other people, not Popper, in the Schilpp volumes).
If Popper actually said something about Occam’s razor, I’d like to read it. Any idea what’s going on? This seems like a scholarship problem from Li and Vitanyi. They also dismiss Popper’s solution to the problem of induction as unsatisfactory, with no explanation, argument, cite, etc.
Which section of the 850-page book contains a clear explanation of this? On initial review, they seem to talk about hypotheses for hundreds of pages without trying to define them, or explain what sorts of things do and do not qualify, or how Solomonoff hypotheses do and do not match the common-sense meaning of a hypothesis.
Thanks. So “There are no black swans.” is not a valid Solomonoff hypothesis? A hypothesis can’t exclude things, only make positive predictions?
Is a hypothesis allowed to make partial predictions? E.g. predict some pixels or frames and leave others unspecified. If so, then you could “and” together two partial hypotheses and run into a similar math consistency problem, right? But the way you said it, it sounds like a valid hypothesis may be required to predict absolutely everything, which would prevent conjoining two hypotheses, since they’re already both complete and nothing more could be added.
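(For concreteness, here’s a minimal sketch of the kind of consistency constraint I have in mind, using standard textbook definitions rather than anything specific to your answer: any assignment of probabilities has to obey the conjunction rule, while a Solomonoff-style prior weights each hypothesis, treated as a program, by its length.)

```latex
% Conjunction rule that any consistent probability assignment must satisfy:
\[ P(H_1 \wedge H_2) \le \min\{P(H_1),\, P(H_2)\} \]

% Solomonoff-style prior: each hypothesis is a program p, weighted by its
% length \ell(p) in bits (normalization constant omitted in this sketch):
\[ \Pr(p) \propto 2^{-\ell(p)} \]
```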
Mathematical Inconsistency in Solomonoff Induction?
I have never sock puppeted at LW and I have never been banned at the LW website. You’re just wrong and smearing me.
Please leave me alone.
We’re discussing social dynamics and rational conversations at http://curi.us/2363-discussion-with-gigahurt-from-less-wrong
past misbehaviors with sock puppets
What sock puppets?
A place to start is considering what problems we’re trying to solve.
Epistemology has problems like:
What is knowledge? How can new knowledge be created? What is an error? How can errors be corrected? How can disagreements between ideas be resolved? How do we learn? How can we use knowledge when making decisions? What should we do about incomplete information? Can we achieve infallible certainty (how?)? What is intelligence? How can observation be connected to thinking? Are all (good) ideas connected to observation or just some?
Are those the sorts of problems you’re trying to solve when you talk about Solomonoff induction? If so, what’s the best literature you know of that outlines (gives high level explanations rather than a bunch of details) how Solomonoff induction plus some other stuff (it should specify what stuff) solves those problems? (And says which remain currently unsolved problems?)
(My questions are open to anyone else, too.)
Hi, Deutsch was my mentor. I run the discussion forums where we’ve been continuously open to debate and questions since before LW existed. I’m also familiar with Solomonoff induction, Bayes, RAZ and HPMOR. Despite several attempts, I’ve been broadly unable to get (useful, clear) answers from the LW crowd about our questions and criticisms related to induction. But I remain interested in trying to resolve these disagreements and to sort out epistemological issues.
Are you interested in extended discussion about this, with a goal of reaching some conclusions about CR/LW differences, or do you know anyone who is? And if you’re interested, have you read FoR and BoI?
I’ll begin with one comment now:
I am getting the sense that critrats frequently engage in a terrible Strong Opinionatedness where they let themselves wholely believe probably wrong theories
~All open, public groups have lots of low-quality self-proclaimed members. You may be right about some critrats you’ve talked with or read.
But that is not a CR position. CR says we only ever believe theories tentatively. We always know they may be wrong and that we may need to reconsider. We can’t 100% count on ideas. Wholely believing things is not a part of CR.
If by “wholely” you mean with a 100% probability, that is also not a CR position, since CR doesn’t assign probabilities of truth to beliefs. If you insist on a probability, a CRist might say “0% or infinitesimal” (Popper made some comments similar to that) for all his beliefs, never 100%, while reiterating that probability applies to physical events so the question is misconceived.
Sometimes we act, judge, decide or (tentatively) conclude. When we do this, we have to choose something and not some other things. E.g. it may have been a close call between getting sushi and getting pizza, but then I chose only pizza and no sushi, not 51% pizza and 49% sushi. (Sometimes meta/mixed/compromise views are appropriate, which combine elements of rival views. E.g. I could go to a food court and get 2 slices of pizza and 2 maki rolls. But then I’m acting 100% on that plan and not following either original plan. So I’m still picking a single plan to wholely act on.)
More discussion of this post is available at https://curi.us/2366-analyzing-blackmail-being-illegal#comments
many motives … most commonly to get money
If I threaten to do X unless you pay me, then the motive for making that threat is getting money. However, I don’t get money for doing X. There are separate things involved (threat and action) with different motives.
I wrote a reply at https://www.lesswrong.com/posts/5ffPhqaLdrSajFe37/analyzing-blackmail-being-illegal-hanson-and-mowshowitz
I read only the initial overview at the top, did my own analysis, then read the rest to see if it’d change my mind.
Here are summaries of IMO the two most notable ideas from my analysis:
- Compare blackmail to this scenario: My neighbor is having a party this weekend. I threaten to play loud music (at whatever the max loudness is that’s normally within my rights) to disrupt it unless he pays me $100. Compare to: I often play loud music and my neighbor comes and offers me $100 to be quiet all weekend. In one, I’m threatening to do something for the express purpose of harming someone, not to pursue my own values. In the other, I just enjoy music as part of my life. I think blackmail compares to the first scenario, but not the second.
- We (should) prohibit initiation of force as a means to an end. The real underlying thing is enabling people to pursue their values in their life and resolve conflicts. If blackmail doesn’t initiate force, that doesn’t automatically make it OK, b/c non-initiation of force isn’t the primary.
Analyzing Blackmail Being Illegal (Hanson and Mowshowitz related)
I read the older, now-renamed book that I linked. The newer one has different authors. I saw the newer one when searching, and I confirmed the author of the one I read by searching old emails.
Do the PUAs really have a good model of an average human, or just a good model of a drunk woman who came to a nightclub wanting to get laid?
PUAs have evidence of efficacy. The best is hidden camera footage. The best footage that I’m aware of, in terms of confidence the girls aren’t actors, is Mystery’s VH1 show and the Cajun on Keys to the VIP. I believe RSD doesn’t use actors either and they have a lot of footage. I know some others have been caught faking footage.
My trusted friend bootcamped with Mystery and provided me with eyewitness accounts similar to various video footage. My friend also learned and used PUA successfully, experienced it working for him in varied situations … and avoids talking about PUA in public. He also observed other high profile PUAs in action IRL.
Some PUAs do daygame and other venues, not just nightclubs/parties. They have found the same general social principles apply, but adjustments are needed like lower energy approaches. Mystery, who learned nightclub style PUA initially, taught daygame on at least one episode of his TV show and his students quickly had some success.
PUAs have also demonstrated they’re effective at dealing with males. They can approach mixed-gender sets and befriend or tool the males. They’ve also shown effectiveness at befriending females who aren’t their target. Also, standard PUA training advice is to approach 100 people on the street and talk with them. Learning how to have small-talk conversations with anyone helps people be better PUAs, and people who get good at PUA also become more successful at those street conversations than they used to be.
I think these PUA Field Reports are mostly real stories, not lies. Narrator bias/misunderstandings and minor exaggerations are common. I think they’re overall more reliable than posts on r/relationships or r/AmITheAsshole, which I think also do provide useful evidence about what the world is like.
There are also notable points of convergence, e.g. Feynman told a story (“You Just Ask Them?” in Surely You’re Joking) in which he got some PUA type advice and found it immediately effective (after his previous failures), both in a bar setting and later with a “nice” girl in another setting.
everyone lives in a bubble
I generally agree but I also think there are some major areas of overlap between different subcultures. I think some principles apply pretty broadly, e.g. LoLE applies in the business world, in academia, in high school popularity contests, and for macho posturing like in the Top Gun movie. My beliefs about this use lots of evidence from varied sources (you can observe people doing social dynamics ~everywhere) but also do use significant interpretation and analysis of that evidence. There are also patterns in the conclusions I’ve observed other people reach, e.g. how their conclusions about PUA correlate with my opinion on whether they’re a high-quality thinker (which I judged on other topics first). I know someone with different philosophical views could reach different conclusions from the same data set. My basic answer to that is that I study rationality, I write about my ideas, and I’m publicly open to debate. If anyone knows a better method for getting accurate beliefs, please tell me. I would also be happy to pay for useful critical feedback if I knew any good way to arrange it.
Business is a good source of separate evidence about social dynamics because there are a bunch of books and other materials about the social dynamics of negotiating raises, hiring interviews, promotions, office politics, leadership, managing others, being a boss, sales, marketing, advertising, changing organizations from the bottom up (passing on ideas to your boss, boss’s boss and even the CEO), etc. I’ve read a fair amount of that stuff but it’s not my main field (which is epistemology/rationality).
There are also non-PUA/MGTOW/etc relationship books with major convergence with PUA, e.g. The Passion Paradox (which has apparently been renamed The Passion Trap). I understand that to be a mainstream book:
About the Author: Dr. Dean C. Delis is a clinical psychologist, Professor of Psychiatry at the University of California, San Diego, School of Medicine, and a staff psychologist at the San Diego V.A. Medical Center. He has more than 100 professional publications and has served on the editorial boards of several scientific journals. He is a diplomate of the American Board of Professional Psychology and American Board of Clinical Neuropsychology.
The main idea of the book is similar to LoLE. Quoting my notes from 2005 (I think this is before I was familiar with PUA): “The main idea of the passion paradox is that the person who wants the relationship less is in control and secure, and therefore cares about the relationship less, while the one who wants it more is more needy and insecure. And that being in these roles can make people act worse, thus reinforcing the problems.” I was not convinced by this at the time and also wrote: “I think passion paradox dynamics could happen sometimes, but that they need not, and that trying to analyse all relationships that way will often be misleading.” Now I have a much more AWALT view.
The entire community is selecting for people who have some kinds of problems with social interaction
I agree the PUA community is self-selected to mostly be non-naturals, especially the instructors, though there are a few exceptions. In other words, they do tend to attract nerdy types who have to explicitly learn about social rules.
Sometimes I even wonder whether I overestimate how much the grass is greener on the other side.
My considered opinion is that it’s not, and that blue pillers are broadly unhappy (to be fair, so are red pillers). I don’t think being good at social dynamics (via study or “naturally” (aka via early childhood study)) makes people happy. I think doing social dynamics effectively clashes with rationality and being less rational has all sorts of downstream negative consequences. (Some social dynamics is OK to do, I’m not advocating zero, but I think it’s pretty limited.)
I don’t think high status correlates well with happiness. That goes both for ultra-high status like celebs, which causes various problems, and for high status that doesn’t get you so much public attention.
I think rationality correlates with happiness better. I would expect to be wrong about that if I were wrong about which self-identified rational people are not actually rational (I try to spot fakers and bad thinking).
I think the people with the best chance to be happy are content and secure with their social status. In other words, they aren’t actively trying to climb higher socially and they don’t have to put much effort into maintaining their current social status. The point is that they aren’t putting much effort into social dynamics and focus most of their energy on other stuff.
I am intellectually aware of the taboo against the “PUA/MRA/etc” cluster.
I too am intellectually aware of that but don’t intuitively feel it. I also refuse to care and have publicly associated my real name with lower status stuff than PUA. I have gotten repeated feedback (sometimes quite strongly worded) about how my PUA ideas alienate people, including from a few long-time fans, but I haven’t stopped talking about it.
[Edit for clarity: I mostly mean hostile feedback from alienated people, not feedback from people worrying I’ll alienate others.]
I would like to learn from people who are guided neither by social taboos nor by edginess. And I am not sure if I could contribute much beyond an occasional sanity check.
I’d be happy to have you at my discussion forums. My community started in 1994, (not entirely) coincidentally the same year as alt.seduction.fast. The community is fairly oriented around the work of David Deutsch (the previous community leader and my mentor) and myself, as well as other thinkers that Deutsch or I like. A broad variety of topics are welcome (~anything that rationality can be applied to).
People also reject ideas before they’ve been explored in depth. I’ve tried to discuss similar issues with LW before, but the basic response was roughly “we like chaos where no one pays attention to whether an argument has ever been answered by anyone; we all just do our own thing with no attempt at comprehensiveness or organizing who does what; having organized leadership of any sort, or anyone who is responsible for anything, would be irrational” (plus some suggestions that I’m low social status and that therefore I personally deserve to be ignored. There were also suggestions – phrased rather differently but amounting to this – that LW will listen more if published ideas are rewritten, not to improve on any flaws, but so that the new versions can be published at LW before anywhere else, because the LW community’s attention allocation is highly biased towards that).