The usual rule is to identify as an “aspiring rationalist”; identifying rationality as what you are can lead to believing you’re less prone to bias than you really are, while identifying it as what you aspire to reminds you to maintain constant vigilance.
That is mostly true; you’ve discovered the fallacy of most humans: in most cases they identify not for rationality’s sake but for their own comfort. Because they are not honest, they wear the facade of Rationality to rationalize their behavior, even though they are not rational at all, nor do they care to be.
Stupidity can be categorized as writing useless posts on Facebook (Yudkowsky’s example), not being vegan, smoking, etc.
Connecting with the Way emotionally will allow you to scrutinize and redevelop your belief system. It’s an observation our species has made but not reviewed.
Okay, I finished reading the book, and then I also looked at the wiki. So...
A few years ago I suspected that the biggest danger for the rationalist movement could be its own success. I mean, as long as no one gives a fuck about rationality, the few nerds are able to meet somewhere in a corner of the internet, debate their hobby, and try to improve themselves if they so desire. But if somehow the word “rationality” becomes popular, all crackpots and scammers will notice it, and will start producing their own versions—and since they won’t care about actual rationality, they will have more degrees of freedom, so they will probably produce more attractive versions. Well, Gleb Tsipursky is already halfway there, and this Athene guy seems to be fully there… except that instead of “rationality”, his applause light is “logic”. Same difference.
Instead of nitpicking a hundred small details, I’ll try to get right to what I perceive as the fundamental difference between LW and “logic nation”:
According to LW, rationality is hard. It’s hard, because our monkey brains were never designed by evolution to be rational in the first place. Just to use tools and win tribal politics. That’s what we are good at. The path to rationality is full of a thousand biases, and often requires going against your own instincts. This is why most people fail. This is why most smart people fail. This is why even most of the smartest ones fail. Humans are predictably irrational, their brains have systematic biases, and even smart people believe stupid things for predictable reasons. Korzybski called it “map and territory”, other people call it “magical thinking”, here at LW we talk about “mysterious answers to mysterious questions”—this all points in approximately the same direction: that human brains have a predictable tendency to just believe some stupid shit, because from inside it seems perfectly real, actually even better than the real thing. And smarter people just do it in more sophisticated ways. So you have to really work hard, study hard, and even then you have a tiny chance of being fully sane; but without current research and hard work, your chances are zero for all practical purposes.
“Logic nation” has exactly the opposite approach. There is this “one weird trick”, where you spend a few hours or weeks doing a mental exercise that will associate your positive emotions with “logic”, and… voilà… you have achieved a quantum leap, and from now on all you have to do is keep this emotional state, and everything will be alright. Your faith in logic will save you. And the first thing you have to do, of course, is to call your friends and tell them about this wonderful new thing, so they also get the chance to “click”. As long as you keep worshiping “logic”, everything will be okay. Mother Logic loves you, Mother Logic cares for you, Mother Logic will protect you, Mother Logic created this universe for you… and when you fully understand your true nature, you will see that actually Mother Logic is you. (I am using my own words here, but this is exactly how what I have seen so far comes across to me.)
Well, to me this smells like exactly the kind of predictable irrationality humans habitually do. Take something your group accepts as high-status and start worshiping it. Imagine that all your problems will magically disappear if you just keep believing hard. Dissolve yourself in some nebulous concept. How is this different from what the average New Age hippie believes? Oh yes, your goddess is called Logic, not Gaia. I rest my case.
I know that the topic of AI is too removed from our everyday lives, and most people’s opinion on this topic will have absolutely no consequence on anything, but even look there: Athene just waves his hand and says it will all be magically okay, because an AI smarter than us will of course automatically invent morality. (Another piece of predictable human irrationality, called “anthropomorphization”. Yeah, the AI will be just another human, just like the god of rain is just another human. What else could there be but a human?)
Speaking of instrumental rationality, the book you linked provides a lot of good practical advice. I was impressed. I admit I didn’t expect to see this level of sanity outside LessWrong. Some parts of the book could be converted into 5 or 10 really good posts on LW. I mean it as a compliment. But ultimately, that seems to be all there is, and the rest is just a huge hype about it. (Recently LW is kind of dying, so to get an idea of what really high-quality content looks like, see e.g. the articles written by lukeprog.) But speaking of epistemic rationality, the “logic nation” is far below the LW level. It’s all just hand-waving. And salesmanship.
Also, I dislike how Athene provides scientific citations for very specific claims, but when he describes a whole concept, he doesn’t bother hinting that the concept was already invented by someone else. For example, on the wiki there is his bastardized version of Tegmark Multiverse + Solomonoff Induction, but it is written as something he just made up, using “logic”. You see, science is only useful for providing footnotes for his book. Science supports Athene, not the other way round.
Eliezer, for all his character flaws, may perhaps describe himself as the smartest being in the universe (I am exaggerating here (but not too much)), but then he still tells you about Kahneman and Solomonoff and Jaynes and others, and would encourage you to go and read their books.
Etc. The summary is that Athene provides a decent checklist of instrumental rationality in his book, but everything else is just hype. And his target audience is the people who believe in “one weird trick”.
Try reading the Sequences and maybe you will see what I was trying to describe here. That is a book that often moves people to a higher level of clarity of thinking, where the things that seemed awesome previously just become “oh, now I see how this is just another instance of this cognitive error”. I believe what Athene is doing is built on such errors; but you need to recognize them as errors first. Again, I am not saying he is completely wrong; he has useful things to provide. (I haven’t listened to his podcasts yet; if they expand on the material from the book, that could be valuable. Although I strongly prefer written texts.) It’s just that there is so much hype about something that was already done better. So obviously people on this website are not going to be very impressed. But it may be incredibly impressive to someone not familiar with the rationalist community.
Okay, I finished reading the book, and then I also looked at the wiki. So...
If you are aware of mathematics, what do you think about this part: https://logicnation.org/wiki/A_simple_click#Did_God_create_logic.3F Is it falsifiable? There was an interesting talk about how something can arise out of nothing and how it relates to the present moment, which one can’t ever grasp, but I will have to condense it for you guys later.
A few years ago I suspected that the biggest danger for the rationalist movement could be its own success. I mean, as long as no one gives a fuck about rationality, the few nerds are able to meet somewhere in a corner of the internet, debate their hobby, and try to improve themselves if they so desire. But if somehow the word “rationality” becomes popular, all crackpots and scammers will notice it, and will start producing their own versions—and since they won’t care about actual rationality, they will have more degrees of freedom, so they will probably produce more attractive versions. Well, Gleb Tsipursky is already halfway there, and this Athene guy seems to be fully there… except that instead of “rationality”, his applause light is “logic”. Same difference.
Instead of nitpicking a hundred small details, I’ll try to get right to what I perceive as the fundamental difference between LW and “logic nation”:
I agree.
According to LW, rationality is hard.
It’s hard, because our monkey brains were never designed by evolution to be rational in the first place. Just to use tools and win tribal politics. That’s what we are good at.
That’s false; if it weren’t for evolution, you wouldn’t have the ability to be rational in the first place.
The path to rationality is full of a thousand biases, and often requires going against your own instincts. This is why most people fail. This is why most smart people fail. This is why even most of the smartest ones fail. Humans are predictably irrational, their brains have systematic biases, and even smart people believe stupid things for predictable reasons. Korzybski called it “map and territory”, other people call it “magical thinking”, here at LW we talk about “mysterious answers to mysterious questions”—this all points in approximately the same direction: that human brains have a predictable tendency to just believe some stupid shit, because from inside it seems perfectly real, actually even better than the real thing. And smarter people just do it in more sophisticated ways. So you have to really work hard, study hard, and even then you have a tiny chance of being fully sane; but without current research and hard work, your chances are zero for all practical purposes.
Again, I agree.
“Logic nation” has exactly the opposite approach. There is this “one weird trick”, where you spend a few hours or weeks doing a mental exercise that will associate your positive emotions with “logic”, and… voilà… you have achieved a quantum leap, and from now on all you have to do is keep this emotional state, and everything will be alright. Your faith in logic will save you.
Yes, the one weird trick has been observed to ‘work’ in different cases, although we don’t know more than that right now. But someone with an understanding of neuroscience and the psychology-physiology connection, and who can search the academic literature, would be able to connect the dots, I think.
‘Logic nation’ has nothing to do with this type of rationality, though; it was a mistake to deliberately say it had, or to use the word.
And the first thing you have to do, of course, is to call your friends and tell them about this wonderful new thing, so they also get the chance to “click”.
No, you don’t want to do that; it’s likely people won’t care or understand, or will immediately cry “cult”. It is also, with a large likelihood, an inefficient use of time. There are people who want to click, so it’s probably better to push resources there. If Yudkowsky clicked, he would probably not call someone up; instead he would write an article “to rule them all”, and all of you would finally get it.
But this is speculation, or me giving information which can be taken into account after you click. Because you think differently, you will probably take some time to restructure your beliefs as the “first thing”.
As long as you keep worshiping “logic”, everything will be okay. Mother Logic loves you, Mother Logic cares for you, Mother Logic will protect you, Mother Logic created this universe for you… and when you fully understand your true nature, you will see that actually Mother Logic is you. (I am using my own words here, but this is exactly how what I have seen so far comes across to me.)
Not completely accurate; it only applies if you cannot fix something within an adequate amount of time. If you, for example, scratched your leg, you can accept the pain, and thus the suffering goes away instantly. Doing logical things and figuring out God (Spinoza) through science could be seen as prayer, maybe neuro or something else too. But it’s speculation after all, since it differs for everyone based on everything, their current knowledge (see the example of Yudkowsky) and so on.
Well, to me this smells like exactly the kind of predictable irrationality humans habitually do. Take something your group accepts as high-status and start worshiping it. Imagine that all your problems will magically disappear if you just keep believing hard. Dissolve yourself in some nebulous concept. How is this different from what the average New Age hippie believes? Oh yes, your goddess is called Logic, not Gaia. I rest my case.
Sure, rationality-as-LW-and-all-the-literature-puts-it. You’re better off asking: how is this different from what I believe? Your god, instead, might be ‘comfort’. ‘Identity’ might be prevalent in rationality communities. When you realize this, doing the 4 steps, emotionally, you’re on the path to mastering the Way.
I know that the topic of AI is too removed from our everyday lives, and most people’s opinion on this topic will have absolutely no consequence on anything, but even look there: Athene just waves his hand and says it will all be magically okay, because an AI smarter than us will of course automatically invent morality. (Another piece of predictable human irrationality, called “anthropomorphization”. Yeah, the AI will be just another human, just like the god of rain is just another human. What else could there be but a human?)
Sure, I agree; it requires more understanding of the topic, and he is lacking quite a bit. Has someone made the argument: what if humans trying to intervene with AGI were the cause of the species’ destruction? Hard-coding values might be contradictory, for example. It might value ‘logic’ automatically.
Speaking of instrumental rationality, the book you linked provides a lot of good practical advice. I was impressed. I admit I didn’t expect to see this level of sanity outside LessWrong. Some parts of the book could be converted into 5 or 10 really good posts on LW. I mean it as a compliment. But ultimately, that seems to be all there is, and the rest is just a huge hype about it. (Recently LW is kind of dying, so to get an idea of what really high-quality content looks like […] he still tells you about Kahneman and Solomonoff and Jaynes and others, and would encourage you to go and read their books.
That’s good. I wonder how much he has read on LW or rationality. It might be the case that he didn’t bastardize Tegmark + Solomonoff but just made it all up himself. But he knows about rationality.org, EA & LW.
Etc. The summary is that Athene provides a decent checklist of instrumental rationality in his book, but everything else is just hype. And his target audience is the people who believe in “one weird trick”.
You don’t have to believe it; you can observe it, write it down, read the testimonies, and think about what is going on given your current data. If there were studies and peer review, that would change the predictions, but there can still be one now, if you are willing to try it.
Try reading the Sequences and maybe you will see what I was trying to describe here. That is a book that often moves people to a higher level of clarity of thinking, where the things that seemed awesome previously just become “oh, now I see […] texts.) It’s just that there is so much hype about something that was already done better. So obviously people on this website are not going to be very impressed. But it may be incredibly impressive to someone not familiar with the rationalist community.
Sure, the same way I don’t see you as completely wrong: you have useful things to provide, and so do all the books on rationality, the Sequences, etc. I agree with most of what you’re saying, but it seems you don’t really understand what the click is about.
a) emotions (categorize as a value) → uses rationality as a tool to sustain the value
b) I don’t know what to write here. You’ll have to see for yourself.
This is exactly the part I called “his bastardized version of Tegmark Multiverse + Solomonoff Induction” in my previous comment. He introduces a few complicated concepts, without going into details; it’s all just “this could”, “this would”, “emerges from this”.
To be falsifiable, there needs to be a specific argument made in the first place. Preferably written explicitly, not just hinted at.
For example: “Additionally it might also help us better understand the quantum weirdness such as entanglement and superposition.”—Okay, might. How exactly? Uhm, who cares, right? It’s important that I said “quantum”, “entanglement” and “superposition”. It shows I am smart. Not like I said anything specific about quantum physics, other than that it might be connected with ones and zeroes in some unspecified way. Yeah, maybe.
Statements like “If logic is your core value you automatically try to understand everything logically” are deeply in the motte-and-bailey territory. Yeah, people who value logic are probably more likely to try using it. On the other hand, human brains are quite good at valuing one thing and automatically doing another.
When I try going through individual statements in the text, too many of them contain some kind of weasel word. Statements that something “can be this”, “could be this”, “emerges from this”, or “is one of reasons” are hard to disprove. Statements saying “I have been wondering about this”, “I will define this”, “this makes me look at the world differently” can be true descriptions of the author’s mental state; I have no way to verify it; but that’s irrelevant to the topic itself. -- There are too many statements like this in the text. Probably not a coincidence. I really don’t want to play this verbal game, because this is an exercise in rhetoric, not rationality.
Yes, the one weird trick has been observed to ‘work’ in different cases, although we don’t know more than that right now.
I suspect it already has a name in psychology, and that it does much less than Athene claims. In psychotherapy, people have “breakthrough insights” every week, and it feels like their life has changed completely. But this is just a short-term emotional effect, and the miraculous changes usually don’t happen.
I wonder how much he has read on LW or rationality. It might be the case that he didn’t bastardize Tegmark + Solomonoff but just made it all up himself. But he knows about rationality.org, EA & LW.
So, he knows about LW and stuff, but he doesn’t bother to make a reference, and instead he tells it like he made up everything himself. Nice.
Well, that probably explains my feeling of “some parts are pure manipulation, but some parts feel really LessWrong-ish”. The LessWrong-ish parts are probably just… taken from Less Wrong.
This is exactly the part I called “his bastardized version of Tegmark Multiverse + Solomonoff Induction” in my previous comment. He introduces a few complicated concepts, without going into details; it’s all just “this could”, “this would”, “emerges from this”.
To be falsifiable, there needs to be a specific argument made in the first place. Preferably written explicitly, not just hinted at.
Falsifiable mathematically, I mean: a theory of everything which includes the theory itself. But sure, it allows someone who is going to write a paper anyway to pick up the torch.
For example: “Additionally it might also help us better understand the quantum weirdness such as entanglement and superposition.”—Okay, might. How exactly? Uhm, who cares, right? It’s important that I said “quantum”, “entanglement” and “superposition”. It shows I am smart. Not like I said anything specific about quantum physics, other than that it might be connected with ones and zeroes in some unspecified way. Yeah, maybe.
When I try going through individual statements in the text, too many of them contain some kind of weasel word. Statements that something “can be this”, “could be this”, “emerges from this”, or “is one of reasons” are hard to disprove. Statements saying “I have been wondering about this”, “I will define this”, “this makes me look at the world differently” can be true descriptions of the author’s mental state; I have no way to verify it; but that’s irrelevant to the topic itself. -- There are too many statements like this in the text. Probably not a coincidence. I really don’t want to play this verbal game, because this is an exercise in rhetoric, not rationality.
They are just ramblings, and no real inquiry has been made to investigate these ‘bathtub theories’, as Elon Musk puts it. But it is an easy way to explain certain things. Too much weight shouldn’t be put on it, but papers would be interesting, so someone who is going to publish—do this.
Statements like “If logic is your core value you automatically try to understand everything logically” are deeply in the motte-and-bailey territory.
Yeah, people who value logic are probably more likely to try using it.
On the other hand, human brains are quite good at valuing one thing and automatically doing another.
That’s not correct, as doing another thing still arises out of what you value emotionally, according to this theory.
I suspect it already has a name in psychology, and that it does much less than Athene claims. In psychotherapy, people have “breakthrough insights” every week, and it feels like their life has changed completely. But this is just a short-term emotional effect, and the miraculous changes usually don’t happen.
It does: religious experience, enlightenment (wikipedia.org/wiki/Enlightenment_(spiritual)), mystical experience, nondualism.
We don’t know if it’s permanent; so far the data only covers around 1 month to 1 month and 1 week, with the exception of Athene himself (the creator of the experiment). But enlightenment, religious experiences, etc. last for a while.
So, he knows about LW and stuff, but he doesn’t bother to make a reference, and instead he tells it like he made up everything himself. Nice.
Well, that probably explains my feeling of “some parts are pure manipulation, but some parts feel really LessWrong-ish”. The LessWrong-ish parts are probably just… taken from Less Wrong.
Well, he probably hasn’t read anything. He did apply for an LW meet-up but was rejected, as he had to stay for the full number of days. Before this clicking-religion thing, they did reach out regarding their group on here, I think, and at the EA forums and elsewhere. Staying there is free. Regarding rationality.org and so forth, I think he mentioned they’re all just intellectually masturbating.
By the way, what do you think about the website: https://www.asimpleclick.org/# ?
This is Athene:
I tried to understand the world by seeing everything as information instead, since it then becomes a lot easier to find a logical answer to how we came into existence and why the logical patterns around us emerge. There are two scenarios that sound more logical for the average person: one is that there has always been nothing, and the other that there has always been infinite chaos. Keep in mind, this is simplified, because “always” makes us think about time, and time only came into existence with the big bang. The issue people have, though, is how something could emerge from nothing without the intervention of a creator. On the other hand, if we assume there was always infinite chaos, and we can find a falsifiable explanation of how our consistent reality could emerge from it, we would have a much easier time setting our inner conflict at ease.
To get back to how I approach everything as information, let’s represent this infinite chaos as 1s and 0s. How could our reality emerge from this, and how would logic be able to bring about all this beauty and consistency? There are already mathematical models of how chaos brings about order, but in this specific case we can also derive certain mathematical conclusions from infinity. For example, 0 would appear around half the time, and 1 as well. Likewise, if you take the combination 01, it would appear 25% of the time, while the combinations 10, 11 and 00 would do so too. What you can already see is that the longer the binary number is, the less frequently it appears within infinity.
To understand the next step you need some basic understanding of the concept of a compression algorithm. To illustrate: if you have a fully black background in Paint and save it as a .bmp, it will be a much larger file than when you save it as a .jpg. The reason for this is that the .jpg uses a compression algorithm that allows you to show the same black picture on the screen but requires a much smaller binary number. If this black picture were our consciousness instead, and it emerged from infinite chaos, it would naturally be the one that is most compressed, since that is what is most likely to happen. This is one explanation for how everything around us seems to follow specific patterns, as these are merely the compression algorithms that are brought about due to the probabilities within infinite chaos.
If this line of thinking were true, it would also have other consequences. The number 1 and a billion 0s, for example, would be smaller than a shorter binary number that contained more information. This approach would also bring about a different kind of math that isn’t based on Euclidean or non-Euclidean geometry. Additionally it might also help us better understand the quantum weirdness such as entanglement and superposition.
This is a Less Wrong article on a similar topic: An Intuitive Explanation of Solomonoff Induction
I hope you understand why I am not impressed with Athene’s version.
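The frequency claims in that paragraph are easy to check numerically: in a uniform random bit string, a fixed pattern of length n occurs at any given position with probability 2^-n, so longer patterns are exponentially rarer. A minimal sketch (my illustration, not code from any of the linked materials; the pattern list is arbitrary):

```python
import random

random.seed(0)
# A long "slice of chaos": uniformly random bits.
bits = "".join(random.choice("01") for _ in range(200_000))

# A fixed pattern of length n should occur at a given position
# with probability 2**-n.
for pattern in ["0", "1", "01", "10", "0110"]:
    n = len(pattern)
    positions = len(bits) - n + 1
    hits = sum(bits[i:i + n] == pattern for i in range(positions))
    print(f"{pattern!r}: observed {hits / positions:.4f}, expected {2 ** -n:.4f}")
```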
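The .bmp versus .jpg example generalizes to any general-purpose compressor: highly regular data shrinks to a tiny fraction of its raw size, while random data barely shrinks at all. A minimal sketch with zlib standing in for the image formats (again my illustration, not from the book or the wiki):

```python
import os
import zlib

SIZE = 1_000_000

# A "fully black picture": a million identical bytes (highly regular).
black = bytes(SIZE)
# Incompressible noise of the same raw size.
noise = os.urandom(SIZE)

for name, data in [("black", black), ("noise", noise)]:
    compressed = zlib.compress(data, 9)
    print(f"{name}: {len(data):,} bytes -> {len(compressed):,} bytes")
```

The regular input shrinks by roughly three orders of magnitude; the noise does not shrink (it grows slightly, by the compressor’s header overhead), which is the distinction the quoted argument leans on.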
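For reference, the formal version of “shorter description, higher probability” that the linked article walks through is Solomonoff’s universal prior. A compressed restatement (standard notation, my paraphrase rather than a quote from either source): the prior probability of observing data $x$ sums over every program $p$ that makes a universal machine $U$ print output beginning with $x$,

$$M(x) = \sum_{p \,:\, U(p) = x*} 2^{-|p|},$$

where $|p|$ is the length of $p$ in bits. A string such as “1 followed by a billion 0s” gets high prior probability because a very short program prints it; that is the precise content of the compression intuition above, worked out in the 1960s.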
Well, he probably hasn’t read anything. He did apply for an LW meet-up but was rejected, as he had to stay for the full number of days. Before this clicking-religion thing, they did reach out regarding their group on here, I think, and at the EA forums and elsewhere. Staying there is free. Regarding rationality.org and so forth, I think he mentioned they’re all just intellectually masturbating.
Having to stay somewhere for a few days doesn’t sound to me like a regular LW meetup. I guess it was either a CFAR workshop, or an event like this.
(Uhm, this is probably not the case, but asking anyway to make sure—“they did reach out regarding their group on here I think” does not refer to this, right? Because that’s the only recent attempt to reach out here that I remember.)
Regarding rationality.org and so forth, I think he mentioned they’re all just intellectually masturbating.
Heh, sometimes I have a similar impression. On the other hand, some things take time. A few years ago, superintelligent AI was a completely fringe topic… now it’s popular in media, and random people share articles about it on Facebook. So either the founders of LW caused this trend, or at least were smart enough to predict it. That requires some work. MIRI and CFAR have funding, which is also not simple to achieve. They sometimes publish scientific articles. If I remember correctly, they were also involved in creating the effective altruist movement. (Luke Muehlhauser, the former executive director of MIRI, now works for GiveWell.) There is probably more, but I think this already qualifies as more than “intellectual masturbation”.
Athene has an impressive personal track record. I admit that part. But the whole thing about “clicking” is a separate claim. (Steve Jobs was an impressive person; that doesn’t prove his beliefs in reincarnation are correct.)
Any specific part of it? I have already spent hours researching this topic. I have even read the Reddit forum where people describe how they “clicked” (most posts seem the same, and so do all replies, it’s a bit creepy). Am I supposed to listen to the guided meditation, or watch yet another advertising video, or...?
I applaud your effort and hope your hours spent mean others’ saved.
Thanks! I feel weird about this whole thing, similarly to how I feel weird about Gleb.
I don’t want to make a full conclusion for others (that feels like too much responsibility), but at least I can point them directly towards the important parts, so they don’t have to google and watch promotional videos.
Here is the good part—a PDF booklet with some useful advice on instrumental rationality. It would make a good LW article, if some parts were removed.
magnet:?xt=urn:btih:e3ade7cdccc4aba33789686b9b9d765d7f14ae7b&dn=Real+Answers&tr=udp%3A%2F%2Ftracker.leechers-paradise.org%3A6969&tr=udp%3A%2F%2Fzer0day.ch%3A1337&tr=udp%3A%2F%2Fopen.demonii.com%3A1337&tr=udp%3A%2F%2Ftracker.coppersurfer.tk%3A6969&tr=udp%3A%2F%2Fexodus.desync.com%3A6969
Here are the bad parts—the wiki (just read the main page), and the reddit forum (click on a few random articles; they all feel the same, and the responses all feel the same).
The rest is just marketing, hyping the contents of the wiki and of the book over and over again.
This is my conclusion after ~10 hours of looking at various materials; maybe there is something more that I missed, and I also didn’t listen to the podcasts. This all seems to be a one-man show: a guy whose main strength is making popular YouTube videos, and who was a successful poker player in the past.
This is a Less Wrong article on a similar topic: An Intuitive Explanation of Solomonoff Induction
I hope you understand why I am not impressed with Athene’s version.
I understand, that article looks interesting.
Having to stay somewhere for a few days doesn’t sound to me like a regular LW meetup. I guess it was either a CFAR workshop, or an event like this.
I think it was an event like you linked.
(Uhm, this is probably not the case, but asking anyway to make sure—“they did reach out regarding their group on here I think” does not refer to this, right? Because that’s the only recent attempt to reach out here that I remember.)
The message at the bottom does look a lot like Athene’s writing, but the first one I don’t really understand what it is about. They have mentioned they do sports betting. Athene doesn’t care about lying as long as it’s for the greater good, so I speculate they have some ways to make money at betting, for example; thus they thought about trying to reach out to some high-IQ people to actually off-load work, or have ’em join. But this is only speculation. I think they reached out in general around December, explaining their “charity” organization and how they offer free food, housing, etc. But maybe not.
Heh, sometimes I have a similar impression. On the other hand, some things take time. A few years ago, superintelligent AI was a completely fringe topic… now it’s popular in media, and random people share articles about it on Facebook. So either the founders of LW caused this trend, or at least were smart enough to predict it. That requires some work. MIRI and CFAR have funding, which is also not simple to achieve. They sometimes publish scientific articles. If I remember correctly, they were also involved in creating the effective altruist movement. (Luke Muehlhauser, the former executive director of MIRI, now works for GiveWell.) There is probably more, but I think this already qualifies as more than “intellectual masturbation”.
Athene has an impressive personal track record. I admit that part. But the whole thing about “clicking” is a separate claim. (Steve Jobs was an impressive person; that doesn’t prove his beliefs in reincarnation are correct.)
That makes sense. By the way, gamingforgood has a 10X multiplier on donations to their newborn survival programs; how likely is it that this is more efficient than GiveWell’s top charities?
Any specific part of it? I have already spent hours researching this topic. I have even read the Reddit forum where people describe how they “clicked” (most posts seem the same, and so do all replies, it’s a bit creepy). Am I supposed to listen to the guided meditation, or watch yet another advertising video, or...?
I was thinking more of the design, but sure, the guided meditations might be good if you are doing a self-experiment with the 4 steps and so forth; they’re nothing special, what you’d expect I suppose.
The message at the bottom does look a lot like Athene’s writing
Yeah, the “i was a great poker player” part… I missed that previously.
OK, now I am quite confused. Here are things said by “hans_jonsson” (Athene?) in that thread:
i didnt wish to post my message very publicly cause it embarrassing and awkwardly like i was bragging when i wished to be honest, and hopefully show that im competent.
So, to put things together… a guy who according to Wikipedia is a popular YouTube celebrity, raised millions for charity, and recently started the “Logic Nation”… was contacting individual LW members through private messages, instead of posting in an open thread, because posting in an open thread would be awkward bragging… and the best way to show that he was competent, was to pseudonymously post something that closely resembles popular scams.
My brain has problems processing so much logic.
(Could LessWrong really be so frightening for outsiders? So much that even starting your own cult feels less awkward and more humble than posting in LW open thread...)
I admit I am out of my depth here. There seems to be evidence flying in both directions, and I am confused. (Still leaning towards “scam”, though.)
Other than that, the poor grammar, spelling, and lack of paragraphs are typical of Athene; the further you scroll in his Reddit profile, the worse his grammar becomes: https://www.reddit.com/user/Chiren :)
He also wrote this in the thread: >and i may very well have some mental issues in regards to quite a few things
I don’t take things too seriously. When you type or talk, you can push buttons and see how the community reacts, to forward your agenda. The best option after making a mistake might not be to say “I am X and I should’ve posted in the open thread”; instead, for example, say it’s embarrassing and awkward, and eventually that you “may very well have mental issues”.
My brain has problems processing so much logic.
Well, yes, maybe.
(Could LessWrong really be so frightening for outsiders? So much that even starting your own cult feels less awkward and more humble than posting in LW open thread...)
I admit I am out of my depth here. There seems to be evidence flying in both directions, and I am confused. (Still leaning towards “scam”, though.)
Observing objectively what was done, it seems it was as was said: to keep it private? I don’t really see how it is leaning towards a scam; you might simply have been wrong all along. The first message I don’t fully understand.
Oh, this is pure gold! :D
Two months ago, on Athene’s Reddit forum:
Athene: “This stuff sounds so much like a scam if i see anything like this again you are permabanned. If you want to help put concrete info and don’t make it sound so dodgy or have to contact you or whatever.”
Some rando: “In what way does it sound like a scam? I’m not selling anything and I’m not asking them to sign up anywhere. Just to simply PM me so we can chat about it. I didn’t disclose details because I didn’t want to start people on a wild goose chase trying to do it when they aren’t capable. I thought if I could help a few people who have the right mindset become more financially stable, they’d be able to make better use of what you teach.”
(then the post was removed, presumably by Athene)
For context, this is Athene on LW, nine months ago:
Other rando: “Act publicly, especially when it includes asking members to participate in financial transactions. It is your insisting to work behind the courtains that seems fishy to me.”
Athene: “why should i ask publicly when asking personal questions about personal decisions? im insisting to work behind the curtains? when did i insist, and why should i ask publicly? … why would i change a message that i wrote as perfectly as i could? … my priorities are to as fast as possible get someone intelligent with the right priorities educated as well as donate current money the most effecient way possible.”
Karma is a bitch.
I remember that, pretty funny. Maybe he learned from LW, understood it sub- and consciously, and responded the same way 7 months later. :) Now, whether it is him who posted here is simply speculation, but I think it’s a 60-70% probability.
You can’t just classify things as stupid because you think they are stupid, and what you think is stupid is true because you think you’re rational.
The idea that ‘not being vegan’ or ‘smoking’ are stupid is silly.
I agree that “stupid” is a bad label for the clustering which includes those kinds of behaviors, but I don’t agree if you’re saying that smoking and meat-eating are usually instrumentally rational choices for common human desires.
Stupid implies either incorrect logic or lack of consideration of an action. For many humans, these behaviors are neither one. They’re some combination of weakness (knowing the better choice, but failing to override the monkey brain) and value differences (preferring current/near experienced pleasures over later/distant pain).
note: I eat lots of meat. I also play lots of video games and read lots of fiction, none of which is purely rationally motivated. I don’t smoke or vape, but that’s also not rationally motivated—I just find it disgusting.
It doesn’t really matter what either of us think. If someone eats too much meat, and wishes they could stop, but can’t, then for a certain function we can claim it’s irrational in their achievement of that goal. If I eat a fair amount of meat because I work out, because it helps me get my weight-lifting goals, it’s rational for my objective.
What’s your objective? Well, my main point is really just that we can’t abstract these sorts of things, they are empirical. “Is X irrational (implied: for all people under all conditions)?” is about as meaningful as “Does this chair really exist?”
It doesn’t really matter what either of us think. If someone eats too much meat, and wishes they could stop, but can’t, then for a certain function we can claim it’s irrational in their achievement of that goal.
Exactly; it’s better to look at the evidence, at objective reality, and see what’s more likely to be efficient. With the latter statement, you presume that your achievement of the goal is accurate.
If I eat a fair amount of meat because I work out, because it helps me get my weight-lifting goals, it’s rational for my objective.
I hope that this example is simply that: an example. Eating meat is not necessary for a positive nitrogen balance, muscle hypertrophy, OR strength. It might have a slight advantage, but at that point you’d have to assume you’re already doing everything else efficiently and that your genetics are on par. Very unlikely.
What’s your objective? Well, my main point is really just that we can’t abstract these sorts of things, they are empirical. “Is X irrational (implied: for all people under all conditions)?” is about as meaningful as “Does this chair really exist?”
You realize you are biased and not in line with the objective reality of things, where your desires can be replaced and come from a certain place for a reason.
If you define stupidity as using a set of rules that ensures a problem takes longer than chance to solve, or is never solved, and that is nevertheless pursued with alacrity and enthusiasm, then yes, it’s stupid.
I don’t know what the problem to be solved boils down to in the context of this definition; maybe evolving as a superorganism, although that is not an end in itself. General problem-solving seems more applicable.