This is exactly the part I called “his bastardized version of Tegmark Multiverse + Solomonoff Induction” in my previous comment. He introduces a few complicated concepts without going into details; it’s all just “this could”, “this would”, “emerges from this”.
To be falsifiable, there needs to be a specific argument made in the first place. Preferably written explicitly, not just hinted at.
For example: “Additionally it might also help us better understand the quantum weirdness such as entanglement and superposition.”—Okay, might. How exactly? Uhm, who cares, right? It’s important that I said “quantum”, “entanglement”, and “superposition”; it shows I am smart. Not that I said anything specific about quantum physics, other than that it might be connected with ones and zeroes in some unspecified way. Yeah, maybe.
Statements like “If logic is your core value you automatically try to understand everything logically” are deep in motte-and-bailey territory. Yes, people who value logic are probably more likely to try using it. On the other hand, human brains are quite good at valuing one thing and automatically doing another.
When I try going through the individual statements in the text, too many of them contain some kind of weasel word. Statements that something “can be this”, “could be this”, “emerges from this”, or “is one of the reasons” are hard to disprove. Statements saying “I have been wondering about this”, “I will define this”, “this makes me look at the world differently” can be true descriptions of the author’s mental state; I have no way to verify that, but it’s irrelevant for the topic itself. There are too many statements like this in the text. Probably not a coincidence. I really don’t want to play this verbal game, because it is an exercise in rhetoric, not rationality.
Yes, the one weird trick has been observed to “work” in different cases, although we don’t know more than that right now.
I suspect it already has a name in psychology, and that it does much less than Athene claims. In psychotherapy, people have “breakthrough insights” every week, and it feels like their life has changed completely. But this is just a short-term emotional effect, and the miraculous changes usually don’t happen.
I wonder how much he has read on LW or about rationality. It might be the case that he didn’t bastardize Tegmark + Solomonoff but just made it all up himself. But he knows about rationality.org, EA, and LW.
So, he knows about LW and stuff, but he doesn’t bother to make a reference, and instead talks as if he made it all up himself. Nice.
Well, that probably explains my feeling of “some parts are pure manipulation, but some parts feel really LessWrong-ish”. The LessWrong-ish parts are probably just… taken from Less Wrong.
> This is exactly the part I called “his bastardized version of Tegmark Multiverse + Solomonoff Induction” in my previous comment. He introduces a few complicated concepts without going into details; it’s all just “this could”, “this would”, “emerges from this”.
> To be falsifiable, there needs to be a specific argument made in the first place. Preferably written explicitly, not just hinted at.
Falsifiable mathematically, I mean: a theory of everything which includes the theory itself. But sure, it allows someone who is going to write a paper anyway to pick up the torch.
> For example: “Additionally it might also help us better understand the quantum weirdness such as entanglement and superposition.”—Okay, might. How exactly? Uhm, who cares, right? It’s important that I said “quantum”, “entanglement”, and “superposition”; it shows I am smart. Not that I said anything specific about quantum physics, other than that it might be connected with ones and zeroes in some unspecified way. Yeah, maybe.
> When I try going through the individual statements in the text, too many of them contain some kind of weasel word. Statements that something “can be this”, “could be this”, “emerges from this”, or “is one of the reasons” are hard to disprove. Statements saying “I have been wondering about this”, “I will define this”, “this makes me look at the world differently” can be true descriptions of the author’s mental state; I have no way to verify that, but it’s irrelevant for the topic itself. There are too many statements like this in the text. Probably not a coincidence. I really don’t want to play this verbal game, because it is an exercise in rhetoric, not rationality.
They are just ramblings; no real inquiry has been made to investigate these “bathtub theories”, as Elon Musk puts it. But it is an easy way to explain certain things. Too much weight shouldn’t be put on it, but papers on it would be interesting, so someone who is going to publish anyway: do this.
> Statements like “If logic is your core value you automatically try to understand everything logically” are deep in motte-and-bailey territory.
> Yes, people who value logic are probably more likely to try using it.
> On the other hand, human brains are quite good at valuing one thing and automatically doing another.
That’s not correct: according to this theory, doing another thing still arises out of what you value emotionally.
> I suspect it already has a name in psychology, and that it does much less than Athene claims. In psychotherapy, people have “breakthrough insights” every week, and it feels like their life has changed completely. But this is just a short-term emotional effect, and the miraculous changes usually don’t happen.
It does: religious experience, enlightenment (wikipedia.org/wiki/Enlightenment_(spiritual)), mystical experience, nondualism.
We don’t know if it’s permanent; so far the data only covers around 1 month to 1 month and 1 week, with the exception of Athene himself (the creator of the experiment). But enlightenment, religious experiences, etc. last for a while.
> So, he knows about LW and stuff, but he doesn’t bother to make a reference, and instead talks as if he made it all up himself. Nice.
> Well, that probably explains my feeling of “some parts are pure manipulation, but some parts feel really LessWrong-ish”. The LessWrong-ish parts are probably just… taken from Less Wrong.
Well, he probably hasn’t read anything. He did apply for an LW meet-up but was rejected, as he had to stay for the full amount of days. Before this clicking religion thing, they did reach out regarding their group on here I think, at EA forums and elsewhere. Staying there is free. Regarding rationality.org and so forth, I think he mentioned they’re all just intellectually masturbating.
By the way, what do you think about the website: https://www.asimpleclick.org/# ?
This is Athene:
> I tried to understand the world by seeing everything as information instead since it then becomes a lot easier to find a logical answer to how we came to existence and why the logical patterns around us emerge. There are two scenario’s that sound more logical for the average person, one is that there has always been nothing and the other that there has always been infinite chaos. Keep in mind, this is simplified because always makes us think about time and time came only to existence with the big bang. The issue people have though is how something could emerge from nothing without the intervention of a creator. On the other hand, if we assume there was always infinite chaos and we can find a falsifiable explanation to how our consistent reality could emerge from it we would have a much easier time to set our inner conflict at ease.
> To get back to how I approach everything as information, let’s represent this infinite chaos as 1′s and 0′s. How could our reality emerge from this and how would logic be able to bring about all this beauty and consistency. There is already mathematical models of how chaos brings about order but in this specific case we can also derive certain mathematical conclusions from infinity. For example 0 would appear around half the time and 1 as well. Same, if you take the combination 01 it would appear 25% of the time while the combination 10, 11 and 00 would do so to. What you already can see is that the longer the binary number is the less frequent it appears within infinity.
> To understand the next step you need some basic understanding about the concept of compression algorithm. To illustrate, if you have a fully black background in paint and save it as a .bmp it will be a much larger file then when you save it as a .jpg. The reason for this is because the .jpg uses a compression algorithm that allows you to show the same black picture on the screen but requires a lot smaller binary number. If this black picture would be our consciousness instead and it would emerge from infinite chaos, it would naturally be the one that is most compressed since it is what is most likely to happen. This is one explanation for how everything around us seems to follow specific patterns as these are merely the compression algorithms that are brought about due to the probabilities within infinite chaos.
> If this line of thinking would be true it would also have other consequences. The number 1 and a billion 0′s for example would be smaller then a shorter binary number that would contain more information. This approach would also bring about a different kind of math that isn’t based on Euclidean or non-Euclidean geometry. Additionally it might also help us better understand the quantum weirdness such as entanglement and superposition.
This is a Less Wrong article on a similar topic: An Intuitive Explanation of Solomonoff Induction
I hope you understand why I am not impressed with Athene’s version.
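For what it’s worth, the two checkable claims buried in the quoted text are ordinary information theory, and easy to sanity-check: a specific n-bit pattern appears in a uniformly random bit stream with frequency about 2^-n, and a long but regular string (a stand-in for “the number 1 and a billion 0′s”) compresses to far fewer bytes than its raw length, while a shorter random string barely compresses at all. A minimal sketch in Python (my illustration of the standard facts, not of Athene’s model):

    import random
    import zlib

    random.seed(0)
    bits = "".join(random.choice("01") for _ in range(200_000))

    # Claim 1: a specific n-bit pattern occurs with frequency ~ 2^-n.
    for pattern in ["0", "01", "0110"]:
        n = len(pattern)
        positions = len(bits) - n + 1
        count = sum(bits[i:i + n] == pattern for i in range(positions))
        print(f"{pattern}: observed {count / positions:.4f}, expected {2 ** -n:.4f}")

    # Claim 2: regular data compresses far below its raw size; random data does not.
    regular = b"\x00" * 1_000_000                    # a million zero bytes
    noise = bytes(random.getrandbits(8) for _ in range(1_000))
    print(len(zlib.compress(regular)))               # roughly 1 kB, vs. 1,000,000 raw
    print(len(zlib.compress(noise)))                 # slightly larger than 1,000

None of this supports the metaphysical leap from “compressible patterns are more probable” to “therefore our reality emerges from infinite chaos”; it only shows which parts of the quote are uncontroversial.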
> Well, he probably hasn’t read anything. He did apply for an LW meet-up but was rejected, as he had to stay for the full amount of days. Before this clicking religion thing, they did reach out regarding their group on here I think, at EA forums and elsewhere. Staying there is free. Regarding rationality.org and so forth, I think he mentioned they’re all just intellectually masturbating.
Having to stay somewhere for a few days doesn’t sound to me like a regular LW meetup. I guess it was either a CFAR workshop, or an event like this.
(Uhm, this is probably not the case, but asking anyway to make sure—“they did reach out regarding their group on here I think” does not refer to this, right? Because that’s the only recent attempt to reach out here that I remember.)
> Regarding rationality.org and so forth, I think he mentioned they’re all just intellectually masturbating.
Heh, sometimes I have a similar impression. On the other hand, some things take time. A few years ago, superintelligent AI was a completely fringe topic… now it’s popular in media, and random people share articles about it on Facebook. So either the founders of LW caused this trend, or at least were smart enough to predict it. That requires some work. MIRI and CFAR have funding, which is also not simple to achieve. They sometimes publish scientific articles. If I remember correctly, they were also involved in creating the effective altruist movement. (Luke Muehlhauser, the former executive director of MIRI, now works for GiveWell.) There is probably more, but I think this already qualifies as more than “intellectual masturbation”.
Athene has an impressive personal track record. I admit that part. But the whole thing about “clicking” is a separate claim. (Steve Jobs was an impressive person; that doesn’t prove his beliefs in reincarnation are correct.)
Any specific part of it? I have already spent hours researching this topic. I have even read the Reddit forum where people describe how they “clicked” (most posts seem the same, and so do all replies, it’s a bit creepy). Am I supposed to listen to the guided meditation, or watch yet another advertising video, or...?
> I applaud your effort and hope your hours spent means others’ saved.
Thanks! I feel weird about this whole thing, similar to how I feel weird about Gleb.
I don’t want to make a full conclusion for others (that feels like too much responsibility), but at least I can point them directly towards the important parts, so they don’t have to google and watch promotional videos.
Here is the good part: a PDF booklet with some useful advice on instrumental rationality. It would make a good LW article, if some parts were removed.
magnet:?xt=urn:btih:e3ade7cdccc4aba33789686b9b9d765d7f14ae7b&dn=Real+Answers&tr=udp%3A%2F%2Ftracker.leechers-paradise.org%3A6969&tr=udp%3A%2F%2Fzer0day.ch%3A1337&tr=udp%3A%2F%2Fopen.demonii.com%3A1337&tr=udp%3A%2F%2Ftracker.coppersurfer.tk%3A6969&tr=udp%3A%2F%2Fexodus.desync.com%3A6969
Here are the bad parts: the wiki (just read the main page), and the reddit forum (click on a few random articles; they all feel the same, and the responses all feel the same).
The rest is just marketing, hyping the contents of the wiki and of the book over and over again.
This is my conclusion after ~10 hours of looking at various materials; maybe there is something more that I missed, and I didn’t listen to the podcasts. This all seems to be a one-man show: a guy whose main strengths are making popular YouTube videos and having been a successful poker player in the past.
> This is a Less Wrong article on a similar topic: An Intuitive Explanation of Solomonoff Induction
> I hope you understand why I am not impressed with Athene’s version.
I understand, that article looks interesting.
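(For anyone skipping the linked article: the formal idea it explains is Solomonoff’s universal prior, which weights every hypothesis, i.e. every program p for a universal prefix machine U, by 2^{-|p|}:

    M(x) = \sum_{p \,:\, U(p) = x*} 2^{-|p|}

where the sum ranges over programs whose output begins with x, so shorter programs dominate the prediction. That is the precise, if uncomputable, version of the “shorter binary strings are more frequent, more compressed things are more likely” intuition that the quoted text only gestures at.)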
> Having to stay somewhere for a few days doesn’t sound to me like a regular LW meetup. I guess it was either a CFAR workshop, or an event like this.
I think it was an event like you linked.
> (Uhm, this is probably not the case, but asking anyway to make sure—“they did reach out regarding their group on here I think” does not refer to this, right? Because that’s the only recent attempt to reach out here that I remember.)
The message at the bottom does look a lot like Athene’s writing, but the first one, I don’t really understand what it is about. They have mentioned they do sports betting. Athene doesn’t care about lying, as long as it’s for the greater good, so I speculate they have some way to make money at betting, for example, and thus thought about reaching out to some high-IQ people to off-load work or have ’em join. But this is only speculation. I think they reached out in general around December, explaining their “charity” organization and how they offer free food, housing, etc. But maybe not.
> Heh, sometimes I have a similar impression. On the other hand, some things take time. A few years ago, superintelligent AI was a completely fringe topic… now it’s popular in media, and random people share articles about it on Facebook. So either the founders of LW caused this trend, or at least were smart enough to predict it. That requires some work. MIRI and CFAR have funding, which is also not simple to achieve. They sometimes publish scientific articles. If I remember correctly, they were also involved in creating the effective altruist movement. (Luke Muehlhauser, the former executive director of MIRI, now works for GiveWell.) There is probably more, but I think this already qualifies as more than “intellectual masturbation”.
> Athene has an impressive personal track record. I admit that part. But the whole thing about “clicking” is a separate claim. (Steve Jobs was an impressive person; that doesn’t prove his beliefs in reincarnation are correct.)
That makes sense. By the way, gamingforgood has a 10X multiplier on donations to their newborn survival programs; how likely is it that this is more efficient than GiveWell’s top charities?
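(A quick way to frame that question, with every number below a made-up placeholder rather than an actual figure for gamingforgood or GiveWell: a 10x match only helps to the extent the matched money is truly counterfactual, and it multiplies the program’s baseline cost-effectiveness, so two numbers decide it.)

    # All figures are hypothetical placeholders; only the arithmetic is the point.
    givewell_cost_per_life = 3500.0   # assumed cost per life saved at a top charity (USD)
    program_cost_per_life = 40000.0   # assumed baseline cost per life for the newborn program
    multiplier = 10.0                 # the advertised donation multiplier
    counterfactual = 0.5              # fraction of matched funds that wouldn't exist otherwise

    effective = program_cost_per_life / (1 + (multiplier - 1) * counterfactual)
    print(f"${effective:,.0f} per life with the match, vs ${givewell_cost_per_life:,.0f}")
    # With these placeholders: 40000 / 5.5 is about $7,273, still worse than the top charity.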
> Any specific part of it? I have already spent hours researching this topic. I have even read the Reddit forum where people describe how they “clicked” (most posts seem the same, and so do all replies, it’s a bit creepy). Am I supposed to listen to the guided meditation, or watch yet another advertising video, or...?
I was thinking more of the design, but sure, the guided meditations might be good if you are doing a self-experiment with the 4 steps and so forth; they’re nothing special, what you’d expect I suppose.
> The message at the bottom does look a lot like Athene’s writing
Yeah, the “i was a great poker player” part… I missed that previously.
OK, now I am quite confused. Here are things said by “hans_jonsson” (Athene?) in that thread:
> i didnt wish to post my message very publicly cause it embarrassing and awkwardly like i was bragging when i wished to be honest, and hopefully show that im competent.
So, to put things together… a guy who according to Wikipedia is a popular YouTube celebrity, raised millions for charity, and recently started the “Logic Nation”… was contacting individual LW members through private messages, instead of posting in an open thread, because posting in an open thread would be awkward bragging… and the best way to show that he was competent was to pseudonymously post something that closely resembles popular scams.
My brain has a problem processing so much logic.
(Could LessWrong really be so frightening for outsiders? So much that even starting your own cult feels less awkward and more humble than posting in an LW open thread...)
I admit I am out of my depth here. There seems to be evidence flying in both directions, and I am confused. (Still leaning towards “scam”, though.)
Other than that, the poor grammar and spelling are typical of Athene, as is the lack of paragraphs; the further you scroll in his Reddit profile, the worse his grammar becomes: https://www.reddit.com/user/Chiren :)
He also wrote this in the thread:
> and i may very well have some mental issues in regards to quite a few things
I don’t take things too seriously. When you type or talk, you can push buttons and see how the community reacts, to forward your agenda. The best option after making a mistake might not be to say “I am X and I should’ve posted in the open thread”; instead, for example, say it’s embarrassing and awkward, and eventually that you “may very well have mental issues”.
> My brain has a problem processing so much logic.
Well, yes, maybe.
> (Could LessWrong really be so frightening for outsiders? So much that even starting your own cult feels less awkward and more humble than posting in an LW open thread...)
> I admit I am out of my depth here. There seems to be evidence flying in both directions, and I am confused. (Still leaning towards “scam”, though.)
Observing what was done, objectively it seems it was as he said, to keep it private? I don’t really see how it is leaning towards scam; you might simply have been wrong all along. The first message I don’t fully understand.
Oh, this is pure gold! :D
Two months ago, on Athene’s Reddit forum:
Athene: “This stuff sounds so much like a scam if i see anything like this again you are permabanned. If you want to help put concrete info and don’t make it sound so dodgy or have to contact you or whatever.”
Some rando: “In what way does it sound like a scam? I’m not selling anything and I’m not asking them to sign up anywhere. Just to simply PM me so we can chat about it. I didn’t disclose details because I didn’t want to start people on a wild goose chase trying to do it when they aren’t capable. I thought if I could help a few people who have the right mindset become more financially stable, they’d be able to make better use of what you teach.”
(then the post was removed, presumably by Athene)
For context, this is Athene on LW, nine months ago:
Other rando: “Act publicly, especially when it includes asking members to participate in financial transactions. It is your insisting to work behind the courtains that seems fishy to me.”
Athene: “why should i ask publicly when asking personal questions about personal decisions? im insisting to work behind the curtains? when did i insist, and why should i ask publicly? … why would i change a message that i wrote as perfectly as i could? … my priorities are to as fast as possible get someone intelligent with the right priorities educated as well as donate current money the most effecient way possible.”
Karma is a bitch.
I remember that, pretty funny. Maybe he learned from LW, sub- and consciously understood it, and responded the same way 7 months later. :) Now now, if it is him who posted here, that’s simply speculation, but I think it’s a 60-70% probability.