Jed talks about how you need to “empty somebody out” before they can be “filled back in”.
Run, do not walk, away from this person.
It reminds me of “jailbreaking”, as advocated and practiced by certain prominent members of the rationalist community.
Or that part in “The Matrix” when Agent Smith copies himself into all the other occupants of the Matrix. His purpose has become to tile the future lightcone with copies of himself, which I believe was also a central tenet of Zizism.
What do we want to tile the future lightcone with?
I’ve listened to this book’s audio version several times in the past few years. I finished it for the fourth time yesterday.
I can’t help but find the book’s claims convincing, and I’d like to hear more of your thoughts on them.
To support your point: Yes, the book might be dangerous.
Last year, the book’s ideas threw me into a senseless cycle of nihilistic ruminations. I had chosen to listen to it again (3rd time) at a bad time, two months into a later-diagnosed adjustment disorder. It got better after about two more months, as external circumstances improved. I never considered suicide, but I did warily consider trying “spiritual autolysis”.
To disagree with your point: I believe the book also holds the potential to improve the lives of those who read it, and even to make them value life more than before.
Let me quote a passage from the book, part of which is quoted in this post’s section VII:
“I think the bubble [the illusion of reality] is a magnificent amusement park, and leaving it is a damn silly thing to do unless you absolutely must. I would advise anyone who didn’t absolutely have to leave to just head back in and enjoy it while it lasts.”
At another point, the author states something along these lines:
“If anything, I’m the one missing out. I can’t regain the belief that anything matters.”
I see a huge difference between what I grasp of “jailbreaking” and this book’s claims. The author doesn’t call anything corrupt. On the contrary, he states that “It’s all good”.
Despite my rough phase last year, I have a lot of admiration for this book.
However, I feel like I might be naive in some ways, too easily convinced.
If you will, please tell me your thoughts about all of that, and the red flags you’re seeing.
Also, this is my first comment on this platform, so please tell me about any conventions I disregarded :)
McKenna’s shtick comes preloaded with fully general and condescending answers to all objections: it’s your “semantic stopsigns” getting in the way, your fear of realising that nothing is true, all is a lie, and if you would just blow your brains out like he has you’d see it. He’ll give you the gun to do it with and when you decline he’ll smug at you saying fine, stick with your life of comfortable ignorance.
I’m willing to believe he’s honestly trying to describe his experiences. But by his own descriptions, whatever it is that he has, it is something I have not the slightest interest in, for all that he calls it “enlightenment”. Of course he has a self-justifying interpretation to put on that, but I do not care about what he would think of me. Neither will I play the game of But Suppose, which is just another Fully General Response. “But suppose he’s right! Then he’d be right! So he could be right!” There are decisions to be made here. I have made mine, supposing is at an end, and I leave him by the door wherein I went.
He says:
I play video games, read books, watch movies. I’d say I probably blow several hours a day that way, but I don’t see it as a waste because I don’t have anything better to spend my time on. I couldn’t put it to better use because I’m not trying to become something or accomplish anything. I have no dissatisfaction to drive me, no ambition to draw me. I’ve done what I came to do. I’m just killing time ’til time kills me.
Is this who you want to be? That’s what he’s offering. No thanks. I am left speculating on why anyone would take him up on the offer.
A couple of months ago I was at the Early Music Festival in Utrecht, ten full days of great music at least 400 years old played by some of the top people in the world. Five minutes of that was worth more to me than all of Jed McKenna’s burnt-out ramblings.
His “shtick” (why the dramatic approach?) is that if we try to disprove everything, without giving up, every false belief will eventually be dealt with, and nothing true will be affected. Is there some fault with that or not?
With regard to enlightenment, he uses a specific definition, and it’s not something that can be decided by arguing. You either satisfy the definition or you don’t. Nobody has asked you to care about it, so you needn’t justify your decisions if you don’t.
If you think he is offering something like “how to play video games all day,” you have misunderstood him quite significantly, and I’d suggest not misrepresenting him, at least not here on LessWrong.
His “shtick” (why the dramatic approach?) is that if we try to disprove everything, without giving up, every false belief will eventually be dealt with, and nothing true will be affected. Is there some fault with that or not?
He says that “nothing true will perish” but also that there is no truth. Either he or the OP dismisses everything that people have discovered about the world as mere “semantic stopsigns”, which looks pretty much like a semantic stopsign itself. There is nothing here and no amount of hermeneutics will magic it into something.
If you think he is offering something like “how to play video games all day,” you have misunderstood him quite significantly, and I’d suggest not misrepresenting him, at least not here on LessWrong.
I quoted his actual words, to the effect that he does nothing and everything remains undone. I am not going to search out any other reading of these words than what they say on their face. If that is a misrepresentation, he is misrepresenting himself.
Could you point to where he claims there is no truth? What I’ve seen him say is along the lines of “no belief is true” and “nobody will write down the truth.” That should not be surprising to anyone who groks falsification. (For those who do not, the LessWrong article on why 0 and 1 are not probabilities is a place to start.)
He is describing what he’s up to. You say that’s what he’s offering. So you already are searching out other readings. Have you heard of taking things out of context? The reason that is frowned upon is that dogmatically reading a piece of text on its face is a reliable way to draw bad conclusions.
Could you point to where he claims there is no truth?
The OP says:
Jed says that after going through this process long enough, you will wind up with the answer that there is no truth.
and
In some sense the rationality community is clinging on to the semantic stopsign of Bayes’ rule and empiricism, while Jed lights even those on fire and declares truth as non-existent.
So if you disagree with that reading, your argument is with the OP.
If we’re going to duel with Eliezer posts, see also The Simple Truth.
Here are a few of my beliefs (although not my own words):
“I think I exist. I am conscious of my own identity. I was born and I shall die. I have arms and legs. I occupy a particular point in space. No other solid object can occupy the same point simultaneously.”
I do not expect to update any of these, and certainly not from sitting with my eyes closed “questioning” them.
But perhaps whatever Jed means can only be learned by going on a month-long retreat with him?
I looked briefly into Ziz. My conclusion is that she had some interesting ideas I hadn’t heard before, and some completely ridiculous ideas. I couldn’t find her definition of “good” or “bad” or the idea of tiling the future lightcone with copies of herself.
Thanks for reminding me about that scene from the Matrix. Gave it a look on YouTube. Awesome movie.
I’m wondering, how do you look at the question of what we want to tile the future lightcone with?
Hey, thanks for the link, Richard. That was an interesting read. There definitely seem to be some similarities.
I was actually thinking about what we want to tile the future lightcone with the other day. This was the progression I saw:
Conventional Morality :: Do what feels right without thinking much about it.
Utilitarianism I :: The atomic unit of “goodness” and “badness” is the valence of human experience. The valence of experience across all humans matters equally. The suffering of a child in Africa matters just as much as the suffering of my neighbor.
Utilitarianism II :: The valence of experience across all sentient things matters equally. i.e. The suffering of cows matters too.
Utilitarianism III :: The valence of experience across all sentient things across time matters equally. The suffering of sentient things in the future matters just as much as the suffering of my neighbor today. i.e. longtermism
Utilitarianism IV :: Understanding valence and consciousness takes a lexicographical preference over any attempt to improve the valence of sentient things as we understand it today because only with this better understanding can we efficiently maximize the valence of sentient things. i.e. veganism is only helpful in its ability to speed up our ability to understand consciousness and release a utilitron shockwave. Everything before the utilitron shockwave can be rounded to zero.
Utilitarianism V :: Upon understanding consciousness, we can expect to have our preferences significantly shaken in a way that we can’t hope to properly anticipate (we can’t expect to have properly understood our preferences with such a weak understanding of “reality”). The lexicographical preference then becomes understanding consciousness and making the “right” decision on what to do next upon understanding it. In this case, it would mean that all of our “moral” actions were only good insofar as they contributed to this revelation and to making the “right” decision upon understanding consciousness.
Utilitarianism VI :: ?
Utilitarianism V has some similarities to tiling the future lightcone with copies of yourself which can then execute based on their updated preferences in the future.
But “yourself” is really just a collection of memes. It will be the memes that are propagating themselves like a virus. There’s no real coherent persistent definition of “yourself”.
What do you want to tile the future lightcone with?