Most of this post, along with the previous posts in the series, is both beautiful and true—the best combination. It’s a pity it had to be mixed in with the meme about computers magically waking up with superpowers. I don’t think that meme is necessary here, any more than it’s necessary to believe the world was created in 4004 BC to appreciate Christmas. Taking it out—discussing it in separate posts if you wish to discuss it—is the major improvement I would suggest.
A few people commented that that section was jarring, and I kept editing it to soften that effect, but if folks on Less Wrong are actively bothered by it then it may simply need to be cut.
The ritual book as a whole is meant to reflect my beliefs (and the beliefs of a particular community) at the time that I wrote it. Partly so I can hand it to friends and family who are interested and say “this basically sums up my worldview right now” (possibly modifying them towards my worldview would be a nice plus, but that’s not the main goal). But also so that in 10-20 years I can look back and see a snapshot of what I cared about in 2011. Grappling with the implications of the Singularity was one of the defining parts of this process. If it were just a matter of “I care a lot about the world”, this whole project wouldn’t have had the urgency it did. It would have just been a matter of persuading others to care, or sharing a meme, rather than forcing myself to rebel against a powerful aspect of my psyche. It’s important that I took it seriously, and it’s important that I ended on a note of “I still do not know the answer to this question, but I think I’d be better able to deal with it if it turned out to be true, and I commit to studying it further.”
So I’m leaving the Solstice pdf as is. But this post, as a standalone Less Wrong article, may be instrumentally useful as a way to help people take the future seriously in a general sense. I’m going to leave it for now, to get a few more data points on people’s reactions, but I’ll probably edit it some more in a day or so.
There will be a new Solstice book next year, and there’s at least a 50% chance that I will dramatically tone down the transhumanist elements to create a more... secular? (ha) version of it that I can promote to a slightly larger humanist population.
One data point, btw: my cousin (a 25-year-old male, educated and nerdy but not particularly affiliated with our meme cluster) said that he found the AI section of this essay jarring, but in the end understood why I took it seriously and updated towards it. I don’t know if that would have happened if we hadn’t already been personally close.
I like it as is, but I think that’s partly because I’m trying to do the same thing you are at the moment—update emotionally on existential risks like uFAI. It’s a problem that needs to be taken seriously, and its placement here gives a concrete villain to what might otherwise turn into a feel-good applause lights speech.
I think that’s a good point, and I’ll be leaving it as is for now.
I will eventually want to rewrite it (or something similar) with more traditional humanist elements. This brings up another question: if you remove the uFAI antagonist, is it actually bad that it’s a feel-good applause lights speech? It’s intended to be a call to action of sorts, without railroading the reader down any single action, other than to figure out their own values and work towards them. I don’t know if it really succeeded at that, with or without the uFAI references.
Edit: wow, totally forgot to add a word that altered the meaning of a sentence dramatically. Fixed.
If the singularity were magical I’d be a lot more hopeful about the future of humankind (even humans aren’t clever enough to implement magic).
I agree with you a bit though.
ETA: Wait, that’s actually technically inaccurate. If I believed the singularity were magical I’d be a lot more hopeful about the future of humankind. But I do hope to believe the truth, whatever is really the case.