I drafted what is apparently too long an introduction to fit into a comment. Rather than try to work out how to rewrite the whole thing to fit into some unknown maximum length, I’ll break it up into parts.
PART 1:
Greetings!
I’ve been lurking since early 2010. I’ll finally take the plunge and actually engage with the community here.
I’m a Ph.D. student in math education. It’s a terribly named field; everyone seems to think at first that this means I’m training to either (a) teach math or (b) prepare future math teachers. It’s actually better thought of as a subfield of psychology that focuses on mathematical cognition as well as on teaching and learning.
I grew up in a transhumanist household. My father signed us all up for cryonics when I was about five years old, I think it was. At the time I was just starting to realize that if death is inevitable for others, then that might mean that death is inevitable for me. I remember going up to my mother and father in the kitchen and asking, “Am I going to die someday?” They looked at me and said, “No, we’re signing all of us up for cryonics. That means if we die, they’ll just bring us back.” I remember being so excited about signing the life insurance policy that I misspelled my name. On the way out of the insurance agent’s office I asked “Does this mean I’m immortal now?” I literally leaped and squealed with excitement when they said yes.
In retrospect, I can recognize that as a tremendously defining point in my psychological development. Most people I’ve known who have signed up for cryonics know the feeling, once everything is finalized, of an immense weight they didn’t even know they were carrying being lifted. Although I know better than to trust my memory, I do recall learning how to “wear” that weight over the course of the few days before I finally asked my family about it. I took my being signed up for cryonics as blanket permission to cast that weight off by just assuming that I would live forever. I do realize now that they were oversimplifying things, but I think it still had a very powerful effect on the basic makeup of my psyche: whereas everyone else seems to have to learn how to recognize and let go of the burden of mortality, it has never been meaningfully real to me.
Unfortunately, I can see now how that gave me permission to be complacent in a lot of important areas through most of my life. If you know that you and your closest loved ones are immortal and that anyone else can become immortal if they so choose, there’s no sense of urgency to do what you can to end death. Instead, the only real danger as far as I could tell was deathism, since that mental poison would permanently and needlessly have the net effect of making people commit suicide. But even then, my concern wasn’t that deathism might halt immortalist efforts; my concern had always been that individuals I care about might needlessly choose to die because of this ubiquitous mental disease. That was always a sad possibility, but on a core emotional level I felt confident that mortality would be obliterated in my lifetime and that the people I most cared about—mainly my family—would be there with me one way or another. So no real problems, right?
When you think this way, it makes some rationalizations way too easy. I missed a lot of opportunities in my teens because I had hardly any courage to do what others thought might be a bad idea, or much self-awareness to decide on a sense of purpose (although I don’t think I knew enough to have any idea how to define a purpose without baseless recursion). So instead of saying something like:
I’m scared, and that’s making me flinch, which isn’t a good way to make decisions. If I went ahead anyway, what would it be like for me looking back at this decision a year later? If I follow the flinch, how would I feel about that a year later?
...I would say something more like this:
Oh, I’ll just go this easier route. If I don’t like where that path leads me, I can always backtrack and correct course in fifty years or so. There’s always more time.
The problem was that until relatively recently, I didn’t apply the metacognitive effort needed to recognize what this must do to my life as a general algorithm: it actively discourages ever reflecting carefully, even on major life decisions. And that’s ignoring the issue that immortality isn’t guaranteed even to transhumanist cryonicists.
That said, I’m immensely grateful I never “caught” the deep terror of mortality. The basic emotional sense of okayness wasn’t the problem at all; the problem was that it made too many stupid things too easy for me to rationalize, and I simply hadn’t been raised with the right kind of metacognition to counter that stupidity. From what I’ve been able to learn and observe, it seems that metacognition is much easier to teach than is a basic emotional sense that the future will be okay.
I can say, however, that if it hadn’t been for Eliezer and Less Wrong, I probably would still be making the same stupid mistake.
(Continued...)

PART 2 (part 1 here):
I had the pleasure of meeting Eliezer in January 2010 at a conference for young cryonicists. At the time I thought he was just a really sharp Enneagram type Five who had a lot of clever arguments for a materialist worldview. Well, I guess I still think that’s true in a way! But at the time I didn’t put much stock in materialism for a few different reasons:
I’ve had a number of experiences that most self-proclaimed skeptics insist are a priori impossible and that therefore I must be either lying or deluded about. I could pinpoint some phenomena I was probably deluded about, and I suspect there are still some, but I’ve had some experiences that usually get classified as “paranormal” that are just way too specific, unusual, and well-verified to be chance, as best I can tell. And I’m under the impression that these effects are pretty well-known and scientifically well-verified, even if I have no clue how to reconcile them with the laws of physics. But I’ve found that arguing with most die-hard materialists about these things is about as fruitful as trying to converse with creationists about biology. They know they’re right, and as far as they’re concerned, one either agrees with them or is just stupid/deluded/foolish/thinking wishfully/worthless/bad. I don’t have much patience for conversation with people who are more interested in proving that I’m wrong than they are in discovering the truth.
It seemed to me that the hard problem of consciousness probably came from assuming materialism. Since it’s such a confusing problem and I was pretty sure that we can be more confident that we experience than that experience is a result of something more basic, it seemed to me sensible to consider that consciousness might be the foundation from which the laws of physics emerge. (Yes, I’m aware that this sounds very much like a common confusion about quantum mechanics, but what I was thinking at the time was more basic than that. I was distinguishing between consciousness and the conscious mind. I’m not so sure anymore that this makes sense, though, since the mind is responsible for structuring experience, and I’m not sure what consciousness without an object (i.e. being conscious without being conscious of something) would mean.) But even if consciousness weren’t the foundation, I was pretty sure at the time that materialism didn’t have even an in-principle plausible approach to the hard problem. At the time, that seemed like a pretty basic issue since, without exception, all of our evidence that materialism is consistent comes from conscious experience (or at least I lack the imagination to know how we could possibly have evidence we use and know that we can trust but that we aren’t aware of!).
But I’ve always tried to cultivate a willingness to be wrong, even if I haven’t always been as good at that as I would like. So when it became clear to me that Eliezer scoffed at the idea that the hard problem of consciousness might be fundamentally different from other scientific challenges, I asked him if he’d be willing to explain his take on the matter. He pointed me toward his zombie sequence, since he understandably didn’t want to take the time to explain something he had already put effort into writing down.
About a month later, I finally read that sequence. That had the interesting effect of undermining a lot of mystical thinking that had taken refuge behind the hard problem of consciousness, so I was really intrigued to read what else Eliezer had put together here. For reasons that would take quite a while for me to explain, I quickly became really hesitant to read more than a small handful of LW articles at a time, and I wasn’t sure I really wanted to become part of the community here. So I just sort of watched from the sidelines for a long time, occasionally seeing something about “Friendly AI” and “existential risk” and other similar snippets.
So I eventually started looking into those things.
I learned that there’s a great deal of hunger for help in these areas.
And I realized that I had been an utter fool.
I have sat complacently on the sidelines entirely too long. It has become clear to me that we need less preparation and more action. So I am now stepping up to take action.
I’m here to do what I can henceforth for the future. I’m starting by plugging into the community here and continuing to refine my rationality to whatever extent I can, with the aim of solving what heady problems I can. (One that’s still close to my heart is finding effective ways of eradicating deathism. I’ve actually encountered some surprisingly promising directions on this.) Once I’ve had a chance to attend at least one of the meetups (as I had to abandon the one after Anna’s talk for personal reasons), I hope to encourage some regular meetups in the San Diego area (at least as long as I don’t drive everyone here nuts!). Beyond that, I’ll have to see where this goes; I’m not sure any of what I’ve just named is the most strategic boon I can offer, but it’s a start and it seems very likely to quickly steer me in the best direction.
Of course, suggestions are welcome. I’m interested in doing what I can to eradicate the horror of death and exalt a wonderful future, and if that means I need to change course drastically, so be it.
I look forward to working with all of you.

Thank you for reading!