My name is Scott Starin. I toyed with the idea of using a pseudonym, but I decided that this site is related enough to my real-world persona that I should be safe in claiming my LW persona.
I am a spacecraft dynamics and control expert working for NASA. I am a 35-year-old man married to another man, and we have a year-old daughter. I am an atheist, and in the past held animist and Christian beliefs. I would describe my ethics as rationally guided, with one instinctive impulse toward the basic Christian idea of valuing and respecting one’s neighbor, and another instinctive impulse to mistrust everyone and growl at anyone who looks like they might take my food. Understanding my own humanity and human biases seems a good path toward suppressing the instinctive impulses when they are inappropriate.
I came to this site from an unrelated blog that briefly said something like “Eliezer Yudkowsky is frighteningly intelligent” and linked to this site. So, I came to see for myself. I’ve read through a lot of the sequences. I really enjoyed the Three Worlds Collide story and forced my husband to read it. EY does seem to be intelligent, but I’m signing up because he and the rest of the community seem to shine brightest when new ideas are brought in. I have some ideas that I haven’t seen expressed, so I hope to contribute.
One area where I might contribute draws on my professional interest in managing the catastrophic risk of spacecraft failure, which shares some concepts with the biases associated with existential risk to the human species. Yudkowsky’s book chapter on the topic was really helpful.
Another area is the difference between religious belief and religious practice. The strong tendency of LW community members to reject religious belief may come at the expense of really understanding what powerful emotional, and yet rational, needs may be met by religious practice. This is probably a disservice to those religious readers you have who could benefit from enhanced conversation with LW atheists. Religious communities serve important needs in our society, such as charitable support for the poor or imprisoned and helping loved ones who are in real existential crisis (e.g., terminally ill or suicidal). (Some communities may even produce benefits that outweigh the costs of whatever injury to truth and rationality they may do.) It struck me that a Friendly AI that doesn’t understand these needs may not be feasible, so I thought I should bring it up.
I hope readers will note my ample use of “may” and “might” here. I haven’t come to any firm conclusions, but I have good reasons for my thoughts. (I’ll have to prove that last claim, I know. As a good-faith opener, I do go to a church that has a lot of atheist members—not agnostics, true atheists, like me.) I confess the whole karma thing at this site causes me some anxiety, but I’ve decided to give it a try anyway. I hope we can talk.
(Since I’m identifying myself, I am required by law to say: Nothing I write on this site should be construed as speaking for my employer. I won’t put a disclaimer in every post—that could get annoying—only those where I might reasonably be thought to be speaking for or about my work at NASA.)
Welcome then! Your first idea does sound interesting, and I look forward to hearing about it. Don’t worry too much about karma.
Welcome!
Understanding and overcoming human cognitive biases is, of course, a recurring theme here. So is management of catastrophic (including existential) risks.
Discussions of charity come up from time to time, usually framed as optimization problems. This post gets cited often. We actually had a recent essay contest on efficient charity that might interest you.
The value of religion (as distinct from the value of charity, of community, and so forth) comes up from time to time but rarely goes anywhere useful.
Don’t sweat the karma.
If you don’t mind a personal question: where did you and your husband get married?
We got married in a small town near St. Catharines, Ontario, a few weeks after it became legal there.
Thanks for the charity links. I find practical and aesthetic value in the challenging aspect of “shut up and multiply” (http://lesswrong.com/lw/n3/circular_altruism/), particularly in the example you linked about purchasing charity efficiently. However, it seems to me that oversimplification can occur when we talk about human suffering.
(Please forgive me if the following is rehashing something written earlier.) For example, multiplying a billion people’s one second of suffering each so that it equals a billion consecutive seconds of suffering, and thereby treating it as far worse than a million consecutive seconds—almost 12 straight days—of suffering endured by one person, is just plainly, rationally wrong. One proof of that is that distributing those million seconds as one-second bursts at regular intervals over a person’s life is better than the million consecutive seconds: the person is not unduly hampered by the occasional one-second annoyance, but would probably become unable to function well in the consecutive case, and might be permanently injured (à la PTSD). My point is that something is missing from the equation, and that missing something lies at the heart of the human impulse to be irrational when presented with the same choice framed as comparative gain vs. comparative loss.
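To make the arithmetic concrete, here is a quick back-of-the-envelope sketch; the quantities and the naive comparison are purely illustrative, not anything from the original discussion:

```python
# Back-of-the-envelope comparison of aggregated vs. concentrated suffering.
# The quantities here are illustrative only.

SECONDS_PER_DAY = 24 * 60 * 60  # 86,400

one_person_consecutive = 1_000_000      # a million consecutive seconds, one person
many_people_one_second = 1_000_000_000  # a billion people, one second each

print(one_person_consecutive / SECONDS_PER_DAY)  # ~11.57 days of continuous suffering

# Naive "shut up and multiply": count total person-seconds and nothing else.
naive_distributed = many_people_one_second * 1   # 1e9 person-seconds
naive_concentrated = one_person_consecutive * 1  # 1e6 person-seconds
print(naive_distributed / naive_concentrated)    # 1000.0 -- "a thousand times worse"

# The naive sum ignores the knock-on effects of consecutive suffering
# (inability to function, lasting injury such as PTSD) -- the "something
# missing from the equation" described above.
```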
As you say, a million isolated seconds of suffering isn’t as bad as a million consecutive seconds of suffering, because of (among other things) the knock-on effects of consecutivity (e.g. PTSD). Maybe it’s only 10% as bad, or 1%, or .1%, or .0001%, or whatever. Sure, agreed, of course.
But the moral intuition being challenged by “shut up and multiply” isn’t about that.
If everyone agreed that, sure, N dust-specks were worse than 50 years of torture for some N, and we were merely haggling over the price, the thought experiment would not be interesting. That’s why the thought experiment involves ridiculous numbers like 3^^^3 in the first place, so we can skip over all that.
When we’re trying to make practical decisions about what suffering to alleviate, we care about N, and precision matters. At that point we have to do some serious real-world thinking and measuring and, y’know, work.
But what’s challenging about “shut up and multiply” isn’t the value of N, it’s the existence of N. If we’re starting out with a moral intuition that dust-specks and torture simply aren’t commensurable, and therefore there is no value of N… well, then the work of calculating it is doomed before we start.
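As a minimal sketch of what “the existence of N” means here (the disutility values below are arbitrary placeholders; the only point is that a finite N exists whenever a speck has any positive disutility at all):

```python
import math

# If one dust speck carries any positive disutility, then some finite number
# of specks outweighs the torture. The values are arbitrary placeholders.
torture_disutility = 1.0e12  # 50 years of torture, in made-up units
speck_disutility = 1.0e-9    # one dust speck, in the same made-up units

# Smallest N with N * speck_disutility > torture_disutility
N = math.floor(torture_disutility / speck_disutility) + 1
print(f"{N:e}")  # on the order of 1e+21 -- enormous, but finite

# The practical question ("what is N?") depends entirely on that ratio;
# the moral question is whether any such ratio exists in the first place.
```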
OK, I now understand the way the site works: if someone responds to your comment, it shows up in your mailbox like an e-mail. Sorry for getting that wrong with Vaniver (I responded by private mail), and if I can fix it in a little while, I will (edit: and now I have). Now, to content:
Thanks for responding to me! I didn’t feel I should hijack the welcome thread for something that, for all I knew, had already been thoroughly discussed elsewhere. So I tried to be succinct, failed, and ended up garbled.
First, 3^^^3 is WAY more than a googolplex ;-)
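For anyone who wants to see why, here is a small sketch of Knuth’s up-arrow notation (the definition is standard; the code can only evaluate tiny cases):

```python
def up_arrow(a, n, b):
    """Knuth's up-arrow notation a ↑^n b (n arrows).

    One arrow is exponentiation; each extra arrow iterates the previous one.
    """
    if n == 1:
        return a ** b
    if b == 1:
        return a
    return up_arrow(a, n - 1, up_arrow(a, n, b - 1))

print(up_arrow(3, 2, 2))  # 3^^2 = 3**3 = 27
print(up_arrow(3, 2, 3))  # 3^^3 = 3**27 = 7,625,597,484,987

# 3^^^3 = 3^^(3^^3): a power tower of 3s of height 7,625,597,484,987.
# A googolplex, 10**(10**100), is already smaller than a tower of 3s of
# height five, so 3^^^3 dwarfs it. (Don't call up_arrow(3, 3, 3) -- it
# would never finish.)
```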
Second, I fully recognize the existence of N, and I tried to make that clear in the last substantive statement of my answer to you by recalling the central lesson of “shut up and multiply”: people, when faced with identical situations presented at one time as gain comparisons and at another time as loss comparisons, will fail to recognize that the situations are identical and will choose differently. That is a REALLY useful thing to know about human bias, and I don’t discount it.
I suppose my comment above amounts to a quibble if it’s already understood that EY’s ideas only apply to identical situations presented with different gain/loss values, but I don’t have the impression that’s all he was getting at. Hence, my caveat. If everyone’s already beyond that, feel free to ignore.
I agree that dust-specks and torture are commensurable. If you will allow, a personal story: I have distichiasis. Look it up, it ain’t fun. My oily tear glands, on the insides of my eyelids, produce eyelashes that grow toward my eyes. Every once in a while, one of those (almost invisible, clear—mine rarely have pigment at all) eyelashes grows long enough to brush my eyes. At that instant, I rarely notice, having been inured to the sensation. I only respond when the lash is long enough to wake me up in the middle of the night, and I struggle to pull out the invisible eyelash. Sometimes, rarely, it gets just the right (wrong) length when I’m driving, and I clap my hand over my eye to hold it still until I get home.
If I could reliably relieve myself of this condition in exchange for one full day of hot, stinging torture, I would do so, as long as I could schedule it conveniently, because I could then get LASIK, which distichiasis strictly rules out for me as things stand. I even tried electrolysis, which burned and scarred my eyelids enough that the doctor finally suggested I’d better stop.
So, an individual’s choices about how they will consume their lot of torture can be wide-ranging. I recognize that. These calculations of EY’s do not recognize these differences. Sometimes it makes sense to shut up and multiply. Other times, when it’s available (as it often is), it makes sense to shut up and listen. Because of that inherent fact, the difference between internal perception and others’ external perception of your suffering, we have a really useful built-in intuition: in otherwise equal situations, defer to the judgment of those who will do the suffering. We optimize not over suffering, but over choice. That is our human nature. It may be irrational. But that nature should be addressed, not dismissed as merely a failure to multiply human suffering objectively enough.
Regarding the difference between religious belief and religious practice: this topic interests me quite a bit, and I think it would be well received here if you focus on the practice and ignore the belief. EY has a number of posts that are unabashedly influenced by religious practices.
Vaniver, I thought the message from you in my mailbox was private, so I responded privately. But it was actually a copy of this public posting; I’ve got the hang of it now. I cannot, however, figure out how to recover the private response I sent you and post it here as a public reply. Feel free to do so if you like!
When you’re in your messages, there’s a button labeled “sent” in the grey tab, in the upper left.
Thanks, js. Here is my response to Vaniver, responding to the “initiation_ceremony” link, mundane as it may be:
The initiation sequence was funny. And very Agatha Christie, revealing the critical piece of information just as Poirot solves the mystery! 11/16. Would they have let him in?