Hmmmm. What do other people think of this idea?
I suspect one reason Eliezer did not do this is that when you make a long list of claims without any justification for them, it sounds silly and people don’t pay attention to the rest of the sequence. But if you had first stepped them through the entire argument, they would have found no place at which they could really disagree. That’s a concern, anyway.
I endorse the summary idea. It will help me decide whether and how carefully to read your posts.
I would like to know what you think; depending on what it is, I may or may not be interested in the details of why you think it. For example, I’d rather not spend lots of time reading detailed arguments for a position I already find obvious, in a subject (like metaethics) that I have comparatively little interest in. On the other hand, if your position is something I find counterintuitive, then I may be interested in reading your arguments carefully, to see if I need to update my beliefs.
This is Less Wrong, and you have 5-digit karma. We’re not going to ignore your arguments because your conclusion sounds silly.
Furthermore, you don’t necessarily have to post the summary prominently, if you really don’t want to. You could bury it in these comments right here, for example.
My reaction to this idea depends a lot on how the sequence gets written.
If at every step along the way you appear to be heading towards a known goal, I’m happy to go along for the ride.
If you start to sound like you’re wandering, or have gotten lost in the weeds, or make key assertions I reject and don’t dispose of my objections, or I otherwise lose faith that you know where you’re going, then having a roadmap becomes important.
Also, if your up-front list of claims turns out to be a bunch of stuff I think is unremarkable, I’ll be less interested in reading the posts.
OTOH, if I follow you through the entire argument for an endpoint I think is unremarkable, I’ll still be less interested, but it would be too late to act on that basis.
I would vote against the summary idea. In general, I like it better if a writer starts off with observations, builds their way up with chains of reasoning, and gives the reader everything they need to draw the author’s conclusion, as opposed to telling the reader what position they hold and then providing arguments for it. In terms of rationality, it’s probably better to build to your conclusion.
In addition, if you are proposing anything controversial, posting a summary will spark debates before you have given the requisite background knowledge.
Agreed on all counts. Plus it would just feel like a spoiler, knowing that there was supposed to be a lot building up to it.
(Maybe, to get the best of both options, Luke could post the summary in Discussion, marking it as containing philospoilers; that way people can read through the sequence unspoiled if they prefer, while those who want to see a summary in advance can do so, and discuss and inquire about it, with the understanding that “That question/argument will be answered/addressed in the sequence” will always be an acceptable response.)
I am more motivated to read the rest of your sequence if the summary sounds silly than if I can easily see the arguments myself.
Agreed. I know from experience how hard it is to convince someone to change their position on metaethics. The reason is that if you post any specific example or claim that people disagree with, they will then look for reasons to reject your metaethics on the basis of that disagreement alone. Posting only abstract principles prevents this. It’s the exact same motivation as behind “politics is the mind-killer,” or any other topic that is both complex and something people feel strongly about (ideal breeding grounds for motivated reasoning).
Nonetheless, I would be very interested in seeing such a list.
I disagree with the grandparent and endorse not giving a summary.
I vote for writing a summary, and including it with the last post of the sequence. That way, extra-skeptical people can wait until the sequence has been posted in its entirety before deciding to read it based on the summary, without losing much expected value.
There will without a doubt at least be a summary toward the end of the sequence.
I think in practice what would happen is that the skeptical people would disagree on each post, and then, when presented with the summary, would be compelled to disagree with it in order to remain consistent.
You’re right; that sounds like a likely failure mode, unless skeptics could proactively choose to hide the sequence until they had read the summary, which the current LW codebase doesn’t support.
Why doesn’t that apply to abstracts?
That’s sort of like reading the end of a novel before you buy it. If you do include a summary, please announce what you’re doing and make it something we can skip.
Novels are meant to be entertaining. Luke’s metaethics post(s) would be meant to be useful, so the analogy isn’t valid. Even so, novels frequently have a summary on the inside flap of the dust cover. I hope to see the summary.
“I suspect one reason Eliezer did not do this is that when you make a long list of claims without any justification for them, it sounds silly and people don’t pay attention to the rest of the sequence.”

Yeah, I am still waiting for someone to thoroughly cite all the relevant science to back up AI going FOOM.