For those of you downvoting, assuming any of you are still seeing this page:
Please use this space as an opportunity to list your reasons why.
I don’t understand what the point of the post is, apart from defining a new word. I don’t see any interesting insight, and I don’t want to see more similar posts.
I see. Thank you for your candor.
I always downvote stream-of-consciousness posts. If you do not respect your audience enough to put effort into writing something readable (clear, concise, catchy), you deserve a downvote.
To clarify for my understanding: you disliked my writing style (as you describe it, “stream-of-consciousness”), and feel that it was not ‘readable’ because of this—yes?
Always, ALWAYS use your opening paragraph to clearly state your main point.
If you feel you cannot adequately do that, chances are you do not know what your main point is. In that case, do not post; work on your draft until you do know.
I have tried to give an example by extracting the main point of your post from the mud that it is, but, unfortunately, came up empty. Well, almost empty; there was one definition I found:
“I describe acrohumanity as that state of achieving the maximum optimization of the human condition and capabilities by an arbitrary person that are available to that arbitrary person.” Naive though it might be, at least it is a core you can form your opening paragraph around.
Well, my goal quite frankly was to foster conversation about the concept so as to improve the concept itself. I’ll have to think more on how to target that to the LW audience a little better, as it is becoming clearer to me over time that my patterns of thinking on various topics do not fall in line with the folks here.
Thank you.
This looks like a classic example of “sour grapes”, an attempt to resolve your cognitive dissonance.
Flesch Reading Ease score of 10. There are some articles for which that level of effort would be worth it; this did not seem to be one of them.
Interesting. I wonder if there’s a relatively easy way to derive the score of the average LW article.
I have checked a few popular LW posts using the online Readability Calculator and they all came up in the 60-70 range, meaning “easily understandable by 13- to 15-year-old students”. This seems like an exaggeration, but still a vast improvement over the score of 23 for your post (“best understood by university graduates”).
I wonder if the LW post editor could use an “Estimate Readability” button.
Using a different calculator, I found that the ten highest-scoring articles on LessWrong averaged a score of 37, range 27-46. That suggests that there’s a fair bit of variance between scoring methods, but if we could find a consistent method, an “Estimate Readability” button in the post editor could be interesting.
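For what it’s worth, the core formula is simple enough that a rough estimator fits in a few lines. Here is a minimal sketch in Python (my own illustration, not anything from the LW codebase; it approximates syllables by counting vowel groups, so its numbers will drift a bit from any given online calculator):

    import re

    def flesch_reading_ease(text):
        # Flesch Reading Ease: 206.835 - 1.015*(words/sentences) - 84.6*(syllables/words).
        # Syllables are approximated as vowel groups, so treat the result as a rough estimate.
        sentences = max(1, len(re.findall(r"[.!?]+", text)))
        words = re.findall(r"[A-Za-z']+", text)
        if not words:
            return 0.0
        syllables = sum(max(1, len(re.findall(r"[aeiouyAEIOUY]+", w))) for w in words)
        return 206.835 - 1.015 * (len(words) / sentences) - 84.6 * (syllables / len(words))

    print(round(flesch_reading_ease("The cat sat on the mat. It was happy."), 1))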
I’m contemplating using some wget trickery to get a larger sampling-size.
Don’t contemplate, just do it!
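If anyone does want to script the larger sample, here is a rough sketch of the fetch-and-score loop (Python rather than wget; the URLs below are placeholders to substitute, and it assumes the third-party textstat package for the scoring itself):

    import re
    import urllib.request

    import textstat  # third-party: pip install textstat

    # Placeholder URLs; substitute whatever sample of posts you want to score.
    URLS = [
        "https://example.com/post-1",
        "https://example.com/post-2",
    ]

    def strip_tags(html):
        # Crude tag stripping; a real scrape would want a proper HTML parser.
        return re.sub(r"<[^>]+>", " ", html)

    for url in URLS:
        with urllib.request.urlopen(url) as resp:
            text = strip_tags(resp.read().decode("utf-8", errors="ignore"))
        print(url, round(textstat.flesch_reading_ease(text), 1))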
I second (third?) the suggestion of a readability estimator; I need it. I have a tendency toward excessively long sentences.
Another comparison: The Simple Truth has a Flesch Reading Ease of 69.51, and supposedly needs only 8.51 years of education to read.
That seems to illustrate a potential shortcoming of the Readability Estimator, though. The Simple Truth doesn’t use as much sophisticated vocabulary as many posts on Less Wrong (it seems that posts are penalized heavily for multisyllabic words), but it is a fair bit harder to understand than it is to read.
I didn’t really get it (if by ‘get it’ you mean ‘see why Eliezer wrote it, and what questions it was intended to answer’) until I’d read most of the rest of the site.
In short, it seems like a decent measure of writing clarity, but it’s not a measure of inferential distance at all.
Very true. The reason I picked The Simple Truth for an example is that I thought it did a good job of explaining a hard idea in simple language. The idea was still hard to get, but the writing made it much easier than it could have been.
Yeah, polysyllabicity gets a bad rap ’round some parts.
I couldn’t figure out what your point was.
If you don’t mind my asking: why not?
Without knowing your point, it’s hard for me to answer that. It could be unclear writing, or maybe you didn’t have a point in mind at all. Given the downvotes, it’s probably not my failure to read correctly.
“What follows is an as-yet poorly formed notion on my part that I am relating in an attempt to get at the meat of it and perhaps contribute to the higher-order goal of becoming a better rationalist myself.”
“‘acrohumanity’. This is a direct analogue to ‘posthuman’ and ‘transhuman’; ‘acro-’ being a prefix meaning, essentially, ‘highest’. So a strictly minimal definition of the term could be ‘the highest of the humane condition’, or ‘the pinnacle of humanity’.”
“I believe this is a topic that bears greater investigation and as such am sharing these rambling thoughts with you all. I am hopeful of a greatly productive conversation—for others, and for myself.”
The first quote is where I stated my purpose. The second quote is the notion that purpose references. The third is my reiteration/conclusion.
With these pointed out directly, is there something about them that is difficult to understand, notice, or retain?
The words-to-substance ratio is very bad, especially in the first and third quotes. The middle one feels like it needs to interact in some way with the fun theory sequence. And after reading it, I have no idea what you think acrohumanity is (your definitions include the magical terms “highest” and “pinnacle”).
It’s not clear that there is anything there to be retained. Sorry!
Could you elaborate on why you believe the middle quote needs to interact with the fun theory sequence?
I wrote a great deal more in providing a definition of the term than just those two sentences; about a third of the effort invested in the article went into fleshing out that definition. But one must always start somewhere when introducing a new term. If it were your goal to introduce the term, how would you start?
Have you read the fun theory sequence? If you have and think it isn’t relevant, then I misunderstand your point here to a greater degree than I thought. If you haven’t read it then go read it.
From the next paragraph: “I intentionally refrain from defining what form that optimization takes...”
I still don’t understand what you’re trying to say, so I can’t really answer this.
I haven’t read it deeply. I was hoping to get some insight into how you feel it should “interact”. It is entirely plausible that I will incorporate elements of that sequence into the body of lore of acrohumanism. I will note that, from what I have seen, there is a categorical difference between “being free to optimize” and having optimization itself as a higher-order goal. (Part of this may result from my placing a low value on hedonism in general, which seems to be a primary focus of the Fun Theory sequence. I would even go so far as to say that my idea of acrohumanism would have anti-hedonistic results: it takes as a given that one should never be satisfied with where he currently is on his personal optimization track; that he should be permanently dissatisfied.)
Indeed. But I also gave several examples of what I meant by the term, and I associated it with other specific notions, transhumanism and posthumanism; from that context my meaning should be clear enough.
This is a point, however, that I freely recognize I am currently weak on. I do not, and morally cannot, assert that I am fit to determine what would be universally optimal for all persons. But I do not believe that optimization itself, augmentation of the self within whatever tolerances our biological frailties allow, is an impossible topic.
Fair enough. Are there any specific points you believe I could clarify?
The first three paragraphs seemed to me devoid of useful content, and, after skimming the post, I was left with a feeling of “So what?” and that it wasn’t worth rereading more carefully.
Acknowledge: Initial skimming rather than reading was likely influenced by the number of downvotes already on the post.
If I had simply begun with a brief sentence asking for an open dialogue and then jumped into the definition of the term, do you believe—currently—that this might have altered your opinion of the idea of discussing it?
I think that I would have still downvoted it for leaving me with a ‘So what?’ feeling, but I feel that reducing the length would have made me happier.
NMDV, but it is long-winded, coins unnecessary neologisms, and doesn’t contain much of anything new to Less Wrong. There is something squicky about the tone, too.
(Nothing personal/you asked).
I did, and have upvoted you for your cooperation.
I am unfamiliar with this acronym. Elucidate me?
Point of order: what are you considering a neologism? The only term(s) I coined to my knowledge are acrohuman and its associated variations.
Is there any chance you could elaborate on this?
NMDV is “not my down vote”. I didn’t down vote you, I’m just guessing about those who did.
That’s the term I’m talking about.
With regard to the squickiness, that’s always hard to articulate. I think it has to do with using a really authoritative and academic tone without authoritative and academic content; it sort of pattern-matches to bad philosophy and pseudoscience.
Hrm. One of the things I’ve struggled with, and the reason I bothered with the term at all, is that there really isn’t, to my knowledge, an existing term that encapsulates “a person with an agenda of maximally optimizing his own experience of the human condition to within the limits of what is possible”, or the state of being so “optimized”. If I might ask: why do you feel that it was unnecessary? Are you familiar with a term that already carries this meaning?
That’s strange… I honestly thought I was doing the opposite of this; I was, I thought, careful to make clear that I was solely relating my own opinion, with the intention of introducing the topics in question for dialectical examination by, well, all of you.