I couldn’t figure out what your point was.

If you don’t mind my asking: why not?

Without knowing your point, it’s hard for me to answer that. It could be unclear writing, or maybe you didn’t have a point in mind at all. Given the downvotes, it’s probably not my failure to read correctly.
“What follows is an as-yet poorly formed notion on my part that I am relating in an attempt to get at the meat of it and perhaps contribute to the higher-order goal of becoming a better rationalist myself.”
“‘acrohumanity’. This is a direct analogue to “posthuman” and “transhuman”; ‘acro-’ being a prefix meaning, essentially, “highest”. So a strictly minimal definition of the term could be “the highest of the humane condition”, or “the pinnacle of humanity”.”
“I believe this is a topic that bears greater investigation and as such am sharing these rambling thoughts with you all. I am hopeful of a greatly productive conversation—for others, and for myself.”
The first quote is where I stated my purpose. The second quote is the notion that purpose references. The third is my reiteration/conclusion.
With these pointed out directly, is there something about them that is difficult to understand, notice, or retain?
“The first quote is where I stated my purpose. The second quote is the notion that purpose references. The third is my reiteration/conclusion.”
The words-to-substance ratio is very bad, especially in the first and third quotes. The middle one feels like it needs to interact in some way with the fun theory sequence. And after reading it, I have no idea what you think acrohumanity is (your definitions include the magical terms “highest” and “pinnacle”).
“With these pointed out directly, is there something about them that is difficult to understand, notice, or retain?”
It’s not clear that there is anything there to be retained. Sorry!
“The middle one feels like it needs to interact in some way with the fun theory sequence.”
Could you elaborate on why you believe this to be the case?
“And after reading it, I have no idea what you think acrohumanity is (your definitions include the magical terms ‘highest’ and ‘pinnacle’).”
I wrote a great deal more in providing a definition of the term than just those two sentences. About a third of the effort invested in the article was in fleshing out that definition. But one must always start somewhere, when introducing a new term. So if it was your goal to introduce the term, how would you start it?
“Could you elaborate on why you believe this to be the case?”
Have you read the fun theory sequence? If you have and think it isn’t relevant, then I misunderstand your point here to a greater degree than I thought. If you haven’t read it then go read it.
“I wrote a great deal more in providing a definition of the term than just those two sentences. About a third of the effort invested in the article was in fleshing out that definition.”
From the next paragraph: “I intentionally refrain from defining what form that optimization takes...”
“But one must always start somewhere, when introducing a new term. So if it was your goal to introduce the term, how would you start it?”
I still don’t understand what you’re trying to say, so I can’t really answer this.
“Have you read the fun theory sequence? If you have and think it isn’t relevant, then I misunderstand your point here to a greater degree than I thought. If you haven’t read it then go read it.”
I haven’t read it deeply. I was hoping to get insight into how you feel it should “interact”. It is entirely plausible that I may incorporate elements of that sequence into the body of lore of acrohumanism. I will note that, from what I myself have seen, there is a categorical difference between “being free to optimize” and having optimization itself as a higher-order goal. (Part of this is possibly a result of my placing a low value on hedonism in general, which seems to be a primary focus of the Fun Theory sequence. I would even go so far as to state that my idea of acrohumanism would have anti-hedonistic results: it takes as a given that one should never be satisfied with where he currently is on his personal optimization track; that he should be permanently dissatisfied.)
“From the next paragraph: ‘I intentionally refrain from defining what form that optimization takes...’”
Indeed. But I also gave several examples of what I meant by the term, and I associated it with other specific notions, namely transhumanism and posthumanism; from that context my meaning should be clear enough.
This is a point, however, on which I freely recognize I am currently weak. I do not, and morally cannot, assert that I am fit to determine what would be universally optimal for all persons. But I do not believe that optimization itself, the augmentation of the self within whatever limits our biological frailties impose, is an impossible topic.
“I still don’t understand what you’re trying to say, so I can’t really answer this.”
Fair enough. Are there any specific points you believe I could clarify?