Introduction: “Acrohumanity”
Greetings, fellow LessWrongians.
What follows is an as-yet poorly formed notion that I am relating in an attempt to get at the meat of it, and perhaps to contribute to the higher-order goal of becoming a better rationalist myself. As such, I will attempt to restrict my responses in the comments to explanations of points of fact, or of my own opinions if directly requested, and will otherwise not argue any particulars for purposes of persuasion.
For a few years now a general notion (what originally led me to discover the LessWrong site itself, in fact) has rattled around in my brain, for which I only today derived a sufficiently satisfactory label: “acrohumanity”. This is a direct analogue to “posthuman” and “transhuman”; ‘acro-’ being a prefix meaning, essentially, “highest”. So a strictly minimal definition of the term could be “the highest of the human condition”, or “the pinnacle of humanity”.
In brief, I describe acrohumanity as that state of achieving the maximum optimization of the human condition and capabilities *by* an arbitrary person that are available *to* that arbitrary person. I intentionally refrain from defining what form that optimization takes; my own intuitions and opinions on the topic, as a life-long transhumanist and currently aspiring rationalist, tend towards mental conditioning and the improvement of thought, memory, and perception. “Acrohumanism”, then, is the belief in, practice of, and advocacy of achieving or approaching acrohumanity, in much the same way that transhumanism is the belief in or advocacy of achieving transhuman conditions. (In fact, I tend to associate the two terms, at least personally; what interests me *most* about transhumanism is achieving greater capacity for thought, recollection, and awareness than is humanly possible today.)
Instrumental rationality is thus a core component of any approach to the acrohuman condition or state. But while it is a requirement, focusing on one’s capabilities as a rationalist is not sufficient in and of itself; there are other avenues of self-optimization that also bear investigation. The simplest and most widely practiced of these is physical exercise, which does little to nothing to advance the goal of becoming a better rationalist; but if one’s goal is to optimize oneself in general to the limits available, exercise is just as key as instrumental rationality.

Additional examples of a more cognitive nature include techniques for improving recollection. Mnemotechnics has existed long enough that many cultures developed their own variants of it before they even developed a written language. It occurs to me that developing mnemotechnical skill would be convergent with becoming a better rationalist, by making it easier to recall the various biases and heuristics we utilize in a broader array of contexts. Still another, also cognitive in nature, would be developing skill and practice in meditative reflection. While there is a lot of what Michael Shermer calls “woo” around meditation, the simple truth is that it is an effective tool for metacognition. My own history with meditative practice originated in my early teens with martial arts training, which I later extended into basic biofeedback as a way of coping with chronic pain. I quickly found that the same skill-level needed for success in that arena had a wide array of applications, from coping with various stimuli to handling other physiological symptoms or indulging specific senses.
Taken as an aggregate, an individual with strong skill in biofeedback, a history of rigorous exercise and physical health, skill and knowledge of instrumental rationality, mnemotechnics, metacognition, and, through metacognition, strong influence over his own emotional states (note: as I am male, I am in the habit of using masculine pronouns as gender-neutrals) is an individual relatively far from my personal image of the baseline ‘average human’. And there are surely other techniques or skillsets one might add to this ‘grab-bag’ of tools for improving one’s overall capabilities as a person, none of which individually exceeds what is humanly possible, but which impressively approach those limits when taken as a whole.
I believe this is a topic that bears greater investigation and as such am sharing these rambling thoughts with you all. I am hopeful of a greatly productive conversation—for others, and for myself.
There are a lot of applause lights.
This is not true.
It is my understanding that the term “applause lights” refers to statements which are meant to be approved of but otherwise contain little substance. Would you care to enlighten me as to where these ‘applause lights’ are in what I wrote?
Exercise does prevent decline in cognition into senescence, yes. I am unfamiliar with any work that associates higher cognitive function directly with improved rationality, however. Could you provide me links to such work?
Indirect can still be important.
Some examples of applause lights:
This tells us nothing new: there was even a competition to get more information.
Summary: Someone who is above average is above average, but they are probably not “perfect” yet. So what? This isn’t a new idea to LW; there is at least one sequence directly focusing on similar ideas.
As dlthomas said, indirect is still important. I don’t know of such work off the top of my head, but there is certainly anecdotal evidence for it, e.g. the readership of LW, people like Eliezer and Luke, etc.
Would you mind providing your working definition of what an “applause light” is? I’m struck by the observation that originality is not synonymous with substance; and I also have to ask—does a claim have to be controversial to be a worthwhile contribution to a topic?
So: a person who is otherwise average but has mastered the skills mentioned would be no longer average. The passage you summarized was itself a summary, meant to demonstrate the usefulness of the topic.
I’m not aware that I ever said that what I was describing was strongly original.
I can understand how it would be easy to note the superficial resemblance to self-help strategies and claim that was all that was being done here; but there is a fundamental difference in orientation between strategies meant to mitigate flaws and strategies meant to introduce new capabilities or improve upon them. I can see that this is something I shall have to devote more time to clarifying.
It can be, yes.
The reason you don’t know of any such work, to be quite frank, is that it doesn’t exist. There is as yet no sufficiently quantified definition of rationality to allow such testing to be conducted. That being said, would you be willing to allow the validity of the following claim? “There are very intelligent people who are very irrational.”
Anything that doesn’t add anything to the discussion in its context, and just appears to be an effort to gain validity in a community.
As far as I can tell, the only original idea in the post is the word “acrohumanism” itself. The rest seems to be a conglomeration of several LW memes.
I’m willing to cede this point. But, you seem to have not applied this argument to your own ideas:
There are lots of people with very good/eidetic memory who aren’t good rationalists.
Yes, of course. That wasn’t under debate. Do you do the same for: “There are unintelligent people who are very irrational.”?
I’m isolating this to point it out more clearly: will you acknowledge that this is a sufficiently inaccurate quotation of me as to represent an example of the cherry-picking fallacy?
Possibly, but it wasn’t intentional. Sorry.
But, what you did say was along the lines of “We can get better memory, so we can remember more rationalist things like biases and heuristics! Gee, wouldn’t that be nice!”. (It sounded like an applause light to me.)
I see. Where, exactly, are you getting the “Gee, wouldn’t that be nice!” element from?
You didn’t say why just remembering biases and heuristics would be all that useful.
EDIT: And this is where hyperlinks and the massive number of other articles on LW nicely work together.
I see. Well, any future writings I submit will have to flesh that element out, then. I had assumed the positive utility of easier recall of those categories of items would be obvious.
I didn’t say it wasn’t obvious. I meant that it seemed like you tacked it on the end to superficially validate mnemotechnics by using LW “buzz words”.
I can definitively tell you that this impression is not valid. If I had meant to target the jargon/patois of the LessWrong community, it would have been obvious.
I included the ease of recollection of pre-developed heuristics and known cognitive biases as an example of how optimization approaches can be convergent. This was the sum of my intent.
I see. Then I remain unable to reconcile your provided definition with the items you assert to be applause lights.
May I also add that there are now two separate instances where you have ‘dropped’ relevant portions of a statement when characterizing things I have said. First, in the notion that improving recollection would qualify as an acrohuman agenda (which was followed by a discussion of a specific vehicle to achieve that end, a fact you ‘missed’); and now again with the notion that developing strong skill with the ars memorativa would be convergent, for an acrohumanist, with becoming a better rationalist (you leave out the part where I described using that skill to increase the efficacy of recalling various biases and heuristics).
I am curious as to why you feel it is a good idea to so misrepresent my statements. Could you explain?
Providing a label to a specific memeplex can in and of itself be a useful activity.
While many of these ideas are certainly present on LW, I do not know that they originated on LW.
I did not claim that what I was doing was creating a revolutionary concept or approach. I was providing a name to a thought I had already had, and opening the floor to discussion of said thought in order to possibly refine it.
Meditation is a controversial topic on LW. My suggestion of investigating it from a purely rationalistic perspective (that is, stripped of all religious associations) for its utility-enhancing functions is not something I have seen discussed in any top-level LW posting.
To my knowledge (and Google searching) my post is the first and only mention of the term “mnemotechnics” on LessWrong.
I would not have submitted this article to LW if I did not believe it would be agreed with, in general and in its specifics, so it should come as no surprise that much of it seems very familiar. It is my belief that the introduced term “acrohumanism” is a worthwhile contribution: a beginning of the formalized systemization of the various odds and ends many LWers are familiar with, towards that nominal end-goal.
Certainly. Which is why I didn’t stop that assertion where you stopped it.
Are you sure this is true?
Absolutely.
Yes, neither were very good examples.
I sincerely apologise. I was not trying to misrepresent you; I find most of what you say unsurprising (and agree with it), so the bits I quoted were what I took to be the relevant parts, and the rest seemed “obvious”.
Thanks for introducing it to us. I believe that LW has considered memory-enhancement techniques even without using the term.
Yes! Definitely! But you don’t need a 750-word post to define a new word, especially when the post is on a website whose central tenets already comprise that memeplex. For example (this is a rewrite/summary of your post):
(And I would have some more detail on why inventing a new word was necessary (e.g. how it is different to “rationalist”, or “transhumanist”))
Yep, the existence of at least one intelligent person who is also irrational was not under debate.
I had a strong suspicion this was the case, which is why I have made several statements to that effect. Perhaps I failed to sufficiently convey that what I was contributing wasn’t meant to be “un-obvious” in any specific detail, but rather to be a new manner of thinking about those items.
In other words, I was hoping to introduce a new higher-order goal, and maybe kick off organized efforts towards establishing goals in alignment with it.
… I can see how you would find that to be valid, but I do have a few thoughts here:
1. The notion of acrohumanism, though not by this name, predates my knowledge of the LessWrong community and most assuredly is not limited to it. Where LW agrees with its agenda(s), my history of thought and LW are in agreement.
2. The rephrasing of “reaching the upper bounds of the human condition” to “try to be the best possible person you can be” robs significant meaning from the statement, related to directional focus. Even with the added caveat of “i.e., optimise every aspect of themselves”, I’m not sure the idea retains my intended message. Certainly, a Google search for “be the best person you can be” returns seven million hits, whereas “reach the upper bounds of the human condition” returns zero; this speaks to the dilution of the message. While they appear superficially related, the “best person” formulation implies a minimization of flaws, whereas the “upper bounds” formulation implies a maximization of utilities.
2.a. -- Corollary: this touches on why I am leery of defining what the optimal human condition even is: it becomes a moral dilemma.
2.b. -- There is a secondary implication in ‘reach the upper bounds of the human condition’: that this is not the normal state for an arbitrary human being. So, as trite as it seems, I’m discussing the idea of being “more human than human”.
I’m sure. But I also wanted to do more than merely provide the definition; I wanted to give practical examples and encourage the discussion of other practical skills or “tools”. Which is why I discussed examples of such skills and their potential benefits.
Definitely so. But it’s mostly instances of re-inventing the wheel, taking a rather grab-bag, haphazard approach, with no ideal model of the end-goal such efforts tend towards aside from (as far as I can discern) “becoming a better rationalist”.
Given the current state of the dialogue, I think it’s time that I start putting an eye towards a potential future re-submission of this notion for consideration. I’ve gotten some good materials from you to digest towards that end, even if I did fail at sparking the dialogue I had hoped for.
Make that substitution then (as far as I can tell, what you mean by your phrase is identical to what I meant by mine).
I think this post could do with some editing to make this clear, and “encouraging discussion” of several different specific skills and tools with a long monolithic post is unlikely to work very well. A smaller discussion post with something like (for example) “I know we know that SRS seems to work, but does anyone know anything about these other mnemotechnic techniques?” might work better.
(I probably haven’t conveyed this very well, but I don’t dislike most of your post (I quite like the word “acrohumanism”, for example))
Yeah, that’s going to be something of a problem. I’m not sure how to properly emphasize the focus on not only being the best you can be, but on extending the range of the “bestness” you can have to the upper limits of what is humanly possible.
If it helps any, a different perspective would be to consider this a “pure software” approach to transhumanism.
I can hardly fault someone else for poorly conveying themselves when that’s the majority of what got me in ‘trouble’ with this post in the first place. Thank you for the affirmation of the term, at least! :)
For those of you downvoting, assuming any of you are still seeing this page:
Please use this space as an opportunity to list your reasons why.
I don’t understand what the point of the post is, apart from defining a new word. I don’t see any interesting insight. I don’t want to see more similar posts.
I see. Thank you for your candor.
I always downvote stream-of-consciousness posts. If you do not respect your audience enough to put an effort into writing something readable (clear, concise, catchy), you deserve a downvote.
To clarify for my understanding: you disliked my writing style (as you describe it, “stream-of-consciousness”), and feel that it was not ‘readable’ because of this—yes?
Always, ALWAYS use your opening paragraph to clearly state your main point.
If you feel you cannot adequately do that, chances are you do not know what your main point is. In that case, do not post, work on your draft until you do know.
I have tried to give an example by extracting the main point of your post from the mud that it is, but, unfortunately, came up empty. Well, almost empty; there was one definition I found:
“I describe acrohumanity as that state of achieving the maximum optimization of the human condition and capabilities by an arbitrary person that are available to that arbitrary person.” Naive though it might be, at least it is a core you can form your opening paragraph around.
Well, my goal quite frankly was to foster conversation about the concept so as to improve the concept itself. I’ll have to think more on how to target that to the LW audience a little better, as it is becoming clearer to me over time that my patterns of thinking on various topics do not fall in line with the folks here.
Thank you.
This looks like a classic example of “sour grapes”, an attempt to resolve your cognitive dissonance.
Flesch reading ease score of 10. There are some articles for which that level of effort would be worth it; this did not seem to be one of them.
Interesting. I wonder if there’s a relatively easy way to derive the score of the average LW article.
I have checked a few popular LW posts using the online Readability Calculator and they all came up in the 60-70 range, meaning “easily understandable by 13- to 15-year-old students”. This seems like an exaggeration, but still a vast improvement over the score of 23 for your post (“best understood by university graduates”).
I wonder if the LW post editor could use a button “Estimate Readability”.
Using a different calculator I found that the ten highest-scoring articles on LessWrong averaged a score of 37, range 27-46. That suggests there’s a fair bit of variance between scoring methods, but if we could find a consistent method, an “Estimate Readability” button in the post editor could be interesting.
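For what it’s worth, the Flesch reading ease formula itself is simple; most of the variance between calculators comes from how they count sentences and syllables. Here is a minimal sketch in Python, assuming a naive vowel-group syllable counter (the function names are mine, not from any particular calculator):

```python
import re

def count_syllables(word):
    # Naive heuristic: count runs of consecutive vowels,
    # discounting one for a trailing silent 'e'.
    word = word.lower()
    groups = re.findall(r"[aeiouy]+", word)
    if word.endswith("e") and len(groups) > 1:
        return len(groups) - 1
    return max(1, len(groups))

def flesch_reading_ease(text):
    # Flesch reading ease:
    #   206.835 - 1.015*(words/sentences) - 84.6*(syllables/words)
    # Higher scores are easier; 60-70 is roughly "plain English".
    sentences = max(1, len(re.findall(r"[.!?]+", text)))
    words = re.findall(r"[A-Za-z']+", text)
    if not words:
        return 0.0
    syllables = sum(count_syllables(w) for w in words)
    return (206.835
            - 1.015 * (len(words) / sentences)
            - 84.6 * (syllables / len(words)))

print(flesch_reading_ease("The cat sat on the mat. It was happy."))
```

Even a toy counter like this will disagree with published calculators by several points, which is consistent with the variance between scoring methods noted above.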
I’m contemplating using some wget trickery to get a larger sample size.
Don’t contemplate, just do it!
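If it helps, here is a rough sketch of what that sampling might look like, using Python’s standard library in place of wget. The URLs are placeholders (a real run would substitute actual LW post URLs), the tag-stripping regex is only a crude stand-in for proper HTML parsing, and flesch_reading_ease is the function from the sketch above:

```python
import re
import urllib.request

# Placeholder URLs; substitute a real list of LW post URLs.
urls = [
    "https://example.com/post-1",
    "https://example.com/post-2",
]

scores = []
for url in urls:
    with urllib.request.urlopen(url) as resp:
        html = resp.read().decode("utf-8", errors="replace")
    # Crude tag stripping; a real script would use an HTML parser
    # and extract only the post body, not comments or navigation.
    text = re.sub(r"<[^>]+>", " ", html)
    scores.append(flesch_reading_ease(text))  # defined in the sketch above

print("mean score:", sum(scores) / len(scores))
```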
I second (third?) the suggestion of a readability estimator; I need it. I have a tendency toward excessively long sentences.
Another comparison: The Simple Truth has a Flesch Reading Ease of 69.51, and supposedly needs only 8.51 years of education to read.
That seems to illustrate a potential shortcoming of the Readability Estimator, though. The Simple Truth doesn’t use as much sophisticated vocabulary as many posts on Less Wrong (it seems that posts are penalized heavily for multisyllabic words), but it is a fair bit harder to understand than to read.
I didn’t really get it (if by ‘get it’ you mean ‘see why Eliezer wrote it, and what questions it was intended to answer’) until I’d read most of the rest of the site.
In short, it seems like a decent measure of writing clarity, but it’s not a measure of inferential distance at all.
Very true. The reason I picked The Simple Truth for an example is that I thought it did a good job of explaining a hard idea in simple language. The idea was still hard to get, but the writing made it much easier than it could have been.
Yeah, polysyllabicity gets a bad rap ’round some parts.
I couldn’t figure out what your point was.
If you don’t mind my asking: why not?
Without knowing your point, it’s hard for me to answer that. It could be unclear writing, or maybe you didn’t have a point in mind at all. Given the downvotes, it’s probably not my failure to read correctly.
“What follows is an as-yet poorly formed notion that I am relating in an attempt to get at the meat of it, and perhaps to contribute to the higher-order goal of becoming a better rationalist myself.”
“…‘acrohumanity’. This is a direct analogue to ‘posthuman’ and ‘transhuman’; ‘acro-’ being a prefix meaning, essentially, ‘highest’. So a strictly minimal definition of the term could be ‘the highest of the human condition’, or ‘the pinnacle of humanity’.”
“I believe this is a topic that bears greater investigation and as such am sharing these rambling thoughts with you all. I am hopeful of a greatly productive conversation—for others, and for myself.”
The first quote is where I stated my purpose. The second quote is the notion that purpose references. The third is my reiteration/conclusion.
With these pointed out directly, is there something about them that is difficult to understand, notice, or retain?
The words-to-substance ratio is very bad, especially in the first and third quotes. The middle one feels like it needs to interact in some way with the fun theory sequence. And after reading it, I have no idea what you think acrohumanity is (your definitions include the magical terms “highest” and “pinnacle”).
It’s not clear that there is anything there to be retained. Sorry!
Could you elaborate on why you believe this to be the case?
I wrote a great deal more in providing a definition of the term than just those two sentences. About a third of the effort invested in the article was in fleshing out that definition. But one must always start somewhere, when introducing a new term. So if it was your goal to introduce the term, how would you start it?
Have you read the fun theory sequence? If you have and think it isn’t relevant, then I misunderstand your point here to a greater degree than I thought. If you haven’t read it then go read it.
From the next paragraph: “I intentionally refrain from defining what form that optimization takes...”
I still don’t understand what you’re trying to say, so I can’t really answer this.
I haven’t read it deeply. I was hoping to get insight as to how you feel it should “interact”. It is entirely plausible that I may incorporate elements of said sequence into the body of lore of acrohumanism. I will note that, from what I have seen, there is a categorical difference between “being free to optimize” and having optimization itself as a higher-order goal. (Part of this possibly results from my placing a low value on hedonism in general, which seems to be a primary focus of the Fun Theory sequence. I would even go so far as to state that my idea of acrohumanism would have anti-hedonistic results: it takes as a given the notion that one should never be satisfied with where he currently is on his personal optimization track; that he should be permanently dissatisfied.)
Indeed. But I also gave several examples of what I meant by the term, and I associated it with other specific notions, transhumanism and posthumanism; from this context my meaning should be obvious enough.
This is a point, however, on which I freely recognize I am currently weak. I do not, and morally cannot, assert that I am fit to determine what would be universally optimal for all persons. But I do not believe that optimization itself (augmentation of the self to within whatever limits our biological frailties impose) is an impossible topic.
Fair enough. Are there any specific points you believe I could clarify?
The first three paragraphs seemed to me devoid of useful content, and, after skimming the post, I was left with a feeling of “So what?” and that it wasn’t worth rereading more carefully.
I acknowledge that my initial skimming rather than reading was likely influenced by the number of downvotes already on the post.
If I had simply begun with a brief sentence asking for an open dialogue and then jumped into the definition of the term, do you believe—currently—that this might have altered your opinion of the idea of discussing it?
I think that I would have still downvoted it for leaving me with a ‘So what?’ feeling, but reducing the length would have made me happier.
NMDV, but it is long-winded, coins unnecessary neologisms, and doesn’t contain much of anything new to Less Wrong. There is something squicky about the tone, too.
(Nothing personal/you asked).
I did, and have upvoted you for your cooperation.
I am unfamiliar with this acronym. Elucidate me?
Point of order: what are you considering a neologism? The only term(s) I coined to my knowledge are acrohuman and its associated variations.
Is there any chance you could elaborate on this?
NMDV is “not my down vote”. I didn’t down vote you, I’m just guessing about those who did.
That’s the term I’m talking about.
With regard to the squickiness, that’s always hard to articulate. I think it has to do with using a really authoritative and academic tone without authoritative and academic content; it sort of pattern-matches to bad philosophy and pseudoscience.
Hrm. One of the things I’ve struggled with, and the reason I bothered coining it at all, is that there really isn’t, to my knowledge, an existing term that encapsulates the meaning of “a person with an agenda of maximally optimizing his own experience of the human condition to within the limits of what is possible”, or the state of being so “optimized”. If I might ask: why do you feel it was unnecessary? Are you familiar with a term that already carries this meaning?
That’s strange… I honestly thought I was doing the opposite of this; I was, I thought, careful to elaborate that I was solely relating my own opinion, with the intention of introducing the topics in question for dialectical examination by, well, all of you.
This post could be clearer and more concise. I do agree that if there is such a thing as the best I can be, it would be nice to be it, especially if we’re defining “best” in terms of how nice it is.
Well, yes. In future visitations of this concept I hope to be capable of achieving both of those goals. I did say I’m early in the rough-draft stages of the thought.
I would advise being very careful of any definition of optimal states which is vulnerable to wire-heading. I personally am not an adherent of hedonism in general. That being said:
My specific reason for avoiding defining ‘best’ was a moral one: I do not believe myself fit to decide what the maximally optimal state is for anyone other than myself. I did attempt to provide substance towards what that state would be for me. There is very little in the way of literature that describes what, precisely, it means to be possessed of “humanity”; what specific qualities that state is describing.
The notion of the human condition is a very nebulous one, and the idea of intentionally altering it in any way seems, from my observations, very insular to the transhumanist movement (of which I am, admittedly, a member). This informs my notions pretty heavily. The whole concept I’m espousing here is essentially an answer to a dilemma pretty much all contemporary transhumanists face: we desire to be ‘improved’, but lack the means of improvement. For example, I currently take modafinil, knowing full well that there are no measurable cognitive improvements associated with it, because it is the closest thing to a genuine nootropic available to me. Being able to remain alert and clear of mind at any hour is of benefit to me (especially as I work overnight shifts; modafinil is actually an on-label medication for me).
I do not believe that I was making any great ground-shaking claims when I stated that people should want to ‘be better’. That is trivial. Instead, it is my hope that by introducing this term I might begin a dialogue towards a few ends:
1. The establishment of a ‘movement’ or ‘agenda’ of individual maximal optimization.
2. The establishment of a dialogue or body of lore facilitating the implementation of that goal.
3. The establishment of labels for the idyllic end-state (useful as a symbolic analogue more than anything else), for the agenda/movement itself, and for the practice of implementing said goal.
In other words, if somewhere down the line there were a group or groups of people who spoke of “acrohumanism” (or whatever term comes to supplant it), or described themselves as “acrohumanists”, and had a body of techniques in place towards that end, I would consider myself to have succeeded well beyond what I currently judge most probable. If I can, from this dialogue, pick up a few new ways of reaching that end myself, or at least establish a means of communicating my notions more clearly, I will have achieved my specific agenda in making this post.
You are using too many big words and your writing is too flowery. Read “Politics and the English Language” by George Orwell. Use smaller words and shorter sentences.
Also, TL;DR.