[The claim that exercise does little to improve one’s rationality] is not true
Exercise does help prevent cognitive decline into senescence, yes. I am unfamiliar with any work that associates higher cognitive function directly with improved rationality, however. Could you provide me links to such work?
Additional examples of a more cognitive nature could include techniques for improving recollection
This tells us nothing new: there was even a competition to get more information.
Taken as an aggregate, an individual with strong skill in biofeedback, a history of rigorous exercise and physical health, skill and knowledge in instrumental rationality, mnemotechnics, and metacognition, and through metacognition strong influence over his own emotional states (note: as I myself am male, I am in the habit of using masculine pronouns as gender-neutrals), is relatively far from my personal image of the baseline ‘average human’. And yet I am certain that there are other techniques or skillsets one might add to his ‘grab-bag’ of tools for improving upon his own overall capabilities as a person—none of which individually exceeds what is humanly possible, but which, taken as a whole, impressively approach those limits.
Summary: Someone who is above average is above average, but they are probably not “perfect” yet. So what? This isn’t a new idea to LW, there is at least one sequence directly focusing on similar ideas.
[The claim that exercise does little to improve one’s rationality] is not true
Exercise does help prevent cognitive decline into senescence, yes. I am unfamiliar with any work that associates higher cognitive function directly with improved rationality, however. Could you provide me links to such work?
As dlthomas said, indirect is still important. I don’t know of such work off the top of my head, but there is certainly anecdotal evidence for it, e.g. the readership of LW, people like Eliezer and Luke, etc.
Additional examples of a more cognitive nature could include techniques for improving recollection
This tells us nothing new: there was even a competition to get more information.
Would you mind providing your working definition of what an “applause light” is? I’m struck by the observation that originality is not synonymous with substance; and I also have to ask—does a claim have to be controversial to be a worthwhile contribution to a topic?
Summary: Someone who is above average is above average, but they are probably not “perfect” yet. So what?
So—a person who is otherwise average but has mastered the skills mentioned would no longer be average. The thing you summarized was itself a summary meant to demonstrate the usefulness of the topic.
This isn’t a new idea to LW, there is at least one sequence directly focusing on similar ideas.
I’m not aware that I ever said that what I was describing was strongly original.
I can understand how it would be easy to note the superficial resemblance to self-help strategies and claim that was all that was being done here; but there is a fundamental difference in orientation between strategies meant to mitigate flaws and strategies meant to introduce new capabilities or improve upon them. I can see that this is something I shall have to devote more time to clarifying.
As dlthomas said, indirect is still important.
It can be, yes.
I don’t know of such work off the top of my head, but there is certainly anecdotal evidence for it, e.g. the readership of LW, people like Eliezer and Luke, etc.
The reason why you don’t know of any such work, to be quite frank, is that it doesn’t exist. There is as yet no sufficiently quantified definition of rationality to allow for such testing to be conducted. That being said, would you be willing to allow for the validity of the following claim? “There are very intelligent people who are very irrational.”
Would you mind providing your working definition of what an “applause light” is?
Anything that doesn’t add anything to the discussion in its context, and just appears to be an effort to gain validity in a community.
I’m not aware that I ever said that what I was describing was strongly original.
As far as I can tell, the only original idea in the post is the word “acrohumanism” itself. The rest seems to be a conglomeration of several LW memes.
The reason why you don’t know of any such work, to be quite frank, is that it doesn’t exist. There is as yet no sufficiently quantified definition of rationality to allow for such testing to be conducted.
I’m willing to cede this point. But, you seem to have not applied this argument to your own ideas:
It occurs to me that developing mnemotechnical skill would be convergent with becoming a better rationalist
There are lots of people with very good/eidetic memory who aren’t good rationalists.
That being said, would you be willing to allow for the validity of the following claim? “There are very intelligent people who are very irrational.”
Yes, of course. That wasn’t under debate. Do you do the same for: “There are unintelligent people who are very irrational.”?
The reason why you don’t know of any such work, to be quite frank, is that it doesn’t exist. There is as yet no sufficiently quantified definition of rationality to allow for such testing to be conducted.
I’m willing to cede this point. But, you seem to have not applied this argument to your own ideas:
It occurs to me that developing mnemotechnical skill would be convergent with becoming a better rationalist
I’m isolating this to point this out more clearly: will you acknowledge that this is a sufficiently inaccurate quotation of me as to represent an example of the Cherry Picking fallacy?
But what you did say was along the lines of “We can get better memory, so we can remember more rationalist things like biases and heuristics! Gee, wouldn’t that be nice!” (It sounded like an applause light to me.)
But what you did say was along the lines of “We can get better memory, so we can remember more rationalist things like biases and heuristics! Gee, wouldn’t that be nice!” (It sounded like an applause light to me.)
I see. Where, exactly, are you getting the “Gee, wouldn’t that be nice!” element from?
You didn’t say why just remembering biases and heuristics would be all that useful.
I see. Well, clearly any future writings I submit anywhere will have to flesh that element out. I had assumed the positive utility of easier recall of those categories of items would be obvious.
I didn’t say it wasn’t obvious. I meant that it seemed like you tacked it onto the end to superficially validate mnemotechnics by using LW “buzz words”.
I meant that it seemed like you tacked it onto the end to superficially validate mnemotechnics by using LW “buzz words”.
I can definitively tell you that this impression is not valid. If I had meant to target the jargon/patois of the LessWrong community, it would have been obvious.
I included the ease of recollection of pre-developed heuristics and known cognitive biases as an example of how optimization approaches can be convergent. This was the sum of my intent.
Anything that doesn’t add anything to the discussion in its context, and just appears to be an effort to gain validity in a community.
I see. Then I remain unable to reconcile your provided definition with the items you assert to be applause lights.
May I also add that there are now two separate instances where you have ‘dropped’ relevant portions of a statement when characterizing things I have said? First, in the notion that improving recollection would qualify as an acrohuman agenda (which was followed by a discussion of a specific vehicle to achieve that end, a fact you ‘missed’), and now again with the notion that, to an acrohumanist, developing strong skill with ars memorativa would be convergent with becoming a better rationalist (you leave out the part where I described using that skill to increase the efficacy of recalling various biases and heuristics).
I am curious as to why you feel it is a good idea to so misrepresent my statements. Could you explain?
As far as I can tell, the only original idea in the post is the word “acrohumanism” itself. The rest seems to be a conglomeration of several LW memes.
Providing a label to a specific memeplex can in and of itself be a useful activity.
While many of these ideas are certainly present on LW, I do not know that they originated on LW.
I did not claim that what I was doing was creating a revolutionary concept or approach. I was providing a name to a thought I had already had, and opening the floor to discussion of said thought in order to possibly refine it.
Meditation is a controversial topic on LW. My suggestion of investigating it from a purely rationalistic perspective (that is, stripped of all religious associations) for its utility-enhancing functions is not something I have seen discussed in any top-level LW posting.
To my knowledge (and Google searching) my post is the first and only mention of the term “mnemotechnics” on LessWrong.
I would not have submitted this article to LW if I did not believe it would be agreed with, in general and in its specifics, on these items, so it should come as no surprise that much of it seems very familiar. It is my belief that the introduced term “acrohumanism” is a worthwhile contribution—it begins the formal systematization of the various odds and ends many LW’ers are familiar with, towards that nominal end-goal.
There are lots of people with very good/eidetic memory who aren’t good rationalists.
Certainly. Which is why I didn’t stop that assertion where you stopped it.
Yes, of course. That wasn’t under debate.
Are you sure this is true?
Do you do the same for: “There are unintelligent people who are very irrational.”?
I see. Then I remain unable to reconcile your provided definition with the items you assert to be applause lights.
Yes, neither was a very good example.
I am curious as to why you feel it is a good idea to so misrepresent my statements. Could you explain?
I sincerely apologise. I was not trying to misrepresent them. I think that what you say is not surprising (as in, I agree with most of it), so the bits I quoted were what I thought to be the relevant parts, and the rest was “obvious” (and I agree with it).
To my knowledge (and Google searching) my post is the first and only mention of the term “mnemotechnics” on LessWrong.
Thanks for introducing it to us. I believe that LW has considered memory-enhancement techniques even without using the term.
Providing a label to a specific memeplex can in and of itself be a useful activity.
Yes! Definitely! But you don’t need a 750 word post to define a new word. Especially when the post is on a website where the memeplex is one of its central tenets. For example (this is a rewrite/summary of your post):
I have been thinking about how best to explain the idea LW has that a person should try to be the best possible person they can be (i.e. optimise every aspect of themselves). To do this, I have been using the word “acrohumanism”* (cf. “transhumanism”, “posthuman”). I think that it is a useful way of encapsulating a person with (and committed to obtaining): high physical health, good memory, control and understanding of their emotions, etc.
*using “acro-” which approximately means “highest”
(And I would have some more detail on why inventing a new word was necessary (e.g. how it is different to “rationalist”, or “transhumanist”))
Are you sure this is true?
Yep, the existence of at least one intelligent person who is also irrational was not under debate.
I think that what you say is not surprising (as in, I agree with most of it), so the bits I quoted were what I thought to be the relevant parts, and the rest was “obvious” (and I agree with it).
I had a strong suspicion this was the case, which is why I have made several statements very similar to this. Perhaps I failed to sufficiently convey that what I was contributing wasn’t something that was meant to be “un-obvious” in any specific detail, but rather was meant to be a new manner of thinking about those items.
In other words, I was hoping to introduce a new higher-order goal and maybe kick off organized efforts towards establishing goals aligned with it.
For example (this is a rewrite/summary of your post):
… I can see how you would find that to be valid, but I do have a few thoughts here:
1. The notion of acrohumanism—though not by this name—predates my knowledge of the LessWrong community and most assuredly is not limited to it. Where LW shares its agenda(s), my prior thinking and LW are simply in agreement.
2. The rephrasing of “reaching the upper bounds of the human condition” to “try to be the best possible person you can be” robs significant meaning from the statement, related to directional focus. Even with the added caveat of “i.e. optimise every aspect of themselves”, I’m not sure the idea retains my intended message. Certainly, a Google search for “be the best person you can be” returns seven million hits, whereas “reach the upper bounds of the human condition” returns zero. This speaks to the dilution of the message. While they appear superficially related, the “best person” formulation implies a minimization of flaws, whereas the “upper bounds” formulation implies the maximization of utilities.
2.a. Corollary: this touches on why I am leery of defining what the optimal human condition even is: it becomes a moral dilemma.
2.b. There is a secondary implication that “reaching the upper bounds of the human condition” is not the normal state for an arbitrary human being. So, as trite as it seems, I’m discussing the idea of “being more human than human”.
Yes! Definitely! But you don’t need a 750 word post to define a new word.
I’m sure. But I also wanted to do more than merely provide the definition; I wanted to give practical examples and encourage discussion of other practical skills or “tools”, which is why I discussed examples of such skills and their potential benefits.
Thanks for introducing it to us. I believe that LW has considered memory-enhancement techniques even without using the term.
Definitely so. But it’s mostly instances of re-inventing the wheel, with a rather “grab-bag”, haphazard approach and no ideal model of the end-goal such efforts are tending towards, aside from (as far as I can discern) “becoming a better rationalist”.
Given the current state of the dialogue, I think it’s time I start looking towards a potential future re-submission of this notion for consideration. I’ve gotten some good material from you to digest towards that end, even if I did fail at sparking the dialogue I had hoped for.
The rephrasing of “reaching the upper bounds of the human condition” to “try to be the best possible person you can be” robs significant meaning from the statement, related to directional focus
Make that substitution then (as far as I can tell, what you mean by your phrase is identical to what I meant by mine).
I’m sure. But I also wanted to do more than merely provide the definition; I wanted to give practical examples and encourage discussion of other practical skills or “tools”, which is why I discussed examples of such skills and their potential benefits.
I think this post could do with some editing to make this clear, and “encouraging discussion” of several different specific skills and tools with a single long, monolithic post is unlikely to work very well. A smaller discussion post along the lines of, for example, “I know we know that SRS seems to work, but does anyone know anything about these other mnemotechnic techniques?” might work better.
(I probably haven’t conveyed this very well, but I don’t dislike most of your post (I quite like the word “acrohumanism”, for example))
Make that substitution then (as far as I can tell, what you mean by your phrase is identical to what I meant by mine).
Yeah, that’s going to be something of a problem. I’m not sure how to properly emphasize the focus on not only being the best you can be, but on extending the range of “bestness” available to you to the upper limits of what is humanly possible.
If it helps any, a different perspective would be to consider this a “pure software” approach to transhumanism.
(I probably haven’t conveyed this very well, but I don’t dislike most of your post (I quite like the word “acrohumanism”, for example))
I can hardly fault someone else for poorly conveying themselves when that’s the majority of what got me in ‘trouble’ with this post in the first place. Thank you for the affirmation of the term, at least! :)
It is my understanding that the term “applause lights” refers to statements which are meant to elicit approval but otherwise contain little substance. Would you care to enlighten me as to where these ‘applause lights’ are in what I wrote?
Exercise does help prevent cognitive decline into senescence, yes. I am unfamiliar with any work that associates higher cognitive function directly with improved rationality, however. Could you provide me links to such work?
Indirect can still be important.
Some examples of applause lights:
This tells us nothing new: there was even a competition to get more information.
Summary: Someone who is above average is above average, but they are probably not “perfect” yet. So what? This isn’t a new idea to LW, there is at least one sequence directly focusing on similar ideas.
As dlthomas said, indirect is still important. I don’t know of such work off the top of my head, but there is certainly anecdotal evidence for it, e.g. the readership of LW, people like Eliezer and Luke, etc.
Would you mind providing your working definition of what an “applause light” is? I’m struck by the observation that originality is not synonymous with substance; and I also have to ask—does a claim have to be controversial to be a worthwhile contribution to a topic?
So—a person who is otherwise average but has mastered the skills mentioned would no longer be average. The thing you summarized was itself a summary meant to demonstrate the usefulness of the topic.
I’m not aware that I ever said that what I was describing was strongly original.
I can understand how it would be easy to note the superficial resemblance to self-help strategies and claim that was all that was being done here; but there is a fundamental difference in orientation between strategies meant to mitigate flaws and strategies meant to introduce new capabilities or improve upon them. I can see that this is something I shall have to devote more time to clarifying.
It can be, yes.
The reason why you don’t know of any such work, to be quite frank, is that it doesn’t exist. There is as yet no sufficiently quantified definition of rationality to allow for such testing to be conducted. That being said, would you be willing to allow for the validity of the following claim? “There are very intelligent people who are very irrational.”
Anything that doesn’t add anything to the discussion in its context, and just appears to be an effort to gain validity in a community.
As far as I can tell, the only original idea in the post is the word “acrohumanism” itself. The rest seems to be a conglomeration of several LW memes.
I’m willing to cede this point. But, you seem to have not applied this argument to your own ideas:
There are lots of people with very good/eidetic memory who aren’t good rationalists.
Yes, of course. That wasn’t under debate. Do you do the same for: “There are unintelligent people who are very irrational.”?
I’m isolating this to point this out more clearly: will you acknowledge that this is a sufficiently inaccurate quotation of me as to represent an example of the Cherry Picking fallacy?
Possibly, but it wasn’t intentional. Sorry.
But what you did say was along the lines of “We can get better memory, so we can remember more rationalist things like biases and heuristics! Gee, wouldn’t that be nice!” (It sounded like an applause light to me.)
I see. Where, exactly, are you getting the “Gee, wouldn’t that be nice!” element from?
You didn’t say why just remembering biases and heuristics would be all that useful.
EDIT: And this is where hyperlinks and the massive number of other articles on LW nicely work together.
I see. Well, clearly any future writings I submit anywhere will have to flesh that element out. I had assumed the positive utility of easier recall of those categories of items would be obvious.
I didn’t say it wasn’t obvious. I meant that it seemed like you tacked it onto the end to superficially validate mnemotechnics by using LW “buzz words”.
I can definitively tell you that this impression is not valid. If I had meant to target the jargon/patois of the LessWrong community, it would have been obvious.
I included the ease of recollection of pre-developed heuristics and known cognitive biases as an example of how optimization approaches can be convergent. This was the sum of my intent.
I see. Then I remain unable to reconcile your provided definition with the items you assert to be applause lights.
May I also add that there are now two separate instances where you have ‘dropped’ relevant portions of a statement when characterizing things I have said? First, in the notion that improving recollection would qualify as an acrohuman agenda (which was followed by a discussion of a specific vehicle to achieve that end, a fact you ‘missed’), and now again with the notion that, to an acrohumanist, developing strong skill with ars memorativa would be convergent with becoming a better rationalist (you leave out the part where I described using that skill to increase the efficacy of recalling various biases and heuristics).
I am curious as to why you feel it is a good idea to so misrepresent my statements. Could you explain?
Providing a label to a specific memeplex can in and of itself be a useful activity.
While many of these ideas are certainly present on LW, I do not know that they originated on LW.
I did not claim that what I was doing was creating a revolutionary concept or approach. I was providing a name to a thought I had already had, and opening the floor to discussion of said thought in order to possibly refine it.
Meditation is a controversial topic on LW. My suggestion of investigating it from a purely rationalistic perspective (that is, stripped of all religious associations) for its utility-enhancing functions is not something I have seen discussed in any top-level LW posting.
To my knowledge (and Google searching) my post is the first and only mention of the term “mnemotechnics” on LessWrong.
I would not have submitted this article to LW if I did not believe it would be agreed with, in general and in its specifics, on these items, so it should come as no surprise that much of it seems very familiar. It is my belief that the introduced term “acrohumanism” is a worthwhile contribution—it begins the formal systematization of the various odds and ends many LW’ers are familiar with, towards that nominal end-goal.
Certainly. Which is why I didn’t stop that assertion where you stopped it.
Are you sure this is true?
Absolutely.
Yes, neither was a very good example.
I sincerely apologise. I was not trying to misrepresent them. I think that what you say is not surprising (as in, I agree with most of it), so the bits I quoted were what I thought to be the relevant parts, and the rest was “obvious” (and I agree with it).
Thanks for introducing it to us. I believe that LW has considered memory-enhancement techniques even without using the term.
Yes! Definitely! But you don’t need a 750 word post to define a new word. Especially when the post is on a website where the memeplex is one of its central tenets. For example (this is a rewrite/summary of your post):
(And I would have some more detail on why inventing a new word was necessary (e.g. how it is different to “rationalist”, or “transhumanist”))
Yep, the existence of at least one intelligent person who is also irrational was not under debate.
I had a strong suspicion this was the case, which is why I have made several statements very similar to this. Perhaps I failed to sufficiently convey that what I was contributing wasn’t something that was meant to be “un-obvious” in any specific detail, but rather was meant to be a new manner of thinking about those items.
In other words, I was hoping to introduce a new higher-order goal and maybe kick off organized efforts towards establishing goals aligned with it.
… I can see how you would find that to be valid, but I do have a few thoughts here:
1. The notion of acrohumanism—though not by this name—predates my knowledge of the LessWrong community and most assuredly is not limited to it. Where LW shares its agenda(s), my prior thinking and LW are simply in agreement.
2. The rephrasing of “reaching the upper bounds of the human condition” to “try to be the best possible person you can be” robs significant meaning from the statement, related to directional focus. Even with the added caveat of “i.e. optimise every aspect of themselves”, I’m not sure the idea retains my intended message. Certainly, a Google search for “be the best person you can be” returns seven million hits, whereas “reach the upper bounds of the human condition” returns zero. This speaks to the dilution of the message. While they appear superficially related, the “best person” formulation implies a minimization of flaws, whereas the “upper bounds” formulation implies the maximization of utilities.
2.a. Corollary: this touches on why I am leery of defining what the optimal human condition even is: it becomes a moral dilemma.
2.b. There is a secondary implication that “reaching the upper bounds of the human condition” is not the normal state for an arbitrary human being. So, as trite as it seems, I’m discussing the idea of “being more human than human”.
I’m sure. But I also wanted to do more than merely provide the definition; I wanted to give practical examples and encourage discussion of other practical skills or “tools”, which is why I discussed examples of such skills and their potential benefits.
Definitely so. But it’s mostly instances of re-inventing the wheel, with a rather “grab-bag”, haphazard approach and no ideal model of the end-goal such efforts are tending towards, aside from (as far as I can discern) “becoming a better rationalist”.
Given the current state of the dialogue, I think it’s time I start looking towards a potential future re-submission of this notion for consideration. I’ve gotten some good material from you to digest towards that end, even if I did fail at sparking the dialogue I had hoped for.
Make that substitution then (as far as I can tell, what you mean by your phrase is identical to what I meant by mine).
I think this post could do with some editing to make this clear, and “encouraging discussion” of several different specific skills and tools with a single long, monolithic post is unlikely to work very well. A smaller discussion post along the lines of, for example, “I know we know that SRS seems to work, but does anyone know anything about these other mnemotechnic techniques?” might work better.
(I probably haven’t conveyed this very well, but I don’t dislike most of your post (I quite like the word “acrohumanism”, for example))
Yeah, that’s going to be something of a problem. I’m not sure how to properly emphasize the focus on not only being the best you can be, but on extending the range of “bestness” available to you to the upper limits of what is humanly possible.
If it helps any, a different perspective would be to consider this a “pure software” approach to transhumanism.
I can hardly fault someone else for poorly conveying themselves when that’s the majority of what got me in ‘trouble’ with this post in the first place. Thank you for the affirmation of the term, at least! :)