The Limits of My Rationality
As requested, here is an introductory abstract.
The search for bias in the linguistic representations of our cognitive processes serves several purposes in this community. By pruning irrational thoughts, we can potentially affect each other in complex ways. Leaning heavily on cognitivist pedagogy, this essay represents my subjective experience trying to reconcile a perceived conflict between the rhetorical goals of the community and the absence of a generative, organic conceptualization of rationality.
The Story
Though I’ve only been here a short time, I find myself fascinated by this discourse community. To discover a group of individuals bound together under the common goal of applied rationality has been an experience that has enriched my life significantly. So please understand, I do not mean to insult by what I am about to say, merely to encourage a somewhat more constructive approach to what I understand as the goal of this community: to apply collectively reinforced notions of rational thought to all areas of life.
As I followed the links and read the articles on the homepage, I found myself somewhat disturbed by the juxtaposition of these highly specific definitions of biases to the narrative structures of parables providing examples in which a bias results in an incorrect conclusion. At first, I thought that perhaps my emotional reaction stemmed from rejecting the unfamiliar; naturally, I decided to learn more about the situation.
As I read on, my interests drifted from the rhetorical structure of each article (if anyone is interested I might pursue an analysis of rhetoric further though I’m not sure I see a pressing need for this), towards the mystery of how others in the community apply the lessons contained therein. My belief was that the parables would cause most readers to form a negative association of the bias with an undesirable outcome.
Even a quick skim of the discussions taking place on this site will reveal energetic debate on a variety of topics of potential importance, peppered heavily with accusations of bias. At this point, I noticed the comments that seem to get voted up are ones that are thoughtfully composed, well informed, soundly conceptualized and appropriately referential. Generally, this is true of the articles as well, and so it should be in productive discourse communities. Though I thought it prudent not to read every conversation in absolute detail, I also noticed that the most heavily participated-in lines of reasoning were far more rhetorically complex than the parables’ portrayal of bias alone could explain. Sure, the establishment of bias still seemed to represent the most commonly used rhetorical device on the forums …
At this point, I had been following a very interesting discussion on this site about politics. I typically have little or no interest in political theory, but “NRx” vs. “Prog” Assumptions: Locating the Sources of Disagreement Between Neoreactionaries and Progressives (Part 1) seemed so out of place in a community whose political affiliations might best be summarized by the phrase “politics is the mind killer” that I couldn’t help but investigate. More specifically, I was trying to figure out why it had been posted here at all (I didn’t take issue with either the scholarship or intent of the article, but the latter wasn’t obvious to me, perhaps because I was completely unfamiliar with the coinage “neoreactionary”).
On my third read, I made a connection to an essay about the socio-historical foundations of rhetoric. In structure, the essay progressed through a wide variety of specific observations on both theory and practice of rhetoric in classical Europe, culminating in a well argued but very unwieldy thesis; at some point in the middle of the essay, I recall a paragraph that begins with the assertion that every statement has political dimensions. I conveyed this idea as eloquently as I could muster, and received a fair bit of karma for it. And to think that it all began with a vague uncomfortable feeling and a desire to understand!
The Lesson
So you are probably wondering what any of this has to do with rationality, cognition, or the promise of some deeply insightful transformative advice mentioned in the first paragraph. Very good.
Cognition, a prerequisite for rationality, is a complex process; cognition can be described as the process by which ideas form, interact and evolve. Notice that this definition alone cannot explain how concepts like rationality form, why ideas form or how they should interact to produce intelligence. That specific shortcoming has long crippled cognitivist pedagogies in many disciplines—no matter which factors you believe determine intelligence, it is undeniably true that the process by which it occurs organically is not well-understood.
More intricate models of cognition traditionally vary according to the sets of behavior they seek to explain; in general, this forum seems to concern itself with the wider sets of human behavior, with a strange affinity for statistical analysis. It also seems as if most of the people here associate agency with intelligence, though this should be regarded as unsubstantiated anecdote; I have little interest in what people believe, but those beliefs can have interesting consequences. In general, good models of cognition that yield a sense of agency have to be able to explain how a mushy organic collection of cells might become capable of generating a sense of identity. For this reason, our discussion of cognition will treat intelligence as a confluence of passive processes that lead to an approximation of agency.
Who are we? What is intelligence? To answer these or any natural language questions we first search for stored-solutions to whatever we perceive as the problem, even as we generate our conception of the question as a set of abstract problems from interactions between memories. In the absence of recognizing a pattern that triggers a stored solution, a new solution is generated by processes of association and abstraction. This process may be central to the generation of every rational and irrational thought a human will ever have. I would argue that the phenomenon of agency approximates an answer to the question: “who am I?” and that any discussion of consciousness should at least acknowledge how critical natural language use is to universal agreement on any matter. I will gladly discuss this matter further and in greater detail if asked.
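The stored-solution-then-generate loop described above can be caricatured in a few lines of code. This is purely an illustrative sketch, not a cognitive model; the memory contents and the similarity measure are hypothetical stand-ins for association.

```python
# Illustrative sketch (NOT a cognitive model): answer a "question" by first
# checking stored solutions, and only generating a new one -- by association
# with the nearest known problem -- on a miss. The new solution then becomes
# a stored memory itself.
from difflib import SequenceMatcher

stored_solutions = {
    "what is a hot stove": "something that burns; avoid touching it",
}

def similarity(a: str, b: str) -> float:
    # crude string similarity standing in for associative pattern matching
    return SequenceMatcher(None, a, b).ratio()

def answer(question: str) -> str:
    key = question.lower().strip("?")
    if key in stored_solutions:            # pattern triggers a stored solution
        return stored_solutions[key]
    # no stored pattern: "generate" by association with the closest memory
    nearest = max(stored_solutions, key=lambda k: similarity(k, key))
    generated = f"(by association with '{nearest}') {stored_solutions[nearest]}"
    stored_solutions[key] = generated      # the new solution is itself remembered
    return generated

print(answer("what is a hot stove"))   # stored solution, returned directly
print(answer("what is a hot pan"))     # generated by association, then stored
```

The point of the sketch is only that recall and generation are two branches of one process, with generation feeding back into memory.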
At this point, I feel compelled to mention that my initial motivation for pursuing this line of reasoning stems from the realization that this community discusses rationality in a way that differs somewhat from my past encounters with the word.
Out there, it is commonly believed that rationality develops (in hindsight) to explain the subjective experience of cognition; here we assert a fundamental difference between rationality and this other concept called rationalization. I do not see the utility of this distinction, nor have I found a satisfying explanation of how this distinction operates within accepted models for human learning in such a way that does not assume an a priori method of sorting the values which determine what is considered “rational”. Thus we find there is a general dearth of generative models of rational cognition alongside a plethora of techniques for spotting irrational or biased methods of thinking.
I see a lot of discussion on the forums very concerned with objective predictions of the future wherein it seems as if rationality (often of a highly probabilistic nature) is, in many cases, expected to bridge the gap between the worlds we can imagine to be possible and our many somewhat subjective realities. And the force keeping these discussions from splintering off into unproductive pissing about is a constant search for bias.
I know I’m not going to be the first among us to suggest that the search for bias is not truly synonymous with rationality, but I would like to clarify before concluding. Searching for bias in cognitive processes can be a very productive way to spend one’s waking hours, and it is a critical element to structuring the subjective world of cognition in such a way that allows abstraction to yield the kind of useful rules that comprise rationality. But it is not, at its core, a generative process.
Let us consider the cognitive process of association (when beliefs, memories, stimuli or concepts become connected to form more complex structures). Without that period of extremely associative and biased cognition experienced during early childhood, we might never learn to attribute the perceived cause of a burn to a hot stove. Without concepts like better and worse to shape our young minds, I imagine many of us would simply lack the attention span to learn about ethics. And what about all the biases that make parables an effective way of conveying information? After all, the strength of a rhetorical argument is in its appeal to the interpretive biases of its intended audience and not the relative consistency of the conceptual foundations of that argument.
We need to shift discussions involving bias towards models of cognition more complex than portraying it as simply an obstacle to rationality. In my conception of reality, recognizing the existence of bias seems to play a critical role in the development of more complex methods of abstraction; indeed, biases are an intrinsic side effect of the generative grouping of observations that is the core of Bayesian reasoning.
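The claim that bias is intrinsic to Bayesian generalization can be made concrete with the simplest possible learner. A minimal sketch, assuming a beta-binomial model (the conventional textbook choice, not anything specific to LW): the prior counts are a built-in "bias", yet without them the learner cannot say anything at all before the first observation.

```python
# Sketch: a beta-binomial learner estimating a coin's heads-probability.
# The prior pseudo-counts are a built-in "bias" -- and they are exactly
# what lets the learner generalize sensibly from sparse data.
def posterior_mean(heads: int, tails: int,
                   prior_heads: float = 1.0, prior_tails: float = 1.0) -> float:
    """Posterior mean of P(heads) under a Beta(prior_heads, prior_tails) prior."""
    return (heads + prior_heads) / (heads + tails + prior_heads + prior_tails)

# With no data, the estimate IS the prior (pure bias):
print(posterior_mean(0, 0))   # 0.5
# After two heads in a row, the biased learner hedges rather than conclude p=1:
print(posterior_mean(2, 0))   # 0.75
```

An unbiased version (prior counts of zero) would divide by zero on the first line of reasoning; the bias is load-bearing, which is the essay's point in miniature.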
In short, biases are not generative processes. Discussions of bias are not necessarily useful, rational or intelligent. A deeper understanding of the nature of intelligence requires conceptualizations that embrace the organic truths at the core of sentience; we must be able to describe our concepts of intelligence, our “rationality”, such that it can emerge organically as the generative processes at the core of cognition.
The Idea
I’d be interested to hear some thoughts about how we might grow to recognize our own biases as necessary to the formative stages of abstraction, alongside learning to collectively search for and eliminate biases from our decision making processes. The human mind is limited, and while most discussions in natural language never come close to pressing us to those limits, our limitations can still be relevant to those discussions as well as to discussions of artificial intelligences. The way I see things, a bias-free machine possessing a model of our own cognition would either have to have stored solutions for every situation it could encounter or methods of generating stored solutions for all future perceived problems (both of which sound like descriptions of oracles to me, though the latter seems more viable from a programmer’s perspective).
A machine capable of making the kinds of decisions considered “easy” for humans might need biases at some point during its journey to the complex and self-consistent methods of decision making associated with rationality. This is a rhetorically complex community, but at the risk of my reach exceeding my grasp, I would be interested in seeing an examination of the Affect Heuristic in human decision making as an allegory for the historic utility of fuzzy values in chess AI.
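To make the chess analogy concrete: classic chess programs rank positions with a crude weighted material count, a deliberately "biased" heuristic that stands in for judgment the program cannot compute exactly. A minimal sketch (the piece letters and the encoding of a position as a string of pieces are simplifications for illustration; the weights are the conventional ones):

```python
# Sketch of "fuzzy values" in classic chess evaluation: score a position by
# weighted material count. The weights are a heuristic bias, not a proof of
# anything -- the program's analogue of a gut feeling about the position.
PIECE_VALUES = {"P": 1, "N": 3, "B": 3, "R": 5, "Q": 9}  # conventional weights

def material_score(white_pieces: str, black_pieces: str) -> int:
    """Positive favors White, negative favors Black; kings are omitted."""
    score = sum(PIECE_VALUES.get(p, 0) for p in white_pieces)
    score -= sum(PIECE_VALUES.get(p, 0) for p in black_pieces)
    return score

# White has a queen where Black has a rook and knight: the heuristic calls
# it +1 for White -- a "good feeling", not a guarantee of a win.
print(material_score("QPPP", "RNPPP"))  # 1
```

Like the Affect Heuristic, the evaluation collapses an intractable question ("who wins from here?") into a fast, biased scalar that is usually good enough to steer search.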
Thank you for your time, and I look forward to what I can only hope will be challenging and thoughtful responses.
Thanks so much. The formatting is now officially fixed thanks to feedback from the community. I appreciate what you did here nonetheless.
On first glance your post looked like it was written by someone who lacks the ability to write clearly. At second glance it looks like it’s simply the product of deconstructivist thinking and therefore not easily accessible.
I’m not sure that the way a few terms get used here is clear to you.
Economists started to speak about rational agents to mean an agent that optimizes its actions according to a utility function. In those models it’s not important whether or not the agent has reasons for its decisions that it can articulate. On LW we use a notion of rationality that’s derived from that idea. Rationality is using a systematized process that maximizes utility.
In retrospect I’m not sure whether that’s a good way to use the word, but it’s the way it’s evolved in this community. Here it’s not about engaging in an action that can be rationalized.
Parables do happen to be a nice tool, but one that’s not easily understood. Cognitive science suggests that our naive intuitions about what parables do are not good.
Eliezer recently wrote on Facebook:
The concept that deep parables do exist and act with time lags of a year does get acknowledged by Eliezer. On the other hand, we lack any good theory. If you read HPMOR with a critical eye you see that it’s full of parables.
The problem with parables is that it’s hard to talk about them directly.
Also, I’d like to steer away from a debate on the question of whether “deep parables” exist. Let’s ask directly, “are the parables here on LW deep?” Are they effective?
LW is quite diverse. There are a lot of different people with different views.
That alone is not an obstacle necessarily. We must establish what these views have in common and how they differ in structure and content.
Why is it difficult to talk about parables directly? We have the word and the abstract concept. Seems like a good start.
I feel like you’ve pointed out what is at least a genuine inconsistency in purpose. The point of this article was not meant to subvert any discussion of economic rationality but rather to focus discussions of intelligence on more universally acceptable models of cognition.
Because most people think that when they read an article they either agree or disagree and that’s pretty clear the moment they read the article.
The idea that the article contains a parable that creates cognitive change with a time lag of a day, week, month or year isn’t in the common understanding of cognition. It’s not a phenomenon that’s well studied.
That means there are a lot of claims on the subject for which people would want proof but no scientific studies to back up those claims.
I just read Dune and it contains the description of a character:
Speaking in that way where phrases generally have more than one meaning is not easy when you try to make complex arguments that are defensible.
I think I must have explained myself poorly … you don’t have to take my subjective experience or my observations as proof of anything on the subject of parables or on cognition. I agree that double entendre can make complex arguments less defensible, but would caution that it may never be completely eliminated from natural language because of the way discourse communities are believed to function.
Specifically, what subject contains many claims for which there is little proof? Are we talking now about literary analysis?
If you also mean to refer to the many claims about the mechanisms of cognition that lack a well-founded neuro-biological foundation, there are several source materials informing my opinion on the subject. I understand that the lack of experimentally verifiable results in the field of cognition seems troubling at first glance. For the purposes of streamlining the essay, I assumed a relationship between cognition and intelligence by which intelligence can only be achieved through cognition. Whether this inherently cements the concept of intelligence into the unverifiable annals of natural language, I gladly leave up to each reader to decide. Based on my sense of how the concepts are used here on LW, intelligence and cognition are not completely well-defined in such a way that they could be implemented in strictly rational terms.
However, your thoughts on this are welcome.
I found the post extremely difficult to read and to understand the point. From the last section I concluded that you are saying that biases are necessary for some reason? I am still not sure what you are giving as a reason for this.
I give several reasons in the text as to why biases are necessary. Essentially, all generative cognitive processes are “biased” if we accept the LW concept of bias as an absolute. Here is an illustrated version—it seems you aren’t the only one uncertain as to how I warrant the claim that bias is necessary. I should have put more argument in the conclusion, and, if this is the consensus, I will edit in the following to amend the essay.
To clarify, there was a time in your life, before you were probably even aware of cognition, during which the process of cognition emerged organically. Sorting through thoughts and memories, optimizing according to variables such as time and calorie consumption, deferring to future selves … these are all techniques that depend on a preexisting set of conditions from which cognition has ALREADY emerged in whoever is performing these complex tasks. While searching for bias is helpful in eliminating irrationality from cognitive processes, it does not generate the conditions from which cognition emerges nor explain the generative processes at the core of cognition.
I am critical of the LW parables because, from a standpoint of rhetorical analysis, parables get people to associate actions with outcomes. The parables LW uses vary in some ways, but are united in that the search for bias is associated with traditionally positive outcomes, whereas the absence of a search for bias becomes associated with comparatively less desirable outcomes. While I expect some learn deeper truths, I find that the most consistent form of analysis being employed on the forums is clearly the ongoing search for bias.
There are, additionally, LW writings about how rationality is essentially generative and creative and should not be limited to bias searches. This essay was my first shot at an attempt to explain the existence of bias without relying on some evolutionary set of imperatives. If you have any questions feel free to ask; I hope this helps clarify at least what I should have written.
Post needs an executive summary / abstract
I have added a short introductory abstract to clarify my intended purpose in writing. Hopefully it helps.
I’m impressed by how mindfully you reflected on the content of LW. Your high-level view of LW is interesting and provides an interesting data point on how newbies and outsiders perceive LW. The problem (besides the formatting) I see is that your post is quite long and I had trouble seeing the thread or objective of your post. I think a few readers will have trouble connecting to this. It is not exactly on topic. Also, to me it appears a bit like stream-of-consciousness writing, which I find difficult—but interesting.
I tried really hard to imitate and blend the structure of argumentation employed by the most successful articles here. I found that in spite of the high-minded academic style of writing, structures tended to be overwhelmingly narratives split into three segments that vary greatly in content and structure (the first always establishes tone and subject, the second contains the bulk of the argumentation and the third is an often incomplete analysis of impacts the argument may have on some hypothetical future state). I can think of a lot of different ways of organizing my observations on the subject of cognitive bias and though I decided on this structure, I was concerned that, since it was decidedly non-Hegelian, it would come off as poorly organized.
But I feel good about your lumping it in with data on how newcomers perceive LW because that was one of my goals.
Interesting. I understand how you arrived at that. The sequences and esp. EY’s posts are often written in that style. But you don’t need to write that way (actually I don’t think you succeeded at that). My first tries were also somewhat trying to fit in but overdoing it—and somewhat failing too. Good luck. Trying and failing is better than not trying and thus not learning.
http://lesswrong.com/lw/dg7/what_have_you_recently_tried_and_failed_at/
Thank you for your feedback. I am not sure what I think, but the general response so far seems to support the notion that I have tried to adapt the structure to a rhetorical position poorly suited for my writing style. I’m hearing a lot of “stream of consciousness” … the first section specifically might require more argumentation regarding effective rhetorical structures. I attack parables without offering a replacement, which is at best rude but potentially deconstructive past the point of utility. I’m currently working on an introduction that might help generate more discussion based on content.
Agreed that your post is impressively mindful. In terms of writing style, maybe try writing more like Steven Pinker or Paul Graham. (If you haven’t read Paul Graham yet, the low-hanging fruit here is to go to his essays page and read a few essays that appeal to you, then copy that style as closely as possible. Here are some favorites of mine. Paul Graham is great at both writing and thinking so you’ll do triple duty learning about writing, thinking, and also whatever idea he’s trying to communicate.)
I’ve read both. Paul Graham’s style is wonderful … so long as he keeps himself from reducing all of history to a triangular diagram. I prefer Stanley Fish for clarity on linguistics.
You might be interested in Thinking at the Edge—it’s the only system I know of for getting cognitive value out of those vague feelings.
An interesting response. I did not mean to imply that the feeling had implicit value, but rather that my discomfort interacted with a set of preexisting conditions in me and triggered many associated thoughts to arise.
I’m not familiar with this specific philosophy; are you suggesting I might benefit from this or would be interested in it from an academic perspective? Both perhaps?
Do you have any thoughts on the rest of the three page article? I’m beginning to feel like I brought an elephant into the room that no one wants to comment on.
OK, so this is marginally better. I found Notepad and copied and pasted after turning on word wrap. I will continue to tweak until the pagination is not obnoxiously bad.
Turn OFF word wrap. You should also not be concerned with pagination at all. Separate paragraphs by an empty line.
okay I did that and am about to paste.
You are officially my hero Lumifer. Thank you so much.
The formatting of your article is broken. It would be much easier if you could structure it into normal paragraphs.
I know. I’m troubleshooting now :-)
It’s fixed now.
HURRAY! Thank you everyone who helped me format this! As far as reciprocal altruism should dictate, Lumifer, I owe you.
It’s debatable whether “reciprocal altruism” isn’t a contradiction in terms, and whether “quid pro quo” wouldn’t be the more accurate descriptor for what is in essence “you scratched my back, so I’ll scratch yours”. Then again, I may just be griping because you made me look up Hegelianism in your other comment.
You are correct. Reciprocal altruism is an ideal not necessarily implementable and I should have written, “As far as the spirit of reciprocal altruism should dictate”. :-)
I seem to be in the process of crashing my computer. I hope to have resolved this issue in approximately 10 minutes.
I will remove this as soon as I have been directed to the appropriate channels, I promise it’s intelligent and well written … I just can’t seem to narrow down where the problem is and what I can do to fix it.
I don’t know how to fix this article … every time I copy and paste I end up with the format all messed up and the above is the resulting mess. I’m using a freeware program called Kingsoft Writer, and would really appreciate any instruction on what I might do to get this into a readable format. Help me please.
You’re outputting a non-breaking space (a “hard space”) instead of the normal space character, to start with. You’re also setting margins within your text.
Why don’t you export your document to text, open and save it in a text editor (e.g. Notepad) and then post it?
I will try this. Thank you for being constructive in spite of the mess.
Or paste it as plaintext.
I will try this after I try the above suggestion. Thank you also.
I recommend OpenOffice/LibreOffice, or Google Docs.
Good to know. I’ve used OpenOffice in the past and am regretting not using it on this computer. At least I’m learning :-)
This is breaking the formatting for me—you probably have an extra end tag in your html.
Maybe just copy and paste into the gui editor?
GUI … graphical user interface … as in the one this website uses.
This is what happens as a result of my copying and pasting from the document. I have tried several different file formats … this one was .txt, which is fairly universally readable … I ran into the problem with the default file format in Kingsoft Writer as well.
What you want right now is a plain-ASCII text file. No Unicode, no HTML, no nothing.
Thank you. I will try this and see if it helps with the paragraph double spacing problem.
Wow. My encoding options are limited to two Unicode variants, ANSI, and UTF-8. Will any of those work for these purposes?
For future reference, ANSI is not Unicode. You can google up the gory details if interested, but basically ASCII is a seven-bit character set with 128 symbols. The so-called ANSI (it’s a misnomer) extends ASCII to 8 bits, adding another 128 symbols, but without specifying what those symbols should be. On most Anglophone computers they will correspond to ISO 8859-1 (or the very similar Windows codepage 1252), but in other parts of the world they will correspond to whatever the local codepage is, and it can be anything it wants to be.
UTF-8, on the other hand, is proper Unicode. So it seems the closest you can get to plain ASCII is to use ANSI.
So, if I understand the implication, anything encoded in ANSI is not universally machine readable (there are several unfamiliar terms for me here “anglophone” “ISO 8859-1″ and “Windows codepage 1252”)? I probably won’t look up all the details, because I rarely need to know how many bits a method of encryption involves (I’m probably betraying my naivety here) irregardless of the character set used, but I appreciate how solid of a handle you seem to have on the subject.
English-speaking.
An 8-bit character set (i.e., representing 256 different characters) suitable for many Western European languages.
Something very much like ISO-8859-1 but slightly different, used on computers running Microsoft Windows. It’s slightly different because for some reason (there are more and less cynical explanations) Microsoft seem unable to use anything standardized without modifying it a little.
Microsoft-Windows-ese for “an 8-bit character set whose first half is the same as ASCII”. Specifying the second half is the job of a “code page”, such as the “code page 1252” mentioned above.
Not machine-readable without knowledge of which “code page” (see above) it uses. If you know that, or can guess it, you’re OK.
Not actually encryption, despite the term “encoding”. A character encoding is a way of representing characters as smallish numbers suitable for storing in a computer. Strictly speaking, every time I said “character set” above I should have said “encoding”. Every time you have any text on a computer, it’s represented internally via some encoding. Common encodings include ASCII (7 bits, so 128 characters, though some of those 128 slots are reserved for things that aren’t really characters), ISO-8859-1 (8 bits, suitable for much Western European text, though nowadays the slightly different ISO-8859-15 is preferred because it includes the Euro currency symbol), and UTF-8 (variable length, from 8 to 32 bits per character, representing the whole, very large, Unicode character repertoire). For most purposes UTF-8 is a good bet.
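To make the distinction above concrete, here is a small Python sketch (not part of the original thread; the sample characters are my own illustration) showing how the same characters become different numbers of bytes under different encodings:

```python
# A string containing a plain ASCII letter, an accented Latin-1
# character, and the Euro sign (which ISO-8859-1 lacks).
text = "e\u00e9\u20ac"  # "eé€"

# ASCII (7-bit) can represent only the plain letter.
ascii_bytes = "e".encode("ascii")            # b'e' -> 1 byte

# ISO-8859-1 (Latin-1, 8-bit) covers "é" but has no Euro sign.
latin1_bytes = "e\u00e9".encode("iso-8859-1")  # b'e\xe9' -> 2 bytes

# UTF-8 covers everything, using 1 to 3 bytes per character here:
# 1 byte for "e", 2 for "é", 3 for "€".
utf8_bytes = text.encode("utf-8")
print(len(utf8_bytes))  # -> 6
```

Trying `text.encode("ascii")` instead would raise a `UnicodeEncodeError`, which is exactly the sense in which ASCII’s repertoire is “rather limited”.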
Regardless. (Sorry.)
[EDITED to answer the question about “not universally machine readable”.]
It has nothing to do with my article but you’ve made me very happy by explaining this to me. I think I understand better what is meant by “encoding”. Also the bit about regardless I found quite witty and even laughed out loud (xkcd.com kept me informed about the OED’s decision on that word).
So the encoding was probably not the problem then, because most programs default to ANSI, and switching to a 7-bit encoding was not everyone’s unanimous first suggestion … although I do understand now why ASCII is more universal. Open questions in my mind now include: does the GUI read ASCII and ANSI? And what encoding is used when copying and pasting text?
The main problem was most likely that your text was full of nonbreaking spaces. A conversion to actual ASCII would have got rid of those because the (rather limited) ASCII character repertoire doesn’t include nonbreaking spaces. I doubt that using an “ANSI” character set did that, though, so yes, the encoding was probably a red herring.
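As an aside, the cleanup described here (turning non-breaking spaces back into ordinary ones) can be sketched in a few lines of Python; the pasted text below is a hypothetical stand-in for the word processor’s output:

```python
# U+00A0 is the non-breaking "hard space" that word processors
# often emit in place of a normal space.
pasted = "A\u00a0paragraph\u00a0with\u00a0hard\u00a0spaces."

# Replace each non-breaking space with an ordinary one ...
cleaned = pasted.replace("\u00a0", " ")

# ... which is also what a round-trip through ASCII forces,
# since ASCII has no slot for U+00A0 at all.
print(cleaned)  # -> "A paragraph with hard spaces."
```

This is why converting to actual ASCII would have fixed the symptom: the offending character simply cannot survive the conversion.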
What GUI?
That would be an implementation detail of your operating system; if it’s competently implemented (which I think pretty much everything is these days) you should think of what’s copied and pasted as being made up of characters, not of the numbers used to encode them.
However, at least on some systems, if you copy from one application that supports (not just plain text but) formatted text into another, the formatting will be (at least roughly) preserved. This will happen, e.g., if you copy and paste from a web browser into Microsoft Word. I find that this is scarcely ever what I want. There’s usually a way to paste in just the text (sometimes categorized as “Paste Special”, which may offer other less-common options for pasting stuff too).
cool :-)
Either way, I owe you.
ANSI works if I turn off word wrap and put the space between paragraphs, as you suggested. Thanks again Lumifer.