In the previous Open Thread NancyLebovitz posted an article about the living-Biblically-for-one-year guy deciding to try living one year rationally. Alicorn noticed that the article was from 2008, so the project was probably cancelled.
However, I was thinking… if someone tried to do this, what would be the best way to do it? (It’s easy to imagine wrong ways: Hollywood rationality, etc.) We can assume that the person trying this experiment is not among the most rational people in the world, because they would already be too busy optimizing the universe, and wouldn’t have a year of time to spend on such an experiment. Also, they would probably already be living pretty rationally, so there would be no big change in their life, and therefore not an interesting report. (Although participation in the experiment might create some extra incentive to behave rationally more consistently.) On the other hand, a person who is too irrational would not be able to perform the task successfully. So, let’s assume that the experimental person is… maybe an average LW reader, or someone generally LW-compatible who hasn’t found the website yet. (This also assumes that the LW model of rationality is approximately correct. Well, without this assumption it doesn’t make much sense to discuss the best strategy here.)
So… let’s suppose we have a volunteer who says: “I will try living the next year as rationally as possible, of course within my limits, so give me advice about how to do it best. (In exchange I promise to keep logs and diaries, and to publish the whole story, which could create some popularity for LW and CFAR.)” What advice would we give them?
A good piece of meta-advice would be to keep a feedback loop with other aspiring rationalists. Not to just take some initial advice, go away, return after one year with a report, and risk getting a “you completely misunderstood it” reaction. Instead, they should keep in contact; the question is merely how frequent and how detailed the optimal contact would be, to avoid wasting too much time in web discussions. I could imagine: asking specific difficult questions whenever necessary, and writing a detailed report every month, with plans for the following months, so people on LW could comment on the strategy. Of course, even this decision could be discussed on LW.
Now this feels a bit like cheating. Are we trying to test what one person can achieve during a year of living rationally, or are we using the LW hive-mind to optimize the person? In other words, would the results of the experiment speak about the benefits of rationality for one person, or about the benefits of having the LW hive-mind available? Uhm… maybe there is actually no difference there? I mean, it is rational to use the best tools available. The virtue of scholarship, optimizing our social environment, the munchkin attitude, etc. For a munchkin, there is no such thing as “cheating”; there is only more or less winning. -- But the important question is what the goal of this experiment is. Is it optimizing one person’s life? Or is it describing a strategy that dozens of other people may follow? Because if too many people decide to follow it, the LW hive-mind may be unable to provide quality advice to all of them. On the other hand, such an event might motivate the LW hive-mind to become stronger and invent more efficient ways of supporting aspiring rationalists. -- Uhm… I guess some forms of cheating should be prohibited. For example, if a poor person volunteers for the project, and some people from LW send them money, and then they rationalize it as winning by being rational even if the person does nothing else smart. (“What? In their situation it was rational to volunteer for the rationality experiment and ask people for money. It was a strategy that successfully increased their utility, and rationality by definition is winning.”) On the other hand, if the person asks LW members for expert advice in a domain they didn’t study, I think that is completely fair; that is what they could (and perhaps should) have done even without the experiment. So, some kinds of support feel okay, and other kinds don’t. Maybe the proper question is: Imagine that the day after the report is successfully published, 1000 more people want to try using the same strategy. Would we feel that this contributed to our goal of raising the sanity waterline?
I also think that this kind of experiment would be fun, which is probably the main reason why I describe it; but as a side effect, if successful, it could be great marketing material. What do you think? Is this “try one year of living as rationally as possible with the support of the LW hive-mind” experiment a good idea? Is anyone interested in being a volunteer? Are enough people interested in supporting them? (If yes, maybe we could launch the project on April 1st, April Fools’ Day, because it’s about all of us being less foolish, isn’t it?)
I think this is an excellent project, so excellent that I have to ask, why are we not (if indeed we are not) already doing this, all the time?
Weight-watcher groups are watching their weight all the time, not just meeting to talk about it. People meeting to help each other learn a foreign language are learning that language the rest of the time, at least, for as many hours a day as they find useful. University students make studying a full-time job (the ones that are serious). Rationality is supposed to be applicable to everything; every moment is an opportunity for practice.
You mentioned specific groups which try to reach a specific goal. That’s great, on the level of individual goals. But we also need to go more meta. The foreign language group will not tell you to stop learning the language if your life situation changes so that the original purpose of learning the language is no longer valid, or if a better opportunity simply appears and it would be rational to move your limited resources from the language towards something else. Also, if you haven’t already decided to study a specific language, the group will not find you and explore with you whether starting to learn the language would be a good idea for you.
A rationalist group could help with this. We could already provide this support to each other: at meetups, on Skype, on mailing lists. Some of us already use our good friends for this purpose; but the problem is that these friends are not always LW-style rationalists, so sometimes we only get their “cached thoughts” as advice. Also, some people may use a psychologist; not necessarily as a source of rational advice, but as someone to listen and reflect back obvious irrationalities.
So, I think many of us are already using somewhat similar solutions, but either they were not consciously optimized, or they were optimized only for a partial goal.
Bad idea if you just go “live rationally”, imo. I predict it’d end up either as mostly useless cargo-cult behavior by a sufficiently incompetent-and-unaware-of-it participant; as going crazy and frustrated trying to apply raw heuristics-and-biases research, not yet processed for daily human consumption, to everyday living 24/7; or as doing the general wise living thing that smart people with life experience often already do to the best of their ability, but which you can’t really impart in an instruction manual without having the “smart” and “life experience” parts covered.
Might be salvageable if you narrowed it down a bit. Live rationally to what end? Not having a clear idea of what my goals are is the first problem that comes to my mind when looking at this. I don’t see why “my goals are doing exactly what I already do day in, day out, so I’ve already been living rationally all this time, thank you very much” would necessarily be incoherent for example. So maybe go for success by society-wide measuring sticks, like impressive performance in standardised education and a good income? A lot of people are doing that, but I’m not seeing terribly much sentiment here for people trying to maximize their earning potential and professional placement as the end goal in life, though some do consider it instrumentally.
So maybe say the goal is to live the good life. Only it seems that the good life consists of goals that are often not quite accessible to the conscious mind, and the methods to search for and pursue them can be quite elaborate and often need to be improvised on the spot.
Not to be all bleak and obscurantist though, there is the Wissner-Gross entropy thing, which is a quite interesting idea for a universal goal heuristic, something like “maneuver to maximize your decision space”. It’s also pretty squarely in the not-yet-ready-for-human-consumption, will-drive-you-crazy-if-you-try-to-naively-apply-it-24/7 bin. And if you could actually codify how well someone is satisfying a goal like that, you’d probably be getting a PhD and a research job at Google, not running a forum challenge.
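To make the heuristic concrete, here is a toy sketch in the simplest setting I can think of: a tiny gridworld invented for this example. This is not the actual Wissner-Gross causal-entropy formulation, just the “keep your options open” intuition: from each candidate move, count how many distinct cells remain reachable within a short horizon, and prefer the move that keeps the most options open.

```python
# Toy illustration of "maneuver to maximize your decision space".
# The grid, horizon, and function names are all made up for this example.
from collections import deque

GRID = [
    "#######",
    "#.....#",
    "#.###.#",
    "#.#...#",
    "#.#.###",
    "#.....#",
    "#######",
]
MOVES = {"up": (-1, 0), "down": (1, 0), "left": (0, -1), "right": (0, 1)}

def is_open(pos):
    r, c = pos
    return 0 <= r < len(GRID) and 0 <= c < len(GRID[0]) and GRID[r][c] == "."

def reachable_states(start, horizon):
    """Breadth-first search: distinct open cells reachable within `horizon` steps."""
    seen = {start}
    frontier = deque([(start, 0)])
    while frontier:
        (r, c), dist = frontier.popleft()
        if dist == horizon:
            continue
        for dr, dc in MOVES.values():
            nxt = (r + dr, c + dc)
            if is_open(nxt) and nxt not in seen:
                seen.add(nxt)
                frontier.append((nxt, dist + 1))
    return len(seen)

def best_move(pos, horizon=4):
    """Pick the legal move from `pos` that keeps the most future states reachable."""
    options = {name: reachable_states((pos[0] + dr, pos[1] + dc), horizon)
               for name, (dr, dc) in MOVES.items()
               if is_open((pos[0] + dr, pos[1] + dc))}
    return max(options, key=options.get), options

if __name__ == "__main__":
    move, scores = best_move((1, 1))
    print("chosen move:", move, "| reachable states per option:", scores)
```

Even this toy version only works because the state space is tiny and fully known; that gap between a gridworld and an actual human life is roughly why naively applying the heuristic 24/7 is not a sane plan.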
The participant could be observed by the LW community; something like a reality show. The costs of observation would have to be weighed, but I imagine the volunteer would provide:
A short log every day. Such as: “Learned 30 new words with Anki. Met with a friend and discussed our plans; seems interested, but we didn’t agree on anything specific. Exercise. Wrote two pages of my thesis, the last one needs a rewrite. Spent 3 hours on the internet.” Not too detailed, so as not to waste too much time, but detailed enough to provide an idea about progress. (The log would be kept outside of LW, to reduce the volunteer’s temptation to procrastinate here.)
A plan every week: What do I want to achieve; what needs to be done. Something like a GTD plan with “next actions”. What could go wrong, and how will I react. What do I want to avoid, and how. -- At the end of the week: A summary of what happened as expected, what was different, and what lessons can be drawn. -- The LW hive-mind would discuss this, and the volunteer could decide whether to follow its suggestions.
Every month: A comment-sized report in LW Group Rationality Diary; for the same reason other people write there: to encourage each other.
going crazy and frustrated trying to apply raw heuristics-and-biases research, not yet processed for daily human consumption, to everyday living 24/7
In this case I would recommend giving feedback: “I’m trying to do this, and it drives me crazy. Any advice? I spent five minutes thinking about it, and here are my ideas: X, Y, Z.”
or doing the general wise living thing that smart people with life experience often already do to the best of their ability
This could probably be solved by making a prediction at the beginning of the project. The volunteer would list the changes in the previous years, successes and failures, and extrapolate: “Using my previous years as an outside view, I predict that if I didn’t participate in this experiment, I would probably do A, B, C.” At the end of the project, the actual outcomes can be compared with the prediction.
Live rationally to what end? Not having a clear idea of what my goals are is the first problem that comes to my mind when looking at this.
Sure. The goals would be stated by the volunteer, either from the beginning, or at least at the end of the first month.
I don’t see why “my goals are doing exactly what I already do day in, day out, so I’ve already been living rationally all this time, thank you very much” would necessarily be incoherent for example.
It’s perfectly okay. It just does not make sense for this specific person to participate in the experiment. The experiment is meant for people who are not in this situation.
Instead of trying to do the perfect thing immediately, I would recommend continuous improvement. Find the most painful problems, and fix them first. Find the obvious mistakes, and do better (not best, just better). Progress towards your current goals, but when you realize they were mistaken, improve them. If you think you can’t make a big change, start with small changes, and once in a while reconsider your beliefs about the big change. The goal is not to be perfect, but to keep improving.
If at the end you are significantly better off than a prediction based on your past, that’s a success. If as a side effect we get better experimental data, or if you can rewrite and publish your logs as an e-book to make extra money and advertise CFAR, that’s even better. If you inspire a dozen other people, and if most of them also become significantly better off than the predictions based on their pasts, and if the improvement is still there even after the end of the experiment, that would be completely awesome.
The decision of what is “better” is of course individual, but I hope there would be a strong correlation. (On the other hand, I would expect different opinions on what is “best”.)
I think most Christians would say that Jacobs completely misunderstood Christianity.
I think that experiments like this which take ideas very seriously are good because they give us an additional perspective of what rationality happens to be.
Jacobs’ Biblical behavior : Christianity = Hollywood rationality : LW rationality

He cheated by approximating the outside behavior, while preserving his inside behavior (thoughts and beliefs) as much as possible. When the year was over, he probably reverted to normal. That kind of experiment is only good for examining a strawman. And also… for publicity.
I believe that in this community it is completely obvious that we are not trying to perform Hollywood rationality. However, there is still a risk that our understanding is imperfect, and taking ideas seriously will expose the imperfections. For example, we may publicly profess that emotions are important, and yet our “rational” plans may fail to consider them. But this is where we need to use our ability to go meta and think: “okay, this plan sounded completely reasonable, but now that I have been doing it for two months, I feel somehow unhappy and unmotivated”, so we try to update the plan, instead of merely (a) blindly following it, or (b) giving up completely.
One of the best examples I have of a rational plan is my attempt to gain weight by adding 800 kcal of maltodextrose to my daily tea consumption. It made so much sense.

On the other hand, it didn’t work, and it took me 2 months to admit that my scale still showed the same weight. The planes didn’t land.
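As a back-of-the-envelope check of how large that discrepancy was, here is a quick calculation using the common (and admittedly crude) rule of thumb that a surplus of roughly 7700 kcal corresponds to about 1 kg of body mass; it ignores metabolic adaptation, changes in appetite, absorption, and so on, which is presumably part of why the plan failed:

```python
# Naive expected weight gain from a steady caloric surplus.
# The 7700 kcal/kg figure is a rule of thumb, not a precise constant.
SURPLUS_KCAL_PER_DAY = 800
KCAL_PER_KG = 7700
DAYS = 60  # roughly the two months mentioned above

expected_gain_kg = SURPLUS_KCAL_PER_DAY * DAYS / KCAL_PER_KG
print(f"naive expected gain over {DAYS} days: {expected_gain_kg:.1f} kg")
# -> about 6.2 kg, so a scale showing no change at all is a big discrepancy
```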
However, there is still a risk that our understanding is imperfect, and taking ideas seriously will expose the imperfections.
I think it’s pretty certain that our understanding isn’t 100% perfect. We can run controlled trials to update our understanding of rationality, and as far as I understand, CFAR wants to go that way.

Taking ideas overly seriously is another way to see imperfections and gather knowledge. I think that when one tries to gather knowledge about a domain, it’s useful to use many different approaches.
Why were you trying to gain weight, and is it still a goal?

I deliberately adjust my weight up or down by ~10 kg fairly regularly, and depending on your situation, I might be able to offer some ideas.

Yes, I still have that goal. I’m 181 cm tall and at 56 kg, ±2 kg, for the last 3 years. Probably also the last ten, but only in the last 3 have I had regular measurements.
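For context, the conventional body-mass index for those numbers (weight in kilograms divided by height in metres squared, with 18.5 being the usual lower bound of the “normal” range):

```python
# BMI = weight / height^2, using the figures from the comment above.
height_m = 1.81
weight_kg = 56

bmi = weight_kg / height_m ** 2
print(f"BMI = {bmi:.1f}")  # -> about 17.1, i.e. below 18.5, conventionally classified as underweight
```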
I’m curious about the qualifications to embark on such an experiment, and what makes it different from what everyone who posts in the rationality diaries has already been doing. I mean, for all intents and purposes, I made an effort to live more LW Rationally after reading the sequences, but that’s clearly not what we’re aiming for, here. Also (as I whine about frequently), I’m “poor”/legally blind/living in bible-thumping rural America, but would prefer to improve locally rather than move to, say, the SF Bay area; does any of that disqualify me from volunteering for the experiment?
I do really like the sound of this, though, and I’m hoping something useful (or at minimum entertaining) comes of it.
I don’t have a completely clear idea at this moment—to be honest, I am not even sure the whole thing is not completely insane—but I imagine a very high commitment to the cause. For example, I am somewhat trying to live rationally, but I keep forgetting some useful things, or I break my rules whenever I don’t feel like following them, etc. The idea is that as a volunteer, I would stop making various kinds of excuses, and would either give other people more control over my life or express my objections as honestly as I could. For example, if other people told me to quit my job and do something else, I would either do it, or would write an explanation of why I am not convinced that this is the rational thing to do. I would respond to all rationality advice (at least by saying “sorry, for some emotional reasons I don’t completely understand, I can’t do this”), instead of merely picking the parts that feel nice and silently ignoring the rest, and then forgetting even those nice parts whenever it is convenient.

In other words, I would make an extraordinary effort to live rationally for the whole year. If some things are taboo to me, I would try to declare that in advance (of course I cannot predict everything), to distinguish between something that is unacceptable in the long term and a short-term desire to avoid some inconvenience. I would take risks, when told to, assuming that the given advice is good. Or I would set some clear limits, such as: the money I have now in my bank account must remain there untouched during the experiment, and I will not take on any debt; but of the money I make during the experiment, you can tell me how to use it. -- Something like the guy who lived one year Biblically did: he also had some limits, e.g. didn’t stone anyone, but otherwise he tried to follow the rules.
Your refusal to move to a different area, even if advised to, does not disqualify you automatically. If this is your psychological constraint, it’s good to be open about it. On the other hand, if such a constraint made the members of the hive-mind so disappointed that they refused to provide you support within these limits, that could disqualify you. (Or they may just decide to use their limited resources on someone with fewer constraints.) But if we communicate all this in advance, we minimize the risk of disappointment during the experiment; everyone either agrees on the same rules, or does not participate. If the hive-mind knows about your constraints in advance, they have no right to complain later.
EDIT: After reading what I wrote… it seems like I’m equating “living rationally” with “obeying the LW hive-mind”, which technically are two different things. The idea behind this is that there is only one rationality; there is no “my rationality” and “your rationality” (there may be different values, though). I usually behave irrationally when I am under the control of my impulses, because at that moment, they are all I see. Replacing these impulses with outside control should improve things. And the communication would make my thoughts more explicit, which should also help.

EDIT2: I believe it will be entertaining. It will be like a reality show (which is something humans love to watch, even if they are ashamed to admit it), only instead of stupid people doing pointless things there will be smart people doing potentially awesome things. Humans love stories. Humans love being a part of a story.
Something like the guy who lived one year Biblically did: he also had some limits, e.g. didn’t stone anyone, but otherwise he tried to follow the rules.
He didn’t stone anybody to death, but he did throw some pebbles at other people to at least sort of follow the guideline.