Bad idea if you just go “live rationally”, imo. I predict it’d end up either as mostly useless cargo-cult behavior by a sufficiently incompetent-and-unaware-of-it participant, going crazy and frustrated trying to apply raw heuristics-and-biases research, not yet processed for daily human consumption, to everyday living 24/7, or doing the general wise-living thing that smart people with life experience often already do to the best of their ability, but which you can’t really impart in an instruction manual without having the “smart” and “life experience” parts covered.
Might be salvageable if you narrowed it down a bit. Live rationally to what end? Not having a clear idea of what my goals are is the first problem that comes to my mind when looking at this. I don’t see why “my goals are doing exactly what I already do day in, day out, so I’ve already been living rationally all this time, thank you very much” would necessarily be incoherent, for example. So maybe go for success by society-wide measuring sticks, like impressive performance in standardised education and good income? A lot of people are doing that, but I’m not seeing much sentiment here for maximizing earning potential and professional placement as the end goal in life, though some do consider it instrumentally.
So maybe say the goal is to live the good life. Only it seems that the good life consists of goals that are often not quite accessible to the conscious mind, and methods to pursue them that can be quite elaborate and often need to be improvised on the spot.
Not to be all bleak and obscurantist though, there is the Wissner-Gross entropy thing, which is a quite interesting idea for a universal goal heuristic, something like “maneuver to maximize your decision space”. Also pretty squarely in the not-yet-ready-for-human-consumption, will-drive-you-crazy-if-you-naively-apply-it-24/7 bin. And if you could actually codify how well someone is satisfying a goal like that, you’d probably be getting a PhD and a research job at Google, not running a forum challenge.
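For what it’s worth, the heuristic can at least be toyed with in a sandbox. Here’s a minimal sketch (my own construction, not anything from the Wissner-Gross paper, and the grid, horizon, and scoring are all arbitrary choices): an agent scores each candidate move by how many distinct cells stay reachable within a short horizon, and takes the move that keeps its options widest.

```python
# Toy sketch of "maneuver to maximize your decision space": an agent on a
# small grid scores each legal move by how many distinct open cells remain
# reachable within a short horizon, and takes the move preserving the most.
from collections import deque

GRID = [
    "#####",
    "#...#",
    "#.#.#",
    "#.###",
    "#####",
]
MOVES = [(-1, 0), (1, 0), (0, -1), (0, 1)]  # up, down, left, right

def is_open(pos):
    r, c = pos
    return 0 <= r < len(GRID) and 0 <= c < len(GRID[0]) and GRID[r][c] == "."

def reachable_within(start, horizon):
    """Count distinct open cells reachable from `start` in <= horizon steps (BFS)."""
    seen = {start}
    frontier = deque([(start, 0)])
    while frontier:
        pos, dist = frontier.popleft()
        if dist == horizon:
            continue
        for dr, dc in MOVES:
            nxt = (pos[0] + dr, pos[1] + dc)
            if is_open(nxt) and nxt not in seen:
                seen.add(nxt)
                frontier.append((nxt, dist + 1))
    return len(seen)

def best_move(pos, horizon=2):
    """Pick the legal move whose successor cell keeps the largest decision space."""
    options = [((dr, dc), reachable_within((pos[0] + dr, pos[1] + dc), horizon))
               for dr, dc in MOVES if is_open((pos[0] + dr, pos[1] + dc))]
    return max(options, key=lambda o: o[1])[0]
```

From (2, 1), moving up leads into the open corridor (5 cells reachable in two steps) while moving down enters the dead end at (3, 1) (only 3 cells), so `best_move((2, 1))` returns up, `(-1, 0)`. The instructive part is mostly how fast “codify decision space” collapses into arbitrary modeling choices (horizon length, state representation) once you leave a toy grid, which is rather the point above.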
The participant could be observed by the LW community; something like a reality show. The costs of observation would have to be weighed, but I imagine the volunteer would provide:
A short log every day. Such as: “Learned 30 new words with Anki. Met with a friend and discussed our plans; seems interested, but we didn’t agree on anything specific. Exercise. Wrote two pages of my thesis, the last one needs a rewrite. Spent 3 hours on the internet.” Not too detailed, so as not to waste too much time, but detailed enough to give an idea of progress. (The log would be outside of LW, to reduce the volunteer’s temptation to procrastinate here.)
A plan every week: What do I want to achieve; what needs to be done. Something like a GTD plan with “next actions”. What could go wrong, and how will I react. What do I want to avoid, and how. -- At the end of the week: A summary; what happened as expected, what was different, what lessons can be drawn. -- The LW hive mind would discuss this, and the volunteer could decide whether to follow its suggestions.
Every month: A comment-sized report in LW Group Rationality Diary; for the same reason other people write there: to encourage each other.
going crazy and frustrated trying to apply raw heuristics-and-biases research, not yet processed for daily human consumption, to everyday living 24/7
In this case I would recommend giving feedback: “I’m trying to do this, and it drives me crazy. Any advice? I spent five minutes thinking about it, and here are my ideas: X, Y, Z.”
or doing the general wise-living thing that smart people with life experience often already do to the best of their ability
This could probably be solved by making a prediction at the beginning of the project. The volunteer would list the changes from previous years, successes and failures, and extrapolate: “Using my previous years as an outside view, I predict that if I didn’t participate in this experiment, I would probably do A, B, C.” At the end of the project, the actual outcomes can be compared with the prediction.
Live rationally to what end? Not having a clear idea of what my goals are is the first problem that comes to my mind when looking at this.
Sure. The goals would be stated by the volunteer, either from the beginning, or at least at the end of the first month.
I don’t see why “my goals are doing exactly what I already do day in, day out, so I’ve already been living rationally all this time, thank you very much” would necessarily be incoherent for example.
It’s perfectly okay. It just does not make sense for this specific person to participate in the experiment. The experiment is meant for people who are not in this situation.
Instead of trying to do the perfect thing immediately, I would recommend continuous improvement. Find the most painful problems, and fix them first. Find the obvious mistakes, and do better (not best, just better). Progress towards your current goals, but when you realize they were mistaken, improve them. If you think you couldn’t make a big change, start with small changes; and once in a while reconsider your beliefs about the big change. The goal is not to be perfect, but to keep improving.
If at the end you are significantly better than a prediction based on your past, that’s a success. If as a side effect we get better experimental data, or if you can rewrite and publish your logs as an e-book to make extra money and serve as advertising for CFAR, that’s even better. If you inspire a dozen other people, and if most of them also become significantly better than the predictions based on their pasts; and if the improvement is still there even after the end of the experiment; that would be completely awesome.
The decision of what is “better” is of course individual, but I hope there would be a strong correlation. (On the other hand, I would expect different opinions on what is “best”.)