Hmm, sure, my approach is definitely instrumental-rationality-oriented, but I value epistemology a lot and you won’t find me complaining about it. As far as I can predict, someone who has a pressing need to learn epistemic rationality efficiently and tries LW is going to be very frustrated once they get beyond the standard Sequences. Eliezer worked not only as an idea-adder, but also as an idea-distiller and sequence-stringer. So maybe it’s just that the rest of LW engages in idea-adding only?
About my instrumental ideas, sure, I’m interested in sharing them, but because of excessive lurking I have built up quite some inferential distance in a few areas that are important to me. So for now I feel like it’s easier to write about things that I do not know too much about… (It’s actually a good meta-example of how “sequence-stringing” could be seen as the real “magic” behind teaching and learning rationality, and maybe a separate vital skill not many people have?) I’m generally baffled about how to communicate any of this, especially the stuff related to the “rationality of happiness”, mostly because I know this part would sound utterly uninteresting. Roughly: here’s a bunch of methods that work not too badly, if you fine-tune them for a long time… here’s some splitting of mental buckets to give you more nuanced language… here’s a few tricks I stole from various sources and tested empirically… here’s my rough model of how to start success spirals of self-change by slowly building confidence and accountability, but who knows how it really works; I only tested this on myself, so there may be dozens of other factors. You get the idea.
All this reminds me of how it typically goes when you try to talk to people about regulating sleep. Problem 1: everyone is an expert. Problem 2: there’s no single method that works. Problem 3: no method works instantly. Problem 4: for anything to work, it needs to be fine-tuned for the individual, and it also depends on all the other factors, so you can’t test these things in isolation. Problem 5: hearing a description of a method that works does not seem to justify the effort until you experience for yourself what the benefits are. Problem 6: the benefits are spread over time, so it’s hard to notice them even if they are big and obvious in the “big picture” view.
All of this basically applies to teaching/learning instrumental rationality.
I really like lists as a way to gather the possible good and possible bad solutions to a problem, so long as people recognise it’s a list of ideas, not an instruction manual or the answers. I would like to get around to writing about this: understanding that if a piece of advice worked for someone, there was a way in which it worked; and considering whether there is a way to make it work for you can maybe help you find one.
I remember reading through that list sometime in the past, and I wanted to point something out to you.
[Disclaimer: all of the below is per my current understanding. It is a strong opinion moderately held.]
Sleep regulation is an example of optimizing a highly non-linear and volatile system with a multi-dimensional parameter space.
And in this class of problems, listing various parameters is useful only as a way to know what space we are trying to optimize over. But if you try to gather information about how useful each of those parameters is on its own, you are shooting yourself in the foot before you’ve even started.
If you hear a report of a method that worked for someone, it merely means it was the last missing piece to reach a local optimum.
In other words, this class of problems inherently does not have stable object-level solutions.
So from my perspective this simply points to applying and testing some of the meta-level strategies that work in other contexts, like timeboxing imitations of various people, or upsetting the system on purpose to find a new local optimum; both may work better than a random walk through the parameter space. Edit: please tell me if any of this sounds wrong to your ears; I’m afraid I’ve gotten carried away and ignored the possible inferential distances I might have here and there.
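The “upsetting the system on purpose” point has a well-known algorithmic analogue: greedy local tweaking gets stuck on whichever peak is nearest, while deliberate restarts from scattered points can find a better local optimum. Here is a toy sketch; the quality function and every number in it are invented purely for illustration, not a model of real sleep:

```python
import random

# Toy one-dimensional "sleep quality" landscape with two local optima:
# a shallow peak at x = 2 (quality 5) and a higher peak at x = 8 (quality 9).
def quality(x):
    return max(0.0, 5 - (x - 2) ** 2) + max(0.0, 9 - 0.5 * (x - 8) ** 2)

def hill_climb(x, steps=200, step_size=0.1):
    """Greedy local tweaking: accept a small random change only if it helps."""
    for _ in range(steps):
        candidate = x + random.uniform(-step_size, step_size)
        if quality(candidate) > quality(x):
            x = candidate
    return x

def climb_from_each(starts):
    """'Upset the system on purpose': restart from scattered points,
    climb locally from each one, and keep the best optimum found."""
    return max((hill_climb(s) for s in starts), key=quality)

stuck = hill_climb(2.0)                         # small tweaks never leave the shallow peak
found = climb_from_each([1.0, 3.0, 5.0, 7.0, 9.0])
print(quality(stuck), quality(found))           # the restarts reach the higher peak
```

Starting exactly on the shallow peak, pure tweaking stays there forever, while the restarts are practically guaranteed to land in the basin of the better peak; the same logic is why perturbing your whole routine can beat endlessly adjusting one parameter.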
As I said, it is a viable strategy, and as a step in the process, understanding why a piece of advice is applicable can help you in applying it.
Example: the advice “spend less time organising and just get down to it” (offered to me by a student who was borderline OCD and enjoyed the scheduling side of things).
I looked at this advice and realised it is really great advice for her, or for others in her position who spend too much time organising, but entirely unhelpful for me, since I spend roughly zero time organising myself. By understanding the reason why (as you said, “a method that worked for someone… to reach a local optimum”), you can better plan and apply solutions to your own situation. (I appear to be strongly agreeing with you.)
I wrote a very long list of sleep maintenance suggestions to help. Not a terrible way to offer solutions.