One short answer is: “I, Raemon, do not really bite bullets. What I do is something more like flag where there were bullets I didn’t bite, or areas that I am confused about, and mark those on my Internal Map with a giant red-pen ‘PLEASE EVALUATE LATER WHEN YOU HAVE TIME AND/OR ARE WISER’ label.”
One example of this: I describe my moral intuitions as “Sort of like median-preference utilitarianism, but not really. Median-preference utilitarianism seems to break slightly less often, in ways slightly more forgivable, than other moral theories, but not by much.”
Meanwhile, my decision-making is something like “95% selfish, 5% altruistic within the ‘sort of but not really median-preference-utilitarian’ lens, but I look for ways for the 95% selfish part to get what it wants while generating positive externalities for the 5% altruistic part.” And I endorse people using a similarly hacky system as they figure themselves out.
(Also, while I don’t remember exactly how I phrased things, I don’t actually think robust agency is a thing people should pursue by default. It’s something that’s useful for certain types of people who have certain precursor properties. I tried to phrase my posts like ‘here are some reasons it might be better to be more robustly agentic, where you’ll be experiencing a tradeoff if you don’t do it’, while not making the claim that the robust-agency tradeoffs are correct for everyone.)
On the flipside, I think a disagreement I have with habryka (or did, a year or two ago), was something like habryka saying: “It’s better to build an explicit model, try to use the model for real, and then notice when it breaks, and then build a new model. This will cause you to underperform initially but eventually outclass those who were trying to hack together various bits of cultural wisdom without understanding them.”
I think I roughly agree with that statement of his; I just think that the costs of lots of people doing this at once are fairly high, and that you should instead do something like ‘start with vague cultural wisdom that seems to work and slowly replace it with more robust things as you gain skills that enable you to do so.’
“start with vague cultural wisdom that seems to work and slowly replace it with more robust things as you gain skills that enable you to do so.”
I think the thing I actually do here most often is start with a bunch of incompatible models that I learned elsewhere, then try to randomly apply them and see my results. Over time I notice that certain parts work and don’t, and that certain models tend to work in certain situations. Eventually, I examine my actual beliefs on the situation and find something like “Oh, I’ve actually developed my own theory of this that ties together the best parts of all of these models and my own observations.” Sometimes I help this along explicitly by introspecting on the switching rules/similarities and differences between models, etc.
This feels related to the thing that happens with my moral intuitions, except that there are internal models that didn’t seem to come from outside or my own experiences at all (basic things I like and dislike), and so sometimes all these models converge and I still have a separate thing that’s like NOPE, still not there yet.
“I think the thing I actually do here most often is start with a bunch of incompatible models that I learned elsewhere, then try to randomly apply them and see my results.”
This seems basically fine, but I mean my advice to apply to, like, 4-to-12-year-olds who don’t really understand what a model is. Anything model-shaped or robust-shaped has to bootstrap from something that’s more cultural-wisdom-shaped. (But I probably agree that you can have cultural wisdom that more directly bootstraps you into ‘learn to build models’.)
Thanks for the crisp articulation.
I think I was viewing “cultural wisdom” as basically its own black-box model, and in practice I think this is basically how I treat it.
Nitpick: Humans are definitely creating models at 12, and able to understand that what they’re creating are models.