In practice, I used to orient toward helping MIRI with recruiting. (Not, mostly, toward developing an art of human rationality, though there was some of that.)
MIRI is mostly not recruiting, or at least not in the way it used to for the research programs it has since discontinued, so that is no longer a viable model for impact. If you like, you could reasonably accurately see this as a cause of why I personally have been primarily trying to understand the world and look for joints, rather than primarily trying to run mainlines at scale.
I do not think I’ve given up in any important sense, and I do not personally think CFAR has given up in any important sense either. That said, one of the strengths of our community has always been its disagreeableness, and the amount of scaling down and changing of activities is enough that I will not think someone necessarily uninformed if they say the opposite.
My guess is actually that we’ll be less focused on AI or other narrow interventions, and more focused on something sort of like “human rationality broadly” (without “rationality” necessarily being quite the central thing—maybe more like: “sanity” or “ability to build and inquire and be sane and conscious and to stay able to care”). (“Rationality” in the sense of the “lens that sees its own flaws” is an amazing goal, but may have some more basic things as prereqs / necessary context, so may need to have a home as part of a larger goal.) But it’s hard to say.
We are open to hiring new people. Message me if you’re interested. If you come to CFAR, you’ll have a lot of freedom to do things you personally have telos to do, whether or not the rest of us fully see it; and you may be able to get some cool collaborations with us or others in our orbit, though we are few at the moment, and that part depends on whether others see sense in your project. Also, we have a great venue.