This post has tentatively entered my professional worldview. “Big if true.”
I’m looking at this through the lens of “how do we find/create the right people to help solve x-risk and other key urgent problems.” The track record of AI/rationalist training programs doesn’t seem that great. (i.e. they seem to typically work mostly via selection[1]).
In the past year, I’ve seen John attempt to make an actual training regimen for solving problems we don’t understand. I feel at least somewhat optimistic about his current training attempts, partly because his models make sense to me and partly based on his writeup of the results here. But I think we’re another couple years out before I really know how well it panned out.
I almost reviewed this post without re-reading it, but am glad I stopped to fully re-read. The mechanics/math of how the bits-of-selection worked were particularly helpful and I’d forgotten them. One thing they highlight: you might need a lot of different skills. And maybe some of those skills are ineffable and hard to teach. But others might be much more teachable. So maybe you need to select on one hard-to-find property, but can train a lot of other skills.
Some musings on training
I’m maybe more optimistic than John about what percentage of “school” is “training”. I think maybe 10-15% of what I learned in middle/high school was at least somewhat relevant to my long-term career, and later, when I went to a trade school, I’d say closer to 50% of it was actual training, which I’d have had a harder time doing on my own. (And my trade school created half of its classes out of an attempt to be an accredited university, i.e. half the classes were definitively bullshit, and the other half were basically all useful if you were going into the domain of computer animation.)
When I say “rationality training turned out to mostly be selection”, I think what I mean is “it didn’t create superheroes, the way HPMOR might have vaguely led you to believe.” And perhaps, “it mostly didn’t produce great researchers.” I do think the CFAR-and-Leverage ecosystem produced a bunch of relevant skills for navigating life, which raise the sanity-and-coordination waterline. I think it had the positive impact of “producing pretty good citizens.” I’ve heard CFAR instructors complain that mostly they don’t seem to imbue the spark of rationality into people, they only find people who already had the spark. But it clearly, IMO, created an environment where people-with-that-spark cultivated it and leveled up at it.
I’ve heard grad school successfully trains people in the ineffable domain of research (or, the “hard-to-eff” domain of research). The thing that seems off/unsatisfactory about it, from the perspective of the x-risk landscape, is that it doesn’t really train goal-directed research, where you’re actually trying to accomplish a particular task, and notice when you might be confused about how to approach it.