My MATS program people just spent two days on an exercise to “train a shoulder-John”.
The core exercise: I sit at the front of the room, and have a conversation with someone about their research project idea. Whenever I’m about to say anything nontrivial, I pause, and everyone discusses with a partner what they think I’m going to say next. Then we continue.
Some bells and whistles which add to the core exercise:
Record guesses and actual things said on a whiteboard
Sometimes briefly discuss why I’m saying some things and not others
After the first few rounds establish some patterns, look specifically for ideas which will take us further out of distribution
Why this particular exercise? It’s a focused, rapid-feedback way of training the sort of usually-not-very-legible skills one typically absorbs via osmosis from a mentor. It’s focused specifically on choosing project ideas, which is where most of the value in a project is (yet also where little time is typically spent, and therefore one typically does not get very much data on project choice from a mentor). Also, it’s highly scalable: I could run the exercise in a 200-person lecture hall and still expect it to basically work.
It was, by all reports, exhausting for everyone but me, and we basically did this for two full days. But a majority of participants found it high-value, and marginal returns were still not dropping quickly after two days (though at that point people started to report that they expected marginal returns to drop off soon).
I’d be interested to see other people try this exercise—e.g. it seems like Eliezer doing this with a large audience for a day or two could generate a lot of value.
This was arguably the most useful part of the SERI MATS 2 Scholars program.
Later on, we actually did this exercise with Eliezer. It was less valuable. In John's version, it seemed like John was mainly prodding the people who were presenting their ideas, such that their own patterns of thought would carry them in a good direction. For example, John would point out that a person was proposing a one-bit experiment, and ask whether there wasn't a better experiment that would give us lots of information all at once.
This was very useful: once you learn what kinds of things John will say, you can say them to yourself later on and steer your own patterns of thought in a good direction on demand. When we did this exercise with Eliezer, he was mainly explaining why a particular idea would not work, often without explaining the generator behind his criticism. This can of course still be valuable as feedback on a particular idea. However, it is much harder to extract a general reasoning pattern from it that you can then successfully apply later in different contexts.
For example, Eliezer criticized an idea about developing a really good understanding of the scientific process, which we could then give to AI alignment researchers so that they could make a lot more progress than they otherwise would. He criticized this idea as basically too hard to execute, because it is too hard to successfully communicate how to be a good scientist, even if you are a good scientist yourself.
Assuming the assertion is correct, hearing it doesn't necessarily teach you how to think in other contexts such that you would correctly identify whether an idea is too hard to execute, or flawed in some other way. I am not saying that you couldn't extract a reasoning algorithm from this kind of feedback, but that doing so would take a lot more effort and time than extracting a reasoning algorithm from the things John was saying.
Now, all of this might mainly have been an issue of Eliezer not having a good model of how this workshop was supposed to have a positive influence on the people attending it. I would guess that if John had spent more time thinking about how to communicate what the workshop is doing and how it achieves its goal, then Eliezer could probably have done a much better job.
This suggests formulating exercises about the author's responses to various prompts as part of technical exposition (or explicitly delimiting a narrative by choices of the direction of its continuation). When properly used, this doesn't seem to lose much value compared to the exercise you describe, but it's more convenient for everyone. Potentially this congeals into a style of writing, with no explicit exercises or delimitation, that admits easy formulation of such exercises by the reader. This already works for the content of technical writing, but less well for choices of topics/points contrasted with alternative choices.
So possibly the way to do this is to habitually mention alternative responses (ones expected to be plausible to the reader, while decisively, if not legibly, rejected by the author), and to lead with these rather than the preferred responses. That sounds jarring and verbose, though: a tradeoff that needs to be worth making rather than a straight improvement.
Strong endorsement; this resonates with:
My own experiences running applied rationality workshops
My experiences trying to get people to pick up “ops skill” or “ops vision”
Explicit practice I’ve done with Nate off and on over the years
May try this next time I have a chance to teach pair debugging.