I agree with your claims that “I believe most people are bad at figuring out what they actually value and prefer” and “by adding this habit of reflection, you could become much happier than you are right now”, but I experience the scales you’re discussing as extremely handy self-exploration tools, and I can’t tell from the post whether you do too.
I read in the post a description of a relationship with assessment scales where the scales serve as instructions to a person—“I want to attain this state because it’s at the top of the list”. I also read it as contrasting this scales-as-orders mindset against a paradigm where instructions are cherry-picked out of scales based on personal preferences and values that are both intrinsic and obvious to oneself.
I think there’s a third approach to scales or metrics: they can serve as a distillation of others’ research more like a library or a department store than a set of orders. When I look at scales like the ones you describe, I tend to treat them the way I treat an engaging book, asking questions like:
What would it be like to be at this stage? What would suck about being there?
If I had to pick just one of these stages to occupy indefinitely, which one would I actually like best, based on these descriptions?
What evidence have I seen that someone could be in several of these stages at once, or none?
What hypothetical people can I imagine who are technically at a “lower” stage but get more/better things done than those at a “higher” stage or vice versa?
In these ways, internalizing a well-researched scale gives me the feeling I get when I pick up an item in a video game that reveals a section of the world map which was hidden before.
Your bulleted self-inquiries are very useful. They seem like playful questions I would feel comfortable asking someone else if I felt they were being hijacked by a metric or scale (whereas a more naive approach could come across as judgmental and tactless).
Not all of your questions fit every situation, of course, but that’s not the point! Actually, I want to try out a few examples:
Long-distance running
What would it be like to be very skilled? I would be much fitter than I am now, so less winded when doing other things. I feel like there’s a bragging angle, but who likes a bragger?
What would suck? The long practice hours, and I would likely be more prone to injuries and joint problems.
What’s the good part of training to be a skilled runner? Consistently being outside would be nice. I think I would feel better after training.
What would be the bad part of training? That out-of-breath feeling and those burning muscles are uncomfortable.
Are there people who aren’t skilled long-distance runners but are still better in a meaningful way? Swimmers are very fit, have greater upper-body strength, and aren’t as prone to injuries (though, looking it up, they do suffer shoulder injuries).
AI Alignment Researcher
What would it look like to be successful? Being paid to do research full time. Making meaningful contributions that reduce x-risk. Having lots of smart people who will listen and give feedback. Having a good understanding of lots of different, interesting topics.
What would suck about it? Maybe being in an official position would create counterproductive pressure and responsibility to make meaningful contributions. I would be open to more criticism. I may feel responsible and slightly helpless regarding people who want to work on alignment but have trouble finding funding.
What would be great about the process of becoming successful? Learning interesting subjects and becoming better at working through ideas. Gaining new frames to view problems. Meeting new people to discuss interesting ideas and “iron sharpening iron”. Knowing I’m working on something that feels legitimately important.
What would suck about the process? The context-loading of math texts is something to get used to. There’s a chance of failure due to lack of skill or not knowing the right people. There is no road map that guarantees success, so there is a lot of uncertainty about what to do specifically.
Are there people who are great but not successful alignment researchers? There are people who are good at communicating these ideas to others (for persuasion and distillation), or who work in machine learning jobs and will be in good positions of influence regarding AI safety concerns. There are also other x-risks out there to work on, and EA fields that also feel viscerally important.
I’ll leave it here due to time, but I think I would add “How could I make the process of getting good more enjoyable?” and make explicit which goals I actually care about.