Your bulleted self-inquiries are very useful. They seem like more playful questions that I would feel comfortable asking someone else if I felt they were being hijacked by a metric/scale (where a more naive approach could come across as judgmental and tactless).
Not all of your questions fit every situation, of course, but that’s not the point! Actually, I want to try out a few examples:
Long-distance running
What would it be like to be very skilled? I would be much fitter than I am now, so less winded when doing other things. I feel like there’s a bragging angle, but who likes a bragger?
What would suck? The long practice hours, and I would likely be more prone to injuries and joint problems.
What’s the good part of training to be a skilled runner? Consistently being outside would be nice. I think I would feel better after training.
What would be the bad part of training? That out-of-breath feeling and those burning muscles are uncomfortable.
Are there people who aren’t skilled long-distance runners, but are still better in a meaningful way? Swimmers are very fit, have greater upper-body strength, and aren’t as prone to injuries (though looking it up, they do suffer shoulder injuries).
AI Alignment Researcher
What would it look like to be successful? Being paid to do research full time. Making meaningful contributions that reduce x-risk. Having lots of smart people who will listen and give me feedback. Having a good understanding of lots of different, interesting topics.
What would suck about it? Maybe being in an official position will cause counter-productive pressure/responsibility to make meaningful contributions. I will be open to more criticism. I may feel responsible and slightly helpless regarding people who want to work on alignment, but have trouble finding funding.
What would be great about the process of becoming successful? Learning interesting subjects and becoming better at working through ideas. Gaining new frames to view problems. Meeting new people to discuss interesting ideas and “iron sharpening iron”. Knowing I’m working on something that feels legitimately important.
What would suck about the process? The context-loading of math texts is something to get used to. There’s a chance of failure due to lack of skill or not knowing the right people. There is no road map that guarantees success, so there is a lot of uncertainty about what to do specifically.
Are there people who are great, but aren’t successful Alignment researchers? There are people who are good at communicating these ideas to others (for persuasion and distillation), or who work machine learning jobs and will be in good positions to push for AI safety concerns. There are also other x-risks and EA fields out there that also feel viscerally important.
I’ll leave it here due to time, but I think I would add “How could I make the process of getting good more enjoyable?” and make explicit which goals I actually care about.