Liveblogging some thoughts as I look over this, on the question “would it be useful for me to ‘get started with Naturalism’?”
Some things I think I currently have a reasonable amount of curiosity about:
what’s up with big companies? why do I expect them to go off the rails, especially when building AI?
...what’s up with small companies, for that matter? why does Lightcone, an 8-person company, also seem to go off the rails?
what sort of rationality content would be good for LessWrong (or the surrounding ecosystem) and why?
what’s up with the-thing-I-call-frame-control?
Things I don’t think I’m currently curious about exactly but feel like “I should be?”
idk, a bunch of technical stuff about ML and alignment. I’d like to understand, at the object level, what it means to be making progress in the fields that seem really important, so that I can make good choices as a LW admin. When I give myself time to actually focus on this, I don’t struggle much to get curious about it, but it requires setting a lot of time aside and not getting distracted.
Things I’m maybe sort of curious about but also resistant to in some way:
Despite investing a lot in learning to grieve, learning to notice when I’m defensive or fighty, and learning not to get into kinda dumb fights about how to coordinate, I still seem to fall into all those failure modes a lot.
(Separately, things that I obviously needed to grieve in a fairly ordinary way still took a really long time. Maybe that’s just ‘the system working as intended; grieving takes a long time and kinda sucks’, but I think there was something in fact kinda off about how I was processing it. This one seems to have mostly wrapped itself up by now, though.)
This is great, I love it. I’d also love it if you came back to this comment when you’re done reading the whole sequence and told me how it looks and feels to you from that perspective.