I like your comment and think it’s insightful about why/when to wirehead (or not).
Nitpick about your endorsed-skills point: people don’t always have high overlap between what they know and what they wish they knew (or endorse others knowing). I’ve had a lifelong obsession with learning, especially with acquiring skills. Unfortunately, my selection of the next thing to learn is very unguided. It has thus been a recurring struggle in my life to stay focused on learning the things I judge to be objectively valuable. I have a huge list of skills/hobbies I think are mostly or entirely impractical or useless (e.g. artistic woodworking, paleontology), and also lots of things I’ve been telling myself for years that I ought to learn better (e.g. linear algebra). I’ve been wishing for years that I had a better way to reward myself for studying things I reflectively endorse knowing, rather than wasting time/energy studying unendorsed things.
In other words, I’d love a method (like Max Harms’ fictional Zen Helmets) to better align my System 1 motivations with my System 2 motivations. The hard part is figuring out how to implement this change without corrupting the System 2 values or their value-discovery-and-updating processes.