I’m currently exploring modes of “focusing more on learning and/or thinking,” and I found this post to be a useful set of hypotheses to try.
I’m curious about “forming an opinion about an existing thing in the world” vs “contributing thoughts to unsolved problems.”
I’m currently looking at the landscape of interpretability, and I have a vague sense that it’s not pointed in the right direction, but I’m lacking a lot of context on how modern ML interpretability actually works and what’s been tried. I’m not sure if this is the sort of thing you think is amenable to this process.