It’s unfortunate, but I agree with Ruby: your post is fine, but a top-level LessWrong post isn’t really the place for it anymore. I’m not sure where the best place to get feedback on this kind of thing is (maybe publish here on LW, but as a short-form or draft?), but you’re always welcome to send stuff to me! (Although I’m busy finishing my master’s over the next couple of weeks.)
I’m planning to continue publishing more details about this concept. I believe it will address many of the things mentioned in the post you linked.
Instead of posting it all at once, I’m posting it in smaller chunks that all connect.
I have something coming up about preventing instrumental convergence with formalized critical thinking, as well as a general problem solving algorithm. It’ll hopefully make sense once it’s all there!
Respect for thinking about this stuff yourself. You seem new to alignment (correct me if I’m wrong). I think it might be helpful to view posting as primarily about getting feedback rather than contributing directly, unless you have read most of what others have written on whichever topic you’re thinking or writing about.
Absolutely, I’m here for the feedback! No solution should go without criticism, regardless of what authority posted the idea, or how much experience the author has. :)
I think you might also be interested in this: https://www.lesswrong.com/posts/Nwgdq6kHke5LY692J/alignment-by-default. In general, John Wentworth’s alignment agenda essentially extrapolates your thoughts here and deals with the problems in them.
Thank you for the resource!
Oh, or the EA Forum; I see it’s crossposted.