I love this post; it’s a really healthy way of exploring assumptions about one’s goals and subagents. It’s really hard to come up with simple diagrams that communicate key info, and I am impressed by choices such as changing the color of the path over time. I also find it insightful on questions like what a distracted agent looks like, or how adding subgoals can improve things.
It’s the sort of thing I’d like to see more rationalists doing. It’s a great read, and I feel very excited about more of this sort of work on LessWrong; I hope it inspires more LessWrongers to build on it. I expect to vote it somewhere between +5 and +7.