The eigenvalues are a measure of how quickly the MDP reaches a steady state. This works when you know the goal of your network is to reach a particular state in the MDP as fast as possible and stay there.
Edit: I think this also works if your model’s goal is to achieve a particular distribution over states.
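For concreteness, here is a minimal sketch of what I mean, assuming a tabular MDP with a transition tensor `P[s, a, s']` and a fixed stochastic policy `pi` (the names and the random setup are placeholders for illustration, not our actual environments): the spectrum of the policy-induced transition matrix tells you how fast the chain settles into its steady state, with the second-largest eigenvalue magnitude setting the mixing rate.

```python
import numpy as np

# Toy tabular MDP: P[s, a, s'] is the transition tensor, pi[s, a] a fixed
# stochastic policy. Both are random placeholders, just for illustration.
n_states, n_actions = 6, 3
rng = np.random.default_rng(0)
P = rng.dirichlet(np.ones(n_states), size=(n_states, n_actions))
pi = rng.dirichlet(np.ones(n_actions), size=n_states)

# Markov chain induced by the policy: T[s, s'] = sum_a pi[s, a] * P[s, a, s']
T = np.einsum("sa,sat->st", pi, P)

eigvals = np.linalg.eigvals(T)
mags = np.sort(np.abs(eigvals))[::-1]
# A stochastic matrix always has an eigenvalue at 1; the second-largest
# eigenvalue magnitude controls how quickly the chain converges to its
# steady state (smaller |lambda_2| means faster convergence).
print("second-largest |eigenvalue|:", mags[1])
print("sum of |Re(lambda)|:", np.abs(eigvals.real).sum())
```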
Right, I think this roughly corresponds to the “how long it takes a policy to reach a stable loop” idea (the “distance to loop” metric) we used in our experiments.
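(For reference, here is roughly what that metric computes, under the simplifying assumption of deterministic dynamics and a deterministic policy; this is a sketch of the intent, not our exact implementation.)

```python
def distance_to_loop(successor, start):
    """Steps a deterministic rollout takes before first entering a loop.

    `successor[s]` is the state reached from s under the fixed policy.
    Sketch only: assumes deterministic dynamics and a deterministic policy.
    """
    seen = {}
    s, t = start, 0
    while s not in seen:
        seen[s] = t
        s = successor[s]
        t += 1
    return seen[s]  # step at which the repeated state was first visited

# Toy chain: 0 -> 1 -> 2 -> 3 -> 2, so the loop {2, 3} is reached after 2 steps.
print(distance_to_loop([1, 2, 3, 2], start=0))  # -> 2
```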
What did you use your coherence definition for?
It’s a long story, but I wanted to see what the functional landscape of coherence looked like for goal-misgeneralizing RL environments after doing essential dynamics. Results forthcoming.
They are related, but time-to-loop fails when there are many loops a random policy is likely to access. For example, if a “do nothing” action is the default, your agent will immediately enter a loop, but the sum of the absolute values of the real parts of the eigenvalues will be very high (the number of states in the environment).
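Here is a toy version of that contrast, assuming the “do nothing” default makes every state a self-loop, so the induced transition matrix is (close to) the identity:

```python
import numpy as np

# If "do nothing" makes every state a self-loop, a policy that mostly does
# nothing induces a transition matrix close to the identity.
n_states = 10
T = np.eye(n_states)

eigvals = np.linalg.eigvals(T)
print(np.abs(eigvals.real).sum())  # == n_states, the maximal possible value
# ...while "distance to loop" is 0 from every start state: the agent is
# already sitting inside a (trivial) loop at t = 0.
```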