My suggestion would be to figure out why you think high IC is bad, and see if there’s some nice way to characterize the value systems that match that intuition.
That’s a good idea. My “final reason” for thinking that high IC is bad may be that high-IC systems are a pain in the ass when you’re building intelligent agents. They have a lot of interdependencies among their behaviors, get stuck waffling between different behaviors, and are hard to debug. But we (as designers and as intelligent agents) have mechanisms for dealing with these problems, e.g., producing hysteresis by using nonlinear functions to sum the activation from different goals.
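To make the hysteresis point concrete, here is a minimal sketch in Python. The specific mechanism (a persistence bonus for the currently active goal, fed through a sigmoid) is my illustration of the general idea, not the particular scheme meant above:

```python
import math

# Sketch: two competing goals, where the currently active goal gets a
# small persistence bonus before a sigmoid squashing. The result is
# hysteresis: the agent only switches behaviors when the competing
# drive is clearly stronger, so it doesn't waffle when the two drives
# are nearly tied.

def sigmoid(x, steepness=4.0):
    return 1.0 / (1.0 + math.exp(-steepness * x))

def select_goal(drives, active, persistence_bonus=0.15):
    """Return the goal with the highest squashed activation.

    drives: dict mapping goal name -> raw drive level in [0, 1].
    The active goal's bonus shifts its activation curve, so the
    switching threshold depends on which goal is currently active.
    """
    best_goal, best_act = None, -1.0
    for goal, drive in drives.items():
        bonus = persistence_bonus if goal == active else 0.0
        act = sigmoid(drive + bonus - 0.5)   # center the sigmoid at 0.5
        if act > best_act:
            best_goal, best_act = goal, act
    return best_goal

# Sweep the "eat" drive up past the "explore" drive and back down: the
# agent switches to "eat" at a higher drive level (0.7) than the level
# at which it switches back (0.3) -- a hysteresis loop.
active = "explore"
for eat in [0.3, 0.5, 0.6, 0.7, 0.6, 0.5, 0.4, 0.3]:
    active = select_goal({"eat": eat, "explore": 0.5}, active)
    print(f"eat drive {eat:.1f} -> active goal: {active}")
```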
My other final reason is that I consciously try to energy-minimize my own values, and I think other thoughtful people who aren’t nihilists do too. Probably nihilists do too, if only for their own convenience.
My other other final reason is that energy-minimization is what dynamical networks do. It’s how they develop, as in, e.g., spin-glasses, economies, or ecologies.
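For the spin-glass case, the energy-minimization dynamic can be shown with a toy relaxation loop. This is a standard textbook setup offered as illustration (random symmetric couplings, greedy single-spin flips), not anything specific from the conversation:

```python
import random

# Toy spin-glass: N spins in {-1, +1} with random symmetric couplings J.
# The energy is E = -sum_{i<j} J[i][j] * s_i * s_j. Greedy single-spin
# flips monotonically lower the energy until the system settles into a
# local minimum -- "development" as relaxation.

random.seed(0)
N = 12
J = [[0.0] * N for _ in range(N)]
for i in range(N):
    for j in range(i + 1, N):
        J[i][j] = J[j][i] = random.gauss(0, 1)

def energy(s):
    return -sum(J[i][j] * s[i] * s[j]
                for i in range(N) for j in range(i + 1, N))

s = [random.choice([-1, 1]) for _ in range(N)]
print(f"initial energy: {energy(s):.2f}")

improved = True
while improved:
    improved = False
    for i in range(N):
        # Flipping spin i changes the energy by 2 * s_i * local_field_i,
        # so flip whenever that quantity is negative.
        local_field = sum(J[i][j] * s[j] for j in range(N) if j != i)
        if s[i] * local_field < 0:
            s[i] = -s[i]
            improved = True

print(f"settled energy: {energy(s):.2f} (a local minimum)")
```

Since every accepted flip strictly lowers the energy and the state space is finite, the loop always terminates, though generally in a local rather than global minimum. That is the sense in which spin-glasses (and, by analogy, economies or ecologies) “develop” by energy minimization.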