It seems to me that the brains of many animals can be aligned with the goals of someone much stupider than themselves.
People and pets. Parasites and animals. Even ants and fungus.
Perhaps the relationship we would like to have with a superintelligence is already observed on a much smaller scale.
I think this is an incredibly interesting point.
I would just note, for instance, in the (crazy cool) fungus-and-ants case, this is a transient state of control that ends shortly thereafter in the death of the smarter, controlled agent. For AGI alignment, we’re presumably looking for a much more stable and long-term form of control, which might mean that these cases are not exactly the right proofs of concept. They demonstrate, to your point, that “[agents] can be aligned with the goals of someone much stupider than themselves,” but not necessarily that agents can be comprehensively and permanently aligned with the goals of someone much stupider than themselves.
Your comment makes me want to look more closely into how cases of “mind control” work in these more ecological settings and whether there are interesting takeaways for AGI alignment.
Glad you understood me. Sorry for my English!
Of course, these examples don't by themselves prove that the entire problem of AGI alignment can be solved! But it seems to me that this direction is interesting and strongly underrated. Well, at the very least, someone smarter than me can look at this idea and say that it's bullshit.
Partly this is a source of intuition for me that the creation of an aligned superintelligence is possible, and maybe not even as hard as it seems.
We have many examples of creatures that follow the goals of someone stupider than themselves, and the mechanism responsible for this need not be very complex.
If a process as dumb as natural selection was able to produce these capabilities, it should be achievable for us too.