Glad you understood me, and sorry for my English! Of course, the following examples do not by themselves prove that the whole problem of AGI alignment can be solved. But this direction seems interesting and strongly underestimated to me. At the very least, someone smarter than me can look at the idea and tell me it is nonsense.
This is partly a source of intuition for me that building an aligned superintelligence is possible, and maybe not even as hard as it seems. We have many examples of creatures that pursue the goals of someone less intelligent than themselves, and the mechanism responsible for this need not be very complex.
If a process as dumb as natural selection was able to produce such capabilities, it should be achievable for us too.