So if I read correctly, someone at SI (Eliezer, even) had an original insight into cutting-edge AGI research, one strong enough to be accepted by other cutting-edge AGI researchers, and instead of publishing a proof of it, which would have been trivial, simply gave it away, and some students eventually proved it?
Surely Hutter was aware of this issue back in 2003:
Another problem connected, but possibly not limited to embodied agents, especially if they are rewarded by humans, is the following: Sufficiently intelligent agents may increase their rewards by psychologically manipulating their human "teachers", or by threatening them. This is a general sociological problem which successful AI will cause, which has nothing specifically to do with AIXI. Every intelligence superior to humans is capable of manipulating the latter.
http://www.hutter1.net/ai/aixigentle.pdf