The machine learning world is doing a lot of damage to society by confusing "is" with "ought," which, within AIXI, is equivalent to confusing its two unified components: Algorithmic Information Theory (compression) and Sequential Decision Theory (conditional decompression). This confusion is a primary reason the machine learning world has failed to fund The Hutter Prize at anything approaching the level required to attract talent away from grabbing the low-hanging fruit on the matrix-multiply branches of the hardware lottery, instead of watering the roots of the AGI tree. The failure lies with the machine learning world, not with the Hutter Prize criteria. There is simply no greater potential risk-adjusted return on investment available to the machine learning world than increasing the size of the Hutter Prize purse. And to the extent that clearing up confusion about AGI would benefit politics, there is a good argument to be made that the same can be said for the world in general.
This is because 1) the judging criteria are completely objective (and probably should be automated), and 2) the judging criteria are closely tied to the ideal "loss function" for epistemology: the science of human knowledge.
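To illustrate how objective and automatable the judging is, here is a minimal sketch of a compression-prize style scoring rule. The specific numbers (record size, purse, 1% threshold) are illustrative assumptions, not the official Hutter Prize rules: the score is the total size of the self-extracting archive (compressed data plus decompressor), and a new entry wins a share of the purse proportional to its relative improvement over the standing record.

```python
def total_size(compressed_bytes: int, decompressor_bytes: int) -> int:
    """Objective score: size of compressed data plus the decompressor itself.
    Smaller is better; counting the decompressor prevents hiding information in it."""
    return compressed_bytes + decompressor_bytes


def payout(new_size: int, record_size: int, purse: int,
           min_improvement: float = 0.01) -> int:
    """Award a share of the purse proportional to the relative improvement
    over the previous record, if it clears a minimum threshold.
    All parameter values here are hypothetical, for illustration only."""
    improvement = 1 - new_size / record_size
    if improvement < min_improvement:
        return 0
    return int(purse * improvement)


# Example with made-up numbers: a 2% smaller archive earns 2% of the purse.
record = 115_000_000                       # previous record, in bytes (hypothetical)
entry = total_size(112_000_000, 700_000)   # 112,700,000 bytes total
print(payout(entry, record, purse=500_000))  # prints 10000
```

Note that nothing in the scoring requires human judgment: given the archive, the score is a single integer, which is why the criteria could in principle be checked entirely by machine.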
The proper funding level would be at least 1% of total technology-development investment in machine learning.