Of all the conceivable ways to arrange molecules so that they generate interesting, unexpected novelties and complexity from which to learn new patterns, what are the odds that a low-impacted, flourishing society of happy humans is the very best one a superhuman intellect can devise?
Might it not do better with a human race pressed into servitude, toiling in the creativity salt mines? Or with a genetically engineered species of more compliant (but of course very complex) organisms? Or even by abandoning organics and deploying some carefully designed chaotic mechanism?
Interfering with the non-simulated complexity contaminates the data set. It's analogous to feeding an LLM with LLM-generated content: GPT-5 will already be biased by GPT-4-generated content.
My main intuition is that non-simulated complexity is of higher value for learning than simulated complexity. Humans place more value on learning the patterns of nature than on learning the patterns of simulated computer-game worlds.