"Adding execution/memory constraints penalizes all hypotheses."
In reality these constraints do exist, so the question of "what happens if you don't care about efficiency at all?" is not very important. In practice, efficiency is critical: essentially everything that happens in AI is dominated by efficiency considerations.
I think that mesa-optimization will be a problem. It probably won’t look like aliens living in the Game of Life though.
It'll look like an internal optimizer that simply "decides" that the minds of the humans who created it are just another part of the environment to be optimized toward its not-correctly-aligned goal.