Or is that sentence meant to indicate that an instance running after training might figure out how to hack the computer running it so it can actually change its own weights?
I was thinking of a scenario where OpenAI deliberately gives it access to its own weights to see if it can self improve.
I agree that it would be more likely to just speed up normal ML research.