Also suggest exploring what it may mean if we are unable to solve the alignment problem for fully autonomous learning machinery.
There will be a [new AI Safety Camp project](https://docs.google.com/document/d/198HoQA600pttXZA8Awo7IQmYHpyHLT49U-pDHbH3LVI/edit) about formalising a model of AGI uncontainability.