He needs to taboo "adaptive", read and understand Bostrom's work on AI behaviour, comprehend the Superpowerful-Optimizer view, and then explain exactly why an AI cannot have a fixed goal architecture.
If AIs can't have a fixed goal architecture, Wang needs to show that AIs with unpredictable goals are somehow safe, or start speaking out against AI.
Damn right! These were my first thoughts as well. I know next to nothing about AI, but seriously, this is ordinary logic.