One big omission is Bengio's new agenda, but the talk wasn't very precise about it. It sounds like Russell's:
Given a causal, Bayesian, model-based agent that interprets human expressions of reward as evidence about latent human preferences, increasing the compute spent approximating the exact Bayesian decisions increases the probability of safe decisions.
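To make that claim concrete, here's a minimal toy sketch (my own construction, not Bengio's formalism): a latent scalar preference, noisy human reward expressions, and a posterior approximated by importance sampling, where the sample count stands in for compute. A crude approximation can confidently act on a wrong preference estimate; with more compute the approximation converges to the exact posterior and the cautious decision rule behaves as intended.

```python
import numpy as np

# Hypothetical toy model (not Bengio's actual formalism): the human's
# latent preference is a scalar theta; observed reward expressions are
# noisy readings r ~ Normal(theta, sigma). The agent approximates the
# Bayesian posterior over theta with n samples -- a stand-in for
# "compute" -- and acts only when posterior uncertainty is small,
# otherwise abstaining as a safe default.

rng = np.random.default_rng(0)

def approx_posterior(rewards, n_samples, sigma=1.0, prior_sd=5.0):
    """Self-normalized importance sampling with the prior as proposal:
    more samples means a better approximation of the exact posterior."""
    theta = rng.normal(0.0, prior_sd, size=n_samples)   # prior draws
    log_w = -0.5 * ((rewards[:, None] - theta) ** 2).sum(0) / sigma**2
    w = np.exp(log_w - log_w.max())
    return theta, w / w.sum()

def decide(rewards, n_samples, risk_threshold=1.0):
    """Act on the posterior mean only if the posterior std is small;
    otherwise fall back to a cautious no-op."""
    theta, w = approx_posterior(rewards, n_samples)
    mean = np.sum(w * theta)
    std = np.sqrt(np.sum(w * (theta - mean) ** 2))
    return ("act", round(mean, 2)) if std < risk_threshold else ("abstain", None)

true_theta = 2.0
rewards = rng.normal(true_theta, 1.0, size=5)   # human reward expressions
for n in (10, 1_000, 100_000):                  # increasing compute
    print(n, decide(rewards, n))
```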
Another angle I couldn't fit in is his wanting to build microscope AI, to decrease our incentive to build agents.