I don’t think he’s claiming at all that safety is trivial or that humans can expect to remain in charge. Control-capture foom is very much permitted by his model, and he says so directly; much bigger minds are allowed. But his model suggests that reflective algorithmic improvement is not the panacea Yudkowsky expected, nor is beating biology head-to-head easy even for a very superintelligent system.
This does not change any claim I would make about safety; it should barely be an update for anyone who has already updated off of deep learning. But it should thoroughly knock down Yudkowsky’s view of capability scaling in algorithms, which is relevant to predicting which kinds of systems are a threat to other systems, and how.