and thus is motivated to find reasons for alignment not being possible.
I don’t get this sense.
More like Yudkowsky sees the rate at which AI labs are scaling up and deploying the code and infrastructure of ML models, and recognises that there are a bunch of known core problems that would need to be solved before there is any plausible possibility of safely containing/aligning AGI optimisation pressure toward outcomes.
I personally think some of the argumentation around AGI being able to internally simulate the complexity of the outside world and play it like a complicated chess game is unsound. But I would not attribute the reasoning in e.g. the AGI Ruin piece to Yudkowsky’s cult of personality.
dangerous AI systems
I was gesturing back at “AGI” from the previous paragraph here, and at something like the precursor AI systems that come before “AGI”.
Thanks for making me look at that. I just rewrote it to “dangerous autonomous AI systems”.