A basic one: before I started reading Eliezer, I thought strong AI wouldn't be dangerous by default, and had argued as much online; i.e., I thought an AI would systematically do bad things only if explicitly programmed to do so. Now I think strong AI would be dangerous by default, and that Friendliness matters.
edit: I think the relevant posts for this were first made on Overcoming Bias, but presumably they still count.