You’re right that an AGI being vastly smarter than humans is consistent with both good and bad outcomes for humanity. The video, however, does not address any of the arguments that have been presented for why an AGI would by default have values unaligned with humanity’s, which I’d encourage you to engage with. They’re mentioned in bullet −3 of the list, under the names instrumental convergence and orthogonality thesis; the former is probably what I’d recommend reading about first.
Hi Ben, thanks for this. We are not passive victims of the future, waiting and trembling before an outcome we cannot escape because of rigid features such as the ones you mention. We can help shape, and literally make, the future. I have 1500+ videos, so you will run out of material before I do! What do you think of the idea of machines suggesting better ways to cooperate than humans could ever attain on their own? Do you listen to the news? If you don’t, isn’t that because you’re disappointed with how humans cooperate when left to their own devices? They need better ideas from machines! See:
Kim, you’re not addressing the points in the post. You can’t repeat catchphrases like ‘passive victims of the future’ and expect them to carry weight here. MIRI is a well-funded research institution devoted to positively shaping the future, while you make silly YouTube videos full of platitudes. Your interest in AI seems like recreation.
My take on this: countering Eliezer Yudkowsky