The Alignment Problem: Easily accessible, well-written, and full of interesting facts about the development of ML. Unfortunately it's somewhat light on actual AI x-risk, but in many cases it's enough to encourage people to learn more.
Edit: Someone strong-downvoted this, and I'd find it pretty useful to know why. To be clear, by 'why' I mean 'why does this rec seem bad', rather than 'why downvote'. If it's the lightness on x-risk stuff I mentioned, that's useful to know; if my description seems inaccurate, that's very useful for me to know, given that I'm in a position to recommend books relatively often. Happy for the reasoning to come via DM if that's easier for any reason.
I read this, and the author spent a lot of time convincing me that AI might be racist and very little time convincing me that AI might kill me and everyone I know without any warning. It's the second possibility that seems to be the one people have trouble with.