Hey Michael,
Mod here, heads up that I don’t think this is a great comment. (For example, mods would have blocked it if it had been a first comment.)
1) This feels out of context for this post. This post is about making predictable updates, not the basic question of whether one should be worried.
2) Your comment doesn’t seem to respond to a lot of things that have already been said on the topic. So while I think it’s legitimate to question concerns about AI, your questioning feels too shallow. For example, many, many posts have been written on why “Therefore, we know that unless we specifically train them to harm humans, they will highly value human life.” isn’t true.
I’d recommend the AI Alignment Intro Material tag.
I’ve also blocked further replies to your comment, just to prevent further clutter on the thread. DM me if you have questions.