I appreciate the comment; you keyed me in to a bunch of things I wasn’t aware of (The Guild of the Rose, NYC Megameetup, and more). I definitely agree that setting a good example in one’s own life is a great place to start. And yes, several established power structures do stand to lose if people become less easy to manipulate.
I’m still hopeful that there’s some way to make progress if we get enough good minds churning out ideas on how to enroll people into their own personal development. This makes me wonder, though—which is more difficult, human alignment or AI alignment?
I’m still hopeful that there’s some way to make progress if we get enough good minds churning out ideas on how to enroll people into their own personal development.
Me too! I hope my comment didn’t come across as cynical or thought-stopping. I think this is one of the highest goods people can produce. It just seems like one of those problems where even defining the problem is itself a wicked problem—but falling into analysis paralysis is bad too.
Please do write more on this topic. I’ll try to make a post around the same themes this weekend :)
I look forward to your post. One thing I’ll add at this point is that The Dignity Index group is working on rating politicians’ speech using machine learning, in hopes that this could help shift political dialogue. I’ve done something similar with a somewhat more complicated rating system I developed independently. If you’re interested, check out some ratings of politicians’ tweets here: twitter.com/DishonorP. I don’t believe rating systems by themselves will have a large impact on shifting behaviors, but seeing that some people put out genuinely non-partisan ratings may give others a tiny bit more hope in humanity.