We chose the issue of lies specifically because it is something a bunch of people can get behind opposing, across the political spectrum. Otherwise, we have to choose political virtues, and it’s always a trade-off. So the two fundamental orientations of this project are utilitarianism and anti-lies.
FYI, we plan to tackle sloppy thinking too, as I did in this piece, but that’s more complex, and it’s important to start with simple messages first. Heck, if we can get people to realize the simple difference between truth and comfort, I’d be happy.
Utilitarianism is nice, of course, but since you’re operating in a political context here, it’s important to go for a politically mindful variety of utilitarianism—one that treats other people’s existing political stances as representative of, or at least as useful evidence for, what they actually care about. Virtues like adaptation, compromise, conciliation—even humor, sometimes—can be seen as ways to operationalize this sort of utilitarianism in practice—and also to promote it to average folks, who generally don’t know what “utilitarianism” is actually about!
This is probably too complex to hash out in comments—lots of semantics issues and some strategic/tactical information that might be best to avoid discussing publicly. If you’re interested in getting involved in the project and want to chat on Skype, email me at gleb [at] intentionalinsights [dot] org
No worries—I trust you to get the strategic/tactical side right, and it’s encouraging to see that you’re aware of these issues as well. I now think this can be a very promising project, since you’re clearly avoiding the obvious pitfalls I was concerned about when I read the initial announcement!
Thank you!