Reply to Progress, Humanism, Agency
In general I think I’m on the same page as Jason here. Instead of saying a lot of words about how I think this is important and useful, I’ll instead just poke at the parts I think could possibly become stronger.
Progress is not a single thing
There are a lot of people talking past each other with regard to progress. There are plenty of blanket claims that progress is happening (things are getting broadly better for people on average), as well as claims to the opposite.
I think the ‘yeah huh, nu uh’ back-and-forth between “things are getting better” and “things are getting worse” has become a quagmire at this point.
So it’s maybe worth admitting that there are metrics people find important that are getting worse over time. Most of these are benchmarked to relativist reference points (e.g. income inequality can get worse even if everyone has access to strictly more value) or are normative (if assaults are decreasing more slowly than your circle of consideration for what counts as assault is growing, then your perception could be that assault is increasing).
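The income-inequality case can be made concrete with a toy calculation (the numbers are mine, purely illustrative): a relative metric like the Gini coefficient can rise even when every single person's income strictly increases.

```python
def gini(incomes):
    """Gini coefficient via mean absolute difference between all pairs."""
    n = len(incomes)
    mean = sum(incomes) / n
    diff_sum = sum(abs(x - y) for x in incomes for y in incomes)
    return diff_sum / (2 * n * n * mean)

before = [10, 20, 30]  # hypothetical incomes
after = [15, 30, 60]   # everyone is strictly better off...

assert all(a > b for a, b in zip(after, before))
# ...yet measured inequality has gone up:
print(round(gini(before), 3), round(gini(after), 3))  # -> 0.222 0.286
```

So "everyone got richer" and "inequality got worse" are both true here, which is exactly how the two sides of the progress debate can talk past each other while each citing accurate statistics.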
I’m not sure what the right approach is here, but it seems that over the last while, reinforcing “things are getting better” with lots of graphs and stats hasn’t actually changed public opinion all that much.
(I remain surprised by the reactions to “The Better Angels of Our Nature” when it came out — I still regard it as better than “Enlightenment Now”, despite the latter doing more work on the philosophical concepts of progress; what matters to me is the actual change.)
Humanism as the source of value / Utilitarianism as the standard of value
I think your definition of humanism is laudable but vague. It weakly answers the question of “whence value?” but stops there.
I think your alternative source of value is better described as “naturalism” instead of “romanticism” — if only because the latter seems to suggest philosophical romanticism (https://en.wikipedia.org/wiki/Romanticism) instead of the conservatism you described. (This is mostly a minor nit about naming things, not actually a criticism of the point)
So I think “Humanism as the source of value” makes sense, but it doesn’t give us a metric or measurement or point of reference to compare to in terms of value.
I think that standard of value (or the system for applying it) is utilitarianism.
I think consequentialism and utilitarianism are not without their issues (I hope to write up more of them soon) — but I think they make a strong standard, in particular by forcing a consistent set of preferences between alternatives of value.
What is the measure of Agency?
I ask because I don’t know it — and also because it seems critical to reconciling some liberalism (the John Stuart Mill kind = https://en.wikipedia.org/wiki/On_Liberty) with utilitarianism/consequentialism.
It seems reasonable to weigh differently the expected harm to someone who directly put themselves in harm’s way versus someone who was put into harm’s way by the state — though naive metrics for utility (e.g. expected QALYs) would score the two the same.
The VNM utilitarians would claim that there is some term in the utility function for agency, but (so far as I know) have not produced actual numbers and metrics for how agency trades off with e.g. mortality.
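To make the gap concrete, here is a hypothetical sketch (the function names and the weight are mine, not an established metric): two people face the same expected QALY outcome, but one chose the risk and one had it imposed. A naive QALY metric scores them identically; a utility function with an explicit agency term does not — and the agency weight is exactly the free parameter for which, as far as I know, no one has produced actual numbers.

```python
def naive_utility(expected_qalys):
    # Naive metric: only the expected QALYs count.
    return expected_qalys

def utility_with_agency(expected_qalys, chose_risk, agency_weight=0.5):
    # agency_weight is an assumed free parameter -- the number that
    # (so far as I know) has never actually been pinned down.
    penalty = 0.0 if chose_risk else agency_weight
    return expected_qalys - penalty

climber = dict(expected_qalys=9.0, chose_risk=True)     # put themselves in harm's way
conscript = dict(expected_qalys=9.0, chose_risk=False)  # put in harm's way by the state

# The naive metric cannot tell the two cases apart:
assert naive_utility(climber["expected_qalys"]) == naive_utility(conscript["expected_qalys"])
# A utility function with an agency term can:
assert utility_with_agency(**climber) > utility_with_agency(**conscript)
```

The sketch runs, but note that it only relocates the problem: everything interesting is hidden inside the assumed `agency_weight`, which is the trade-off against e.g. mortality that remains unquantified.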
Admittedly this is mostly a critique of utilitarianism and not of your point on agency.
I think the point about agency in the face of the future is essential, and the people that will change the future will probably be almost exclusively people who think they can change the future.
What should be in the core ideas for progress?
I am biased here, but I think a philosophy of progress necessarily must include a philosophy of risk.
Technological progress creates harms and downsides (or risks of downsides) in addition to benefits and upsides.
I think a philosophy of progress should have reified concepts for measuring these against each other. I think it should also have reified concepts for measuring the meta-effects of progress on these other metrics for progress.
Secret first point of progress
I would be a little remiss if I didn’t include what was, to me, the most surprising part of learning about progress so far: almost all human progress (in the sense of the moral imperative you gave at the end) is scientific and technological progress.
To the extent that this is not the case, I haven’t found strong evidence of it yet.
To the extent that it is the case, I think we should more clearly specify that the moral imperative is for scientific and technological progress.
(P.S. - I have so far found this book middling in quality but full of interesting concepts for merging a philosophy of progress with things like utilitarianism and existential risk: https://www.routledge.com/Risk-Philosophical-Perspectives/Lewens/p/book/9780415422840)
Thanks for the detailed thoughts Alex! An incomplete reply:
I agree that “human well-being as the standard of value” leaves a lot open. That’s deliberate, because I think that not everyone in this movement agrees on how exactly we should interpret and measure human well-being. Utilitarianism is one approach, but not the only one. It is an important topic for us to work out.
Agree with you about philosophy of risk / philosophy of safety. These are issues I am thinking about. For one preliminary, narrow case study see “How factories were made safe.”
I disagree that almost all progress is scientific/technological, if by that you mean that no significant moral/social progress has happened. The transition from monarchy to republics, the virtual end of slavery, and great progress in equal rights for women are three major points of moral/social progress that have occurred in the last ~250 years.