I’m told that historians in every era view their own recent past as particularly meaningful and important, and that the present is no different. Perhaps this is not the case?
Consider the massive objective changes on metrics of economics, technology, peace, and so on. You just can’t assemble groups of metrics at a similar level of generality, plausible to naive audiences, that would make the 13th century A.D. or the 8th century B.C. the most important and dramatically changing in history. And the ancient historians didn’t claim that their centuries were the most important in that way (Golden Ages, etc.).
Would super-antidepressant drugs, or gene therapies that cheaply gave people very positive affect for extended periods without serious side effects or effective legal restrictions, count as fundamental where all the post-Rome changes don’t?
Certainly.
OK. This doesn’t seem to cut nature at the joints. Why on Earth would the question of whether we’ve invented a really good happy-drug take such primacy over energy, population, travel, communication, computation, cumulative literature, mathematics, material strengths, height, literacy, life expectancy, etc? Collectively those just seem to pack a lot more relevant info, particularly for the purpose of predicting:
No nuclear war (although if this comes via the end of do-or-die wars between great powers, that is itself a substantial change)
The path of R&D in AI
That there will not be sustained use of existing genomics and behavioral genetics knowledge for enhancement, and that enabling technologies will shortly begin to stagnate after strong progress
None of the low probability mega-scale natural catastrophes will happen this century (which we already knew with high probability, but not for this reason)
OK. This doesn’t seem to cut nature at the joints. Why on Earth would the question of whether we’ve invented a really good happy-drug take such primacy over energy, population, travel, communication, computation, cumulative literature, mathematics, material strengths, height, literacy, life expectancy, etc?
When it comes to the question of whether or not human experience has meaningfully changed in thousands of years? Why on Earth wouldn’t it?
This honestly seems to me like one of those situations where we’re sitting here staring at each other and just not understanding one another’s perspective. I’m not sure whether this is a matter of inferential distance, reference class tennis, or what, but I feel like something is definitely missing from this discussion.
I’m saying that the concept you’re using for ‘meaningful change’ is a light shade of grue, looking unusual and gerrymandered to exclude huge past changes while including things like good mood-elevating drugs that are quite natural extrapolations of our expanding biological knowledge.
When we do model combination over the many alternative ways we can slice up the world for outside-view extrapolation, with penalties for ad hoc complexity, I think the specific view that ignores all past gains in wealth, life expectancy, energy use, population, and technology but responds hugely to mood-elevating drugs carries relatively little weight in prediction for the topics you mentioned.
So I disagree with this:
Reference class forecasting seems to indicate that the business-as-usual future is quite likely.
I’m saying that the concept you’re using for ‘meaningful change’ is a light shade of grue, looking unusual and gerrymandered to exclude huge past changes while including things like good mood-elevating drugs that are quite natural extrapolations of our expanding biological knowledge.
I understand what you are saying, but I don’t understand why you consider those things interesting or relevant. To me, a concept of the human experience that includes computation or material strengths seems unusual and gerrymandered.
At this point it really does seem like we’re just playing reference class tennis, though.
Let’s leave it at that then.