I can only say there was probably someone in every rolling 100-year period who thought the same about the next 100 years.
I think this time is different. The implications are simply so much broader, so much more fundamental.
Also, from a zoomed-out-all-of-history-view, the pace of technological progress and social change has been accelerating. The difference between 500 AD and 1500 AD is not 10x the difference between 1900 and 2000, it’s arguably less than 1x. So even without knowing anything about this time, we should be very open to the idea that this time is more significant than all previous times.
That’s what people said last time too. And the time before that.
That’s so correct. And yet, I’d like to argue, still so wrong.
Why?
Because replacing the brain is simply not the same as replacing just our muscles. Throughout all of the past we’ve merely augmented our brain, with stronger muscle power, calculation, writing, etc., using all sorts of dumb tools. But the brain remained the crucial, all-central point for every action.
We will now have tools that are smarter, faster, and more reliable than our brains. Probably even more empathetic. Maybe more loving.
Statistics cannot be extrapolated across a visible structural break. Yes, it may have been difficult to anticipate, 25 years ago, that computers calculating so fast would not quickly change society all that fundamentally (though still quite fundamentally), so the ‘this time is different’ crowd of 25 years ago was wrong. But in hindsight it is not so surprising: as long as machines were not truly smart, they could not change the world as fundamentally as we now foresee. This time, though, we seem to be about to get the truly smart ones.
The future is a miracle; we cannot truly fathom how exactly it will look. So nothing is absolutely certain, indeed. But merely looking back at the period in which mainly muscles were replaceable, but not brains, is simply not a valid way to extrapolate into a future where something qualitatively entirely new is about to be born.
So you need something more tangible, a more reliable argument, to rebut the hypothesis underlying the article. And the article beautifully and concisely explains why we’re awaiting something rather unimaginably weird. If you have something to show where specifically it seems wrong, it would be great to read that.
Not sure about this, but to the extent it was so, they were often right that a lot of things they liked would soon be gone, and that that was sad. (Not necessarily on net, though maybe even on net for them and people like them.)
There was, but their arguments for it were terrible. If there are flaws in the superintelligence argument, please point them out. It’s hard to gauge when, but with GPT-4 being smarter than a human at most things, it’s tough to imagine we won’t close its gaps (memory, using its intelligence to direct continuous learning) within a couple of decades.
Modern civilization is pretty far out-of-distribution compared to the conditions that formed it over the last 100 years. Just look at the current US-China-Russia conflict, for example. Unlike the original Cold War, this current conflict was not started with the intent to carpet-bomb each other with nukes (carpet bombing with non-nuclear bombs was standard practice during WW2, so when the Cold War started, they assumed they would do the carpet bombing with nukes instead).