If that change continues to accelerate, eventually it will reach a point where it moves beyond the limitations of existing tracking technology. At that point, it becomes purely a force. That force could result in positive impacts, but it could also result in negative ones.
This is essentially a restatement of the accelerating change model of a technological singularity. I suspect that most of that model’s weak predictions kicked in several decades ago: aside from some very coarse-grained models along the lines of Moore’s Law, I don’t think we’ve been capable of making accurate predictions about the decade-scale future since at least the 1970s and arguably well before. If we can expect technological change to continue to accelerate (a proposition dependent on the drivers of technological change, and which I consider likely but not certain), we can expect effective planning horizons in contexts dependent on tech in general to shrink proportionally. (The accelerating change model also offers some stronger predictions, but I’m skeptical of most of them for various reasons, mainly having to do with the misleading definitivism I allude to in the grandparent.)
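The "planning horizons shrink proportionally" claim can be made concrete with a toy model. This is my own illustrative assumption, not anything the comment commits to: suppose each technological "doubling" takes a fixed fraction of the time the previous one did, so the horizon over which a fixed amount of change plays out shrinks geometrically.

```python
# Toy model (an assumption for illustration, not the comment's claim):
# if each doubling of capability takes a fraction r < 1 of the time the
# previous doubling took, the effective planning horizon -- the time
# until the next doubling -- shrinks geometrically.

def doubling_times(first_period: float, r: float, n: int) -> list[float]:
    """Lengths of the first n doubling periods, each r times the last."""
    return [first_period * r**k for k in range(n)]

# E.g. a first doubling of 10 years with each subsequent one 80% as long:
periods = doubling_times(first_period=10.0, r=0.8, n=5)
print([round(p, 2) for p in periods])  # [10.0, 8.0, 6.4, 5.12, 4.1]
```

The point of the sketch is only that under compounding acceleration the window in which decade-scale prediction is feasible contracts on its own schedule, regardless of where one dates the start of that contraction.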
Very well; the next obvious question is: should this worry me? To which I’d answer yes, a little, but not as much as the status quo should. With the arguable exception of weapons, the first-order effects of any new technology are generally positive. It’s the second-order effects that worry people; in historical perspective, though, the second-order downsides of typical innovations don’t appear to have outweighed their first-order benefits. (They’re often more famous, but that’s just availability bias.) I don’t see any obvious reason why this would change under a regime of accelerating innovation; shrinking planning horizons are arguably worrisome given that they provide an incentive to ignore long-term downsides, but there are ways around this. If I’m right, broad regulation aimed at slowing overall innovation rates is bound to prevent more beneficial changes than harmful ones; it’s also game-theoretically unstable, as faster-innovating regions gain an advantage over slower-innovating ones.
And the status quo? Well, as environmentalists are fond of pointing out, industrial society is inherently unsustainable. Unfortunately, the solutions they tend to propose are unlikely to be workable in the long run for the same game-theoretic reasons I outline above. Transformative technologies usually don’t have that problem.
This is essentially a restatement of the accelerating change model of a technological singularity.
I was not familiar with the theory of technological singularity, but from reading your link I feel that there is a big difference between it and what I am saying. Namely, it states, “Technological change follows smooth curves, typically exponential. Therefore we can predict with fair precision when new technologies will arrive...” whereas I am saying that such prediction is impossible beyond a certain point. I would agree with you that we have already passed that point (perhaps in the 70s).
Very well; the next obvious question is should this worry me? To which I’d answer yes, a little, but not as much as the status quo should. With the arguable exception of weapons, the first-order effects of any new technology are generally positive.
This I disagree with. If you continue reading my discussion with TimS, you will see that I suggest (well, Jean Baudrillard suggests) a shift in technological production from purely economic and function-based production to symbolic and sign-based production. There are technologies whose first-order effects are generally positive, but I would argue that there are many novel technological innovations that provide no new functional benefit. At best, they work to superimpose symbolic or semiotic value upon existing functional properties; at worst, they create dysfunctional tools that are masked by illusory social benefits. I agree that these second-order effects, as you call them, are slower acting, but that is not an argument to ignore them, especially since, as you say, they have been building up since the 70s.
I agree that the status quo is a problem, but I do not see it as more of a problem than the subtle accumulation of second-order technological problems. I think both are serious dangers to our society that need to be addressed as soon as possible. The former is an open wound, the latter a tumor. Treating the wound is necessary, but if one does not deal with the latter as early as possible, it will grow beyond the point of remedy.