Coffee isn’t such a good analogy. That’s got a certain finite set of effects on a well-known neurotransmitter system, and while not all of the secondary or more subtle effects are known we can take a pretty good stab at describing what levels are likely to be harmful given a certain set of parameters. Social change and technology don’t have a well-defined set of effects at all: they’re not definitive terms, they’re descriptive terms encompassing any deltas in our culture or technical capabilities respectively.
Speaking of technology as if it’s a thing with agency is obviously improper; I doubt we’d disagree on that point. But I’d actually go farther than that and say that speaking of technology as a well-defined force (and thus something with a direction that we can talk about precisely, or can or should be retarded or encouraged as a whole) isn’t much better. It may or may not be reasonable to accept a precautionary principle with regard to particular technologies; there’s a decent consensus here that we should adopt one for AGI, for example. But lumping all technology into a single category for that purpose is terribly overgeneral at best, and very likely actively destructive when you consider opportunity costs.
lumping all technology into a single category for that purpose is terribly overgeneral at best, and very likely actively destructive when you consider opportunity costs.
When I talk about technology, what I am really talking about is the rate of technological innovation. Technological innovation is inevitably going to change the dynamics of a society in some way. The slower that change, the more predictable and manageable it is. If that change continues to accelerate, eventually it will reach a point where it moves beyond the limitations of existing tracking technology. At that point, it becomes purely a force. That force could result in positive impacts, but it could also result in negative ones; however, determining or managing whether it is positive or negative is impossible for us, since it moves beyond our capacity to track. Do you disagree with this idea?
If that change continues to accelerate, eventually it will reach a point where it moves beyond the limitations of existing tracking technology. At that point, it becomes purely a force. That force could result in positive impacts, but it could also result in negative ones
This is essentially a restatement of the accelerating change model of a technological singularity. I suspect that most of that model’s weak predictions kicked in several decades ago: aside from some very coarse-grained models along the lines of Moore’s Law, I don’t think we’ve been capable of making accurate predictions about the decade-scale future since at least the 1970s and arguably well before. If we can expect technological change to continue to accelerate (a proposition dependent on the drivers of technological change, and which I consider likely but not certain), we can expect effective planning horizons in contexts dependent on tech in general to shrink proportionally. (The accelerating change model also offers some stronger predictions, but I’m skeptical of most of them for various reasons, mainly having to do with the misleading definitivism I allude to in the grandparent.)
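To make "shrink proportionally" slightly more precise, here's a toy model, nothing more than an illustration with placeholder symbols of my own: suppose planning breaks down once accumulated change exceeds some fixed tolerance $\Delta$, and suppose the rate of change grows exponentially, $r(t) = r_0 e^{t/\tau}$. The effective horizon $h(t)$ then satisfies

$$\int_t^{t+h} r_0 e^{s/\tau}\,ds = \Delta \quad\Longrightarrow\quad h(t) = \tau \ln\!\left(1 + \frac{\Delta}{\tau\, r(t)}\right) \approx \frac{\Delta}{r(t)},$$

so the horizon falls roughly in inverse proportion to the current rate of change. The particular values of $\Delta$, $r_0$, and $\tau$ don't matter; the point is only the scaling.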
Very well; the next obvious question is: should this worry me? To which I’d answer yes, a little, but not as much as the status quo should. With the arguable exception of weapons, the first-order effects of any new technology are generally positive. It’s second-order effects that worry people; in historical perspective, though, the second-order downsides of typical innovations don’t appear to have outweighed their first-order benefits. (They’re often more famous, but that’s just availability bias.) I don’t see any obvious reason why this would change under a regime of accelerating innovation; shrinking planning horizons are arguably worrisome given that they provide an incentive to ignore long-term downsides, but there are ways around this. If I’m right, broad regulation aimed at slowing overall innovation rates is bound to prevent more beneficial changes than harmful ones; it’s also game-theoretically unstable, as faster-innovating regions gain an advantage over slower-innovating ones.
And the status quo? Well, as environmentalists are fond of pointing out, industrial society is inherently unsustainable. Unfortunately, the solutions they tend to propose are unlikely to be workable in the long run for the same game-theoretic reasons I outline above. Transformative technologies usually don’t have that problem.
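To spell out those game-theoretic reasons a bit, here is a toy payoff table for two regions deciding whether to slow innovation (the numbers are purely illustrative, chosen only so that the faster-innovating region gains the advantage):

$$\begin{array}{c|cc} & \text{B slows} & \text{B innovates} \\ \hline \text{A slows} & (3,3) & (1,4) \\ \text{A innovates} & (4,1) & (2,2) \end{array}$$

Whatever B does, A is better off innovating ($4 > 3$, $2 > 1$), and symmetrically for B, so coordinated slowing isn’t an equilibrium even for regions that might prefer it; it would have to be enforced, and there’s no enforcer.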
This is essentially a restatement of the accelerating change model of a technological singularity.
I was not familiar with the theory of the technological singularity, but from reading your link I feel that there is a big difference between it and what I am saying. Namely, it states, “Technological change follows smooth curves, typically exponential. Therefore we can predict with fair precision when new technologies will arrive...” whereas I am saying that such prediction is impossible beyond a certain point. I would agree with you that we have already passed that point (perhaps in the 70s).
Very well; the next obvious question is: should this worry me? To which I’d answer yes, a little, but not as much as the status quo should. With the arguable exception of weapons, the first-order effects of any new technology are generally positive.
This I disagree with. If you continue reading my discussion with TimS you will see that I suggest (well, Jean Baudrillard suggests) a shift in technological production from purely economic and function-based production to symbolic and sign-based production. There are technologies whose first-order effects are generally positive, but I would argue that there are many novel technological innovations that provide no new functional benefit. At best, they work to superimpose symbolic or semiotic value upon existing functional properties; at worst, they create dysfunctional tools that are masked with illusory social benefits. I agree that these second-order effects, as you call them, are slower acting, but that is not an argument to ignore them, especially since, as you say, they have been building up since the 70s.
I agree that the status quo is a problem, but I do not see it as more of a problem than the subtle amassment of second-order technological problems. I think both are serious dangers to our society that need to be addressed as soon as possible. The former is an open wound, the latter a tumor. Treating the wound is necessary, but if one does not deal with the latter as early as possible, it will grow beyond the point of remedy.
Really nice post. I apologize for my analogy. Truthfully, I picked it not for its accuracy but for its ability to make my point. After recently reading Eliezer’s essay about sneaking in connotations, I am afraid it is a bad habit of mine. I completely agree it is a bad analogy.
As to your second point: it is a really interesting question that, honestly, I have never thought about. If you don’t mind, I would like a little more time to think about it. I agree that it is improper to speak of technology as a thing with agency, but I am not sure I agree that speaking of technology as a well-defined force is just as bad.