I think our disagreement is primarily a methodological one. You are aiming at the low-hanging fruit, but I feel that if we don’t dig up the roots of the problem, similar fruit will eventually grow again. I want a new tree.
I want a new tree, too. I think uprooting the current tree doesn’t guarantee that a better tree will grow in its place. In fact, I worry that the backlash from uprooting the tree will help plant the seeds for a worse tree.
Yeah, that is a valid worry. I concede my point. The potential impact of uprooting the tree, combined with the multitude of potential unknowns, makes strategic clipping of the fruit seem the better option. I never imagined that I would find an argument for moderate change in such a progress-oriented community.
Yeah, this community’s meaning of progress doesn’t align well with a politically active feminist’s meaning of progress. For the most part, members of this community hope for scientific advances that make our questions moot. That’s not a totally unreasonable hope, in the abstract: many advances in female empowerment followed from the invention of the Pill, a reliable method of separating sex from procreation. Once that separation occurred, it became much more obvious how disconnected from physical fact many gender constructs really were.
That said, I think that the LessWrong community as a whole underestimates the impact of constructed social meaning. Part of that is unexamined traditionalism, and part of it is the community’s well-settled aversion to discussing practical social engineering.
I am not completely against change. I just think progress ceases to be progress if it accelerates to the point where humans are unable to acclimate to it.
I don’t think I understand.

A new technology is useful if it serves a specific purpose for human manipulation of territory. The more unknowns a technology carries, the more dangerous it is to human survival, and the less it can be seen as progressive. Furthermore, the introduction of a new technology reshapes the social topography of a territory. If the erosion/alteration of social topography happens too fast, it becomes impossible to navigate based on the experiences of others; just as, if all the currents and depths of a channel suddenly changed, the built-up knowledge of generations of fishers would become irrelevant.
Whether technological/scientific advancement is progress or just impact depends on two factors:
1.) The degree of unknowns involved with the technology
2.) The extent to which social topography is eroded/altered.
If we look at cell phones and other information technologies, they have completely reconstructed the social topography of the world, and they continue to develop at an astonishing rate. As to the degree of unknowns: cell phones have already been completely integrated into everyday life, despite their relatively short history. What happens when a person lives 70 years with a cell phone in their pocket, or an iPad? We have no idea, because they have not been around long enough for there to be any cases. There is still a huge degree of unknowns with these new technologies, yet we are already completely dependent on them.
I am not saying that this is not progress; it is not possible to say at this point. But I will say that we are walking a fine line between true progress and unrestrained impact.
Here is a genuine disagreement between us.

I don’t think increasing our ability to control the world is an inherently good or bad thing (somewhat like how concepts like equality don’t have a particular political affiliation). The Spaniards did terrible things to the natives of the New World, but the proximate cause of their behavior was their extreme aversion to Otherness (like Orientalism, but worse). Spain’s technological superiority made their oppressive behavior possible, but it is insufficient to explain what happened.
To your specific point about cell phones, the data is pretty clear they are fairly safe. We have a good understanding of what radiation of various kinds can and can’t do. And social topography has nothing to do with this risk.
I don’t think he means the biological effects of radiation, but the psychological/sociological effects of always being available for conversation. (Being unable to talk to me for one freakin’ day would bother the living crap out of my mother, for example. I’m not sure that’s a healthy thing.)
I agree that our ability to control the world is not inherently good or bad. What I am saying is that the rate at which we use this ability can be beneficial or harmful. In my mind it is analogous to a person running through a forest to win a race. There is no path, but they have a pretty good idea of the general direction they want to go. The faster they run, the quicker they close the distance between themselves and their objective; but at the same time, if they run too fast they risk stumbling into a pitfall, shooting off a sudden drop, tripping, or building up too much momentum on a downhill run. All these things are potentially dangerous.
Cell phones causing cancer was the wrong point to focus on. But it cannot be denied that cell phones in general have changed the structure of society at an alarming pace. Again, I am not saying this is inherently good or bad. It could be that our barreling through the forest brings us to our destination in the least possible time. I guess I am just a somewhat pessimistic person. I think rather than getting there faster, it would be better to minimize any chance of tragedy.
I agree that our ability to control the world is not inherently good or bad. What I am saying is that the rate at which we use this ability can be beneficial or harmful.
I think these two sentences are in quite a bit of tension. The speed at which we get better at controlling the world can best be judged by whether we should be trying to control the world at all.
But it cannot be denied that cell phones in general have changed the structure of society at an alarming pace.
I deny. Cell phones have changed the structure of society at a very high pace. Alarming? That’s a value judgment that needs a fair amount of justification. Even assuming that it isn’t possible to live “how things used to be” because of widespread expectations of cell phone usage (and I’m not sure this is true), why is this worse?
I don’t think there is a tension. It is kind of like how I do not think coffee is inherently good or bad; it is the rate of use that defines it as good or bad to me. Drinking 10 cups a day (a very high rate of use) I find to be bad for you, whereas a cup of coffee a day (a slower rate of use) is good for you. I think the same principle is true for technology. Developing too fast, without regard for the societal impact or potential dangers of what you are creating, is negative in my opinion.
The speed at which we get better at controlling the world can best be judged by whether we should be trying to control the world at all.
I don’t really understand this sentence; could you explain it more? What I get from reading it is: “if it does not seem feasible, it should be abandoned?”
Mobile phones have changed social interaction, how people think (through texting), and the structure of business and economics; they have become a status symbol. Do I need to keep going?
Coffee isn’t such a good analogy. That’s got a certain finite set of effects on a well-known neurotransmitter system, and while not all of the secondary or more subtle effects are known we can take a pretty good stab at describing what levels are likely to be harmful given a certain set of parameters. Social change and technology don’t have a well-defined set of effects at all: they’re not definitive terms, they’re descriptive terms encompassing any deltas in our culture or technical capabilities respectively.
Speaking of technology as if it’s a thing with agency is obviously improper; I doubt we’d disagree on that point. But I’d actually go farther than that and say that speaking of technology as a well-defined force (and thus something with a direction that we can talk about precisely, or can or should be retarded or encouraged as a whole) isn’t much better. It may or may not be reasonable to accept a precautionary principle with regard to particular technologies; there’s a decent consensus here that we should adopt one for AGI, for example. But lumping all technology into a single category for that purpose is terribly overgeneral at best, and very likely actively destructive when you consider opportunity costs.
lumping all technology into a single category for that purpose is terribly overgeneral at best, and very likely actively destructive when you consider opportunity costs.
When I talk about technology, what I am really talking about is a rate of technological innovation. Technological innovation is inevitably going to change the dynamics of a society in some way. The slower that change, the more predictable and manageable it is. If that change continues to accelerate, eventually it will reach a point where it moves beyond the limitations of existing tracking technology. At that point, it becomes purely a force. That force could result in positive impacts, but it could also result in negative ones. However, determining or managing whether it is positive or negative is impossible for us, since it moves beyond our capacity to track. Do you disagree with this idea?
If that change continues to accelerate, eventually it will reach a point where it moves beyond the limitations of existing tracking technology. At that point, it becomes purely a force. That force could result in positive impacts, but it could also result in negative ones
This is essentially a restatement of the accelerating change model of a technological singularity. I suspect that most of that model’s weak predictions kicked in several decades ago: aside from some very coarse-grained models along the lines of Moore’s Law, I don’t think we’ve been capable of making accurate predictions about the decade-scale future since at least the 1970s and arguably well before. If we can expect technological change to continue to accelerate (a proposition dependent on the drivers of technological change, and which I consider likely but not certain), we can expect effective planning horizons in contexts dependent on tech in general to shrink proportionally. (The accelerating change model also offers some stronger predictions, but I’m skeptical of most of them for various reasons, mainly having to do with the misleading definitivism I allude to in the grandparent.)
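To make “shrink proportionally” concrete, here’s a toy model; every constant in it is invented for illustration. If the rate of change grows exponentially and you can only plan across a fixed budget of cumulative change, the horizon you can plan over decays toward zero:

```python
# Toy model of shrinking planning horizons. If the rate of technological
# change grows exponentially, r(t) = r0 * e^(k*t), then the horizon h over
# which cumulative change stays under a fixed "predictability budget"
# shrinks as t grows. All constants are invented.
import math

r0 = 1.0        # rate of change at t = 0 (arbitrary units per year)
k = 0.05        # exponential growth rate of the rate of change
BUDGET = 20.0   # cumulative change we can still plan across

def horizon(t):
    # Solve: integral of r0*e^(k*s) ds from t to t+h equals BUDGET, for h.
    return math.log(1 + BUDGET * k * math.exp(-k * t) / r0) / k

for year in (0, 25, 50, 75):
    print(f"t={year:2d}: planning horizon ~ {horizon(year):5.2f} years")
```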
Very well; the next obvious question is: should this worry me? To which I’d answer yes, a little, but not as much as the status quo should. With the arguable exception of weapons, the first-order effects of any new technology are generally positive. It’s second-order effects that worry people; in historical perspective, though, the second-order downsides of typical innovations don’t appear to have outweighed their first-order benefits. (They’re often more famous, but that’s just availability bias.) I don’t see any obvious reason why this would change under a regime of accelerating innovation; shrinking planning horizons are arguably worrisome given that they provide incentive to ignore long-term downsides, but there are ways around this. If I’m right, broad regulation aimed at slowing overall innovation rates is bound to prevent more beneficial changes than harmful ones; it’s also game-theoretically unstable, as faster-innovating regions gain an advantage over slower-innovating ones.
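The instability claim is just the standard coordination problem. A toy payoff matrix, with invented numbers, shows why a slow-innovation pact is hard to keep:

```python
# Toy two-region innovation game with invented payoffs. Each region chooses
# to Regulate (slow innovation) or Race. Mutual regulation has the best
# combined payoff, but racing dominates individually, so the pact is unstable.

PAYOFFS = {  # (my_choice, rival_choice): my_payoff
    ("regulate", "regulate"): 3,
    ("regulate", "race"):     0,
    ("race",     "regulate"): 4,
    ("race",     "race"):     1,
}

for mine in ("regulate", "race"):
    vs_reg = PAYOFFS[(mine, "regulate")]
    vs_race = PAYOFFS[(mine, "race")]
    print(f"{mine:>8}: {vs_reg} if rival regulates, {vs_race} if rival races")
# "race" yields 4 > 3 and 1 > 0: a dominant strategy for both regions,
# even though (regulate, regulate) beats (race, race) for everyone.
```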
And the status quo? Well, as environmentalists are fond of pointing out, industrial society is inherently unsustainable. Unfortunately, the solutions they tend to propose are unlikely to be workable in the long run for the same game-theoretic reasons I outline above. Transformative technologies usually don’t have that problem.
This is essentially a restatement of the accelerating change model of a technological singularity.
I was not familiar with the theory of a technological singularity, but from reading your link I feel that there is a big difference between it and what I am saying. Namely, it states that “Technological change follows smooth curves, typically exponential. Therefore we can predict with fair precision when new technologies will arrive...”, whereas I am saying that such prediction is impossible beyond a certain point. I would agree with you that we have already passed that point (perhaps in the 70s).
Very well; the next obvious question is should this worry me? To which I’d answer yes, a little, but not as much as the status quo should. With the arguable exception of weapons, the first-order effects of any new technology are generally positive.
This I disagree with. If you continue reading my discussion with TimS, you will see that I suggest (well, Jean Baudrillard suggests) a shift in technological production from purely economic and function-based production to symbolic and sign-based production. There are technologies whose first-order effects are generally positive, but I would argue that many novel technological innovations provide no new functional benefit. At best, they superimpose symbolic or semiotic value upon existing functional properties; at worst, they create dysfunctional tools that are masked with illusory social benefits. I agree that these second-order effects, as you call them, are slower acting, but that is not an argument to ignore them, especially since, as you say, they have been building up since the 70s.
I agree that the status quo is a problem, but I do not see it as more of a problem than the subtle amassment of second-order technological problems. I think both are serious dangers to our society that need to be addressed as soon as possible. The former is an open wound, the latter is a tumor. Treating the wound is necessary, but if one does not deal with the latter as early as possible, it will grow beyond the point of remedy.
Really nice post. I apologize for my analogy. Truthfully, I picked it not for its accuracy but for its ability to make my point. After recently reading Eliezer’s essay about sneaking in connotations, I am afraid it is a bad habit of mine. I completely agree it is a bad analogy.
As to your second point: it is a really interesting question that, honestly, I have never thought about. If you don’t mind, I would like a little more time to think about it. I agree that it is improper to speak of technology as a thing with agency, but I am not sure I agree that speaking of technology as a well-defined force is just as bad.
My point is that the factors that are relevant to deciding how fast to research new technology are the same factors that are relevant in deciding whether to use technology at all.
Mobile phones have changed social interaction, how people think (through texting), and the structure of business and economics; they have become a status symbol. Do I need to keep going?
The word I was disputing in your prior post was alarming. Cell phones have caused and are causing massive social change.
My point is that the factors that are relevant to deciding how fast to research new technology are the same factors that are relevant in deciding whether to use technology at all.
What do you see as the primary factors determining how fast to research new technology?
Ideally, technology would be driven by necessity or efficiency, but that is an ideal. In my opinion, the driving factor for new technologies is profit. For example, my uncle installs home entertainment systems for the rich. He tells me that he gets sent dozens of new types of wire, new routers, and new systems for free from engineers hoping to make it big. The development of new audio/video media, drugs, TVs: honestly, I feel that in most fields there is a constant push for innovation for the sake of entrepreneurship alone, and I don’t think that push has much to do with the actual use of the technology.
P.S. When I say technology, I am using it as an extremely broad term for any tool used to manipulate the physical world.
I’m not saying you are wrong (although I don’t agree with the normative implications), but what is the difference between efficiency and profit?

Efficiency has to do with the use of the tool being created: an efficient ax is sharp and will not break easily. Profit has to do with the producer maximizing their intake and minimizing their costs.
A more efficient tool maximizes intake (by working faster) and minimizes cost (by being replaced less frequently). I respectfully suggest that efficiency and profitability point to the same thing in concept-space.
I am willing to accept the idea that the efficiency of a tool can be categorized as a type of profit. Still, there needs to be a distinction between maximizing the tool’s capacity for profit and maximizing the profit from producing the tool.
Profit-1: maximizing intake (by working faster and more precisely) and minimizing cost (by being replaced less frequently)

Profit-2: maximizing intake (by selling the most product) and minimizing cost (by being made more cheaply)
Profit-1 and Profit-2 are not always mutually beneficial. From the producer’s perspective, the greater the tool’s capacity for profit, the quicker the market is depleted. If I make a refrigerator with a shelf life of a century, I am probably only going to sell one refrigerator per family (possibly two) per century. I would either have to continuously expand my markets (which is both risky and costly) or give my product a short shelf life; in other words, make it need more frequent replacement.
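To put made-up numbers on that trade-off, here is a toy sketch; the prices and lifespans are invented purely for illustration:

```python
# Toy planned-obsolescence model: per-family revenue over a century for a
# durable versus a short-lived refrigerator. All numbers are invented.

def revenue_per_family(price, lifespan_years, horizon_years=100):
    """Units one family buys over the horizon, times price per unit."""
    units = horizon_years // lifespan_years
    return units * price

durable = revenue_per_family(price=1200, lifespan_years=100)  # one sale
flimsy = revenue_per_family(price=800, lifespan_years=10)     # ten sales

print(f"century-long fridge: ${durable}")  # $1200: better Profit-1, worse Profit-2
print(f"ten-year fridge:     ${flimsy}")   # $8000: worse Profit-1, better Profit-2
```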
Another fairly common situation is when the most profitable model of a tool already dominates a market. Let’s say cereal is a tool for augmenting human nutrition. There are only so many ways to reconstruct corn. Instead of abandoning the market, the market is reconstructed to serve the needs of producers.

The sociologist Jean Baudrillard describes this process as the development of a symbolic mode of production, as opposed to a traditional one. In a system of symbolic exchange, an object has four potential values: functional value, economic value, symbolic value, and sign value. The latter two distort the former two. Functional value relates to a tool’s use and its ability to perform it. Economic value relates to the need for that use in a territory, which affects how desired the object is. Symbolic value and sign value relate to what objects represent in a social system.

In my example of cereal, there are Froot Loops and fruit hoops. Fruit hoops are basically Froot Loops, but they are cheaper. They do not have a little prize, they come in a bag rather than a box, and they do not have Toucan Sam. You would think everyone would buy fruit hoops, since they serve exactly the same function at a much cheaper cost. However, this is not the case. People want the box; they want Toucan Sam. It is the same when you buy designer vs. knock-off clothes.
My point is that in the production of new tools, the producer is not just trying to secure functional efficiency; they are trying to create symbolic value. Sometimes what is symbolized is the degree of functional efficiency, but often it is not. There are hundreds of new technologies developed yearly that create new needs and new functions (many trivial) for the sake of securing symbolic value.
Baudrillard argues that the result is a symbolic mode of production: not one where demand drives supply, but one where new demands are created by producers to meet their supplies.
There’s a story that Ford learned that there was one part of the Model T that wore out much later than the rest of the car—I think it was the bumpers. That is, there were perfectly good bumpers sitting on otherwise useless Model Ts in the dump. Ford’s response? He decreased the quality (and cost) of the bumper so that it wore out when the rest of the car did.
You seem to think this was wrong of Ford, because he was maximizing his wealth without passing along any benefit to society as a whole. I suggest that you will be more analytically clear if you separate the terminology about wealth-maximizing from the terminology about normatively appropriate behavior. Profit is not generally understood to mean “bad wealth maximizing” in the general community, and you do yourself no favors in persuading others by trying to smuggle in a normative connotation into a descriptive term.
Profit is not generally understood to mean “bad wealth maximizing” in the general community, and you do yourself no favors in persuading others by trying to smuggle in a normative connotation into a descriptive term.
I don’t think I am smuggling anything; I clearly tried to explain what I meant. Also, in the grander scheme of things, Ford was wrong to maximize individual profit without thinking of group profit. Ford began making cars that were designed to break down faster so they could sell more. Because of this, by the turn of the century Ford was not as trusted by consumers as Japanese or European brands, and now Ford is desperately trying to reestablish a basis of trust with a larger consumer demographic.
Instead of Profit-1 and Profit-2, perhaps it would be simpler to say that there is individual profit and group profit, and that without a balance between the two, the stability of both the individual and society is threatened. I don’t see this as a personal bias; do you?
What’s more, I think your example sidesteps my original point. Do you not agree that individual profit is a driving force behind a large portion of technological development, and one that does not necessarily produce gains in efficiency? With all due respect, it seems to me that you, and perhaps this community in general, share a normative assumption that all technological development is universally beneficial, in the sense that it increases efficiency and decreases cost.
I am not arguing that no technological development is beneficial, but not all technological development meets that ideal. There is:

Technological development-1: increases efficiency and decreases cost.

Technological development-2: does not increase efficiency and might increase or decrease cost.

Technological development-3: decreases efficiency and increases group cost.

What evidence do you have to deny the existence of the third category?
I’m not sure that TD2 is a coherent category. If efficiency does not change, how can equilibrium price change?
TD3 could happen, but often won’t (absent a monopolistic situation), because the entity that could cause the change wouldn’t gain more from the change than society as a whole would lose. As I said, monopolistic situations, such as industry coordination, might make this change more likely, but modern legal regimes frown on violations of anti-trust laws.
More generally, I don’t know how to calculate “group profit” except as the sum of every person’s “individual profit.”
I’m not sure that TD2 is a coherent category. If efficiency does not change, how can equilibrium price change?
Adding a social or sign value to an existing tool can affect the demand for that tool and other tools of its type, even though there is no new functional innovation. I suppose you could technically call that increasing or decreasing its marketable efficiency, but I think it is important to acknowledge that this can happen completely divorced from any functional improvement.
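Here is a toy supply-and-demand sketch of what I mean, with every coefficient invented: the supply curve (the tool’s function and production cost) stays fixed, yet adding sign value shifts demand and moves the equilibrium price anyway.

```python
# Toy linear market: sign value shifts the demand curve while the supply
# curve (production cost/efficiency) stays fixed, so the equilibrium price
# changes with no functional innovation. All coefficients are invented.

def equilibrium(a, b, c, d):
    """Solve demand Qd = a - b*p against supply Qs = c + d*p."""
    p = (a - c) / (b + d)
    return p, a - b * p

# Same product, same production costs; the mascot raises willingness to pay.
plain = equilibrium(a=100, b=2.0, c=10, d=1.5)    # generic "fruit hoops"
branded = equilibrium(a=140, b=2.0, c=10, d=1.5)  # with Toucan Sam on the box

print(f"plain:   price={plain[0]:.2f}, quantity={plain[1]:.1f}")
print(f"branded: price={branded[0]:.2f}, quantity={branded[1]:.1f}")
```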
TD3 could happen, but often won’t (absent a monopolistic situation) because the entity that could cause the change wouldn’t gain more from the change than society as a whole would lose.
I find this statement to contradict the reality of markets. Take the medical industry: there are constantly dozens of new pills, prescriptions, and other drugs vying for FDA approval so that they may begin mass production for sale. Many of these products turn out to be harmful to individuals in one way or another, even ones that slip through approval. I would grant your point that there are very few examples of TD3 where the damage is immediately visible, large-scale, and publicly promoted; situations like those are often shut down fairly fast. However, there are plenty of TD3s that are not quite as visible and have longer-term effects.
For example, right now there is a new energy product, a pure caffeine spray, attempting to prove that it is no more dangerous than coffee. In a flat comparison, the two come out almost equivalent in terms of caffeine dosage. Because of this, the caffeine sprays will probably be approved, just as 5-hour energy drinks were. The effect on an individual is the same, but I would argue the relational effect is very different. Coffee is a slower, more social stimulant: whether it is where you buy it, where you make it, or who you drink it with, it fosters social relationships that I argue both moderate and benefit the user, curbing in some way the development of negative habits. The use of 5-hour energy drinks and caffeine sprays, by contrast, is faster and psychologically tied to paradigms of medical administration rather than sociality. Burst sprays and quick gulps are common ways of taking medicine, and medical use, traditionally, is culturally private rather than social. I doubt there is any research on this at the moment, but I would imagine that because the latter products use faster, less social methods of administration, they promote more negative side effects in their users than coffee does (just a hypothesis).
As to monopolies, I honestly don’t think monopolies have anything to do with what I am talking about. I see the system of checks and balances placed on TD3s as inherently flawed due to the degree of individualism coveted by our society. The system assesses damage much as you assess profit: individually, rather than relationally. There are many things that are individually neutral or beneficial, yet relationally harmful.
More generally, I don’t know how to calculate “group profit” except as the sum of every person’s “individual profit.”
The problem with measuring group profit by the sum of individual profit is:
1.) Defining what constitutes profit.
2.) The emergent qualities of systems.
By emergent qualities, what I mean is that oftentimes the worth of the whole system cannot be derived from its parts. For example, human bodies can be segregated into individual organs, but calculating the overall benefit of the body by the net benefit to each organ is not realistic; just as, if you further segregated the body into cells, it would be unrealistic to calculate the health of the body from the health of every individual cell. Some parts of the body, some cells, are designed to degenerate more quickly; some are designed to be more expendable. It is idealistic to rule out the possibility that a species, let alone a primarily social species such as humans, functions in a similar manner.
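This toy sketch below is not full-blown emergence, just the simplest gap I know of between the two accountings: if “individual profit” means what each actor sees in their own ledger, the sum can look positive while the group outcome is negative, because costs spread across others never land in any single ledger. All payoffs are invented.

```python
# Toy commons problem with invented payoffs: every agent's action looks
# profitable from inside their own ledger, but summing those perceived
# profits misses the externalities, so the group outcome is negative.

N = 10             # agents
GAIN = 1.0         # private gain to an agent who acts
EXTERNALITY = 2.0  # total cost each action spreads across all N agents

# Each agent's ledger: my gain, minus only my 1/N share of my own externality.
perceived = GAIN - EXTERNALITY / N       # +0.8, so everyone acts
naive_group = N * perceived              # +8.0 by naive summation

# The group's ledger counts every externality in full.
actual_group = N * GAIN - N * EXTERNALITY  # -10.0

print(f"each agent perceives:      {perceived:+.1f}")
print(f"sum of perceived profits:  {naive_group:+.1f}")
print(f"actual group outcome:      {actual_group:+.1f}")
```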
In my opinion, the historical atrocities of the 20th century have left Western academics so disgusted with the perversions of hierarchy that the overwhelming desire to avoid past mistakes causes most of the academy to shun this idea through connotations alone. In truth, I am afraid that by even voicing this idea I have stigmatized myself in this community. I hope that is not the case.
If you were to ask me how to generate group profit, I would suggest that what is needed is:
1.) An algorithm that measures homeostasis between social harmony and dissonance.
2.) A Bayesian approach to determining a desired ratio between social dissonance and harmony.
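As a toy sketch of what I have in mind (the prior, the data, and the target band are all invented): treat each observed interaction as harmonious or dissonant, keep a Beta posterior over the harmony rate, and define homeostasis as that rate staying inside a chosen band rather than being maximized.

```python
# Toy sketch: a Beta-Bernoulli posterior over the rate of harmonious
# interactions, checked against a homeostatic band rather than a maximum.
# The prior, observations, and band are all invented.

alpha, beta = 1.0, 1.0  # uniform prior over the harmony rate

# 1 = harmonious interaction observed, 0 = dissonant one.
observations = [1, 1, 0, 1, 0, 1, 1, 1, 0, 1]

for x in observations:      # conjugate Bayesian update
    alpha += x
    beta += 1 - x

harmony_rate = alpha / (alpha + beta)  # posterior mean

# Homeostasis: some dissonance (dissent, novelty) is desirable, so the
# target is a band, not "as much harmony as possible".
LOW, HIGH = 0.6, 0.9

print(f"estimated harmony rate: {harmony_rate:.2f}")
if harmony_rate < LOW:
    print("too much dissonance: the social fabric is under strain")
elif harmony_rate > HIGH:
    print("too much harmony: possibly stagnant or conformist")
else:
    print("within the homeostatic band")
```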
P.S. Sorry for being so long-winded; I couldn’t figure out a shorter way to say all that.
Your organ analogy is very illuminating. I agree that net benefit to particular organs is a funky way of trying to assess the benefit to the body as a whole (although it is probably possible). But note how you analogize individual people to organs of the body. Organs need other organs in a way that might not be true of human beings.
More generally, treating that kind of interdependence as inherent to human experience is almost totally inconsistent with micro-economic concepts like Adam Smith’s invisible hand. Concepts like profit and efficiency are heavily embedded in the individualistic model. In short, I think you should avoid using them to try to explain non-individualistic concepts. I would have understood your point much more easily if you had come out and said, “I don’t believe individualistic rational-actor analysis (aka economics) is maximizing what should be maximized.”
As an aside, I would be careful using the word “emergent” in this community. There is a historical usage of that word that was highly confused and misleading, and one of the foundation sequences attacks that precise type of confused thinking. In brief, saying “Human life arises out of the interactions of the organs via emergence” is no better than saying “Human life arises out of the interactions of the organs via magic”. I don’t think you are making that mistake when you use emergence, but the word is a trigger in this community. More on this general idea here, with some follow-up here. The whole first sequence is very interesting, if you have the time to invest.
Organs need other organs in a way that might not be true of human beings.
Peter L. Berger is a fairly famous sociologist who suggests that the human body is an organ within an organism constituted by a social network, a specific environment, and a specific culture. He argues that language acquisition is fundamental to being “human”: the initial development of a language comes from interaction with a specific environment, and its further growth depends on a network of other actors. Since language is dependent on networked bodies, places, things, and ideas, he concludes that the human organism is defined by this network rather than simply by the individual body.
I don’t believe in the individualistic rational actor, period. I agree that traditional economics is heavily embedded in the individualistic model, but there are plenty of branches of the field that reject this assumption.
As an aside, I would be careful using the word “emergent” in this community. There is a historical usage of that word that was highly confused and misleading, and one of the foundation sequences attacks that precise type of confused thinking.
Thanks for telling me. I must admit I have recently been a fan of emergentism as a theory within academia, but the critique you provide is interesting. I will be sure to read those articles.