Are the social sciences challenging because of fundamental difficulties or because of imposed ones?
Scholarship Status:
I’m not an expert in the social sciences. I’ve taken a few in-person and online courses and read several books, but I’m very much speculating here. I also haven’t done much investigation for this specific post. Take this post to be something like, “my quick thoughts of the week, based on existing world knowledge” rather than “the results of an extensive research effort.” I would be very curious to get the take of people with more experience in these areas.
TLDR:
The social sciences, in particular psychology, sociology, and anthropology, are often considered relatively ineffective compared to technical fields like math, engineering, and computer science. Contemporary progress in the social sciences seems to be less impactful, and college graduates in these fields face more challenging job prospects.
One reason for this discrepancy could be that the social sciences are fundamentally less tractable. An argument would go something like, “computers and machines can be deeply understood, so we can make a lot of progress around them, but humans and groups of humans are really messy and near impossible to make useful models of.”
I think one interesting take is that the social sciences aren’t fundamentally more difficult than technical fields, but rather that they operate under substantial limitations. Many of these limitations are intentional, and these might be the majority. By this I’m mainly referring to certain expectations of privacy and other ethical expectations in research, coupled with discomfort around producing social science insights that are “too powerful.”
If this take is true, it could shift conversations around progress in the social sciences to focus on possibly uncomfortable trade-offs between research progress, experimentation ethics, and long-term risks.
This is important to me because I could easily imagine advances in the social sciences being much more net-positive than advances in technical fields outside of AI. I’d like flying cars and better self-driving vehicles a great deal, but I’d like to live in a kind, coordinated, and intelligent world a whole lot more.
Introduction
When I went to college (2008-2012), it was accepted as common wisdom that human decision making was dramatically more complicated than machine behavior. The math, science, and engineering classes used highly specified and deterministic models. The psychology, anthropology, and marketing courses used what seemed like highly sketchy heuristics, big conclusions drawn from narrow experiments, and subjective ethnographic interpretations. Our reductionist and predictable models of machines allowed for the engineering of technical systems, but our vague intuitions of humans didn’t allow us to do much to influence them.
Perhaps it’s telling that engineers keep refusing to enter the domains of human and social factors; we don’t yet really have fields of “cultural engineering” or “personality engineering”, for instance. Arguably the cyberneticians and technocrats of the 20th century made attempts, but they fell out of favor.
Up until recently, I assumed that this discrepancy was due to fundamental difficulties around humans. Even the most complex software setups were rather straightforward compared to humans. After all, human brains are monumentally powerful compared to software systems, so they must be correspondingly challenging to deal with.
But recently I’ve been intrigued by a second hypothesis: that many aspects of the social sciences aren’t fundamentally more difficult to understand than technical systems, but rather that progress is deeply bottlenecked by ethical dilemmas and potentially dangerous truths.
Intentional Limitations:
1. Political Agendas
Political agendas are probably the most obvious intentional challenge to the social sciences. Generally, no one gets upset about which conclusions nuclear physicists arrive at, but people complain on Twitter when new research on sexual orientation is posted. There’s already a fair bit of discussion of a left bias in the social sciences, and it’s something I hear many academics complain about. My impression is that this is a limitation, but that the issue is a lot more complicated than a simple “we just need more conservative scientists” answer. Conservatives on Twitter get very upset about things too, and having two sides complain about opposite things doesn’t cancel the corresponding problems out.
So one challenge with agendas is that they preclude certain kinds of research. But I think a deeper challenge is that they change the incentives of researchers toward providing evidence for existing assumptions, rather than actively searching for big truths. Think tanks are known for this: for a certain amount of money, you can generally get macroeconomic work that supports seemingly any side of an economic argument. My read is that many anthropologists and sociologists do their work as part of a project to properly appreciate the diversity of cultures and lifestyles. There’s a fair amount of work on understanding oppression; typically this seems focused on defending the hypothesis that oppression has existed.
There’s a lot of value in agenda-driven work, where agenda-driven work is defined as: “there’s a small group of people who know something, and they need a lot of evidence to prove it to many more people.” I partake in work like this myself; any writing that promotes already-discussed research around the validity of forecasting fits this description. However, this work seems very different from work that finds totally new insights. Science done for the sake of convincing people of known things can be seen as an essentially highly respectable kind of marketing. Agenda-driven science uses the same tools as “innovation-driven” science, but the change in goals seems likely to produce correspondingly different outcomes.
2. Subject Privacy Concerns
I’ve worked a fair bit with web application architectures. They can be a total mess. Client layers on top of client APIs, followed by the entire HTTP system, load balancers, tangled spaghetti of microservices, several different databases. And compared to the big players (Google, Facebook, Uber), this was all nice and simple.
One of the key things that makes it all work is introspectability. If something is confusing, you can typically either SSH into it and mess around, or try it out in a different environment. There are hundreds of organized sets of logs for all of the interactions in and out. There’s an entire industry of companies that do nothing but help other tech companies set up sophisticated logging infrastructures. Splunk and Sumo Logic are both public, with a combined market cap of around $35 billion.
Managing all the required complexity would be basically impossible without all of this.
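To make “introspectability” concrete, here’s a minimal sketch of the kind of structured logging this industry is built around. The service name, fields, and events are all hypothetical placeholders, not any particular company’s setup:

```python
import json
import logging
import time

# Structured logs: every event is a machine-readable record that tools
# like Splunk can index, filter, and aggregate across millions of requests.
logging.basicConfig(level=logging.INFO)
logger = logging.getLogger("checkout-service")  # hypothetical service name

def log_event(event: str, **fields):
    """Emit one JSON-formatted log line with a timestamp and arbitrary fields."""
    record = {"ts": time.time(), "event": event, **fields}
    logger.info(json.dumps(record))

# Hypothetical usage: every interaction leaves a queryable trace.
log_event("request_received", user_id="u_123", endpoint="/cart")
log_event("db_query", table="orders", latency_ms=42)
log_event("request_failed", user_id="u_123", error="timeout")
```

The point is that every interaction with the system, down to the individual request, can be recorded and later queried when something needs debugging.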
Now, in human land, we don’t have any of this, mostly because it would invade privacy. Psychological experiments typically consist of tiny snapshots of highly homogeneous clusters (college students, for example). Surveys can be given to more people, but the main ones are highly limited in scope and often quite narrow. There’s typically little room to “debug”, which here would mean calling up particular participants and getting a lot more information from them.
What Facebook can do now is far more expansive and sophisticated than anything I know of in social science and survey design. They simply have dramatically more and better data. However, they don’t seem to have a particularly sophisticated team for generating academic insights from this data, and their attempts at actual experimentation haven’t gone very well. My guess is that the story “Facebook hires a large team of psychologists to investigate and test users” wouldn’t be received nicely, even if they hired the absolute most prestigious and qualified psychologists.
As artificial intelligence improves, it becomes possible to infer important information from seemingly minor data. For example, it’s reportedly possible to infer much of one’s Big Five personality traits from their Facebook profile, and there has been discussion of inferring sexuality from profile photos. Over time we should expect both that more data will be collected about people and that this data will go much further for analysis, because we can make superior inferences from it. So the Facebook of tomorrow won’t just have more data; it might be able to infer a whole lot about each user.
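As a toy illustration of what this kind of inference looks like mechanically, predicting a personality score from behavioral signals is just supervised learning. All data below is synthetic, and this is not any real pipeline; it only shows the shape of the technique:

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# Entirely synthetic example: rows are users, columns are behavioral
# signals (e.g., counts of likes on hypothetical page categories).
rng = np.random.default_rng(0)
X = rng.poisson(lam=3, size=(500, 20))                # fake "likes" features
true_weights = rng.normal(size=20)
y = X @ true_weights + rng.normal(scale=2, size=500)  # fake extraversion score

# Fit on 400 users, evaluate on a held-out 100.
model = LinearRegression().fit(X[:400], y[:400])
print("held-out R^2:", model.score(X[400:], y[400:]))
```

With real behavioral data at Facebook’s scale, far richer models than this linear sketch become possible, which is exactly the worry.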
I know that if I were in social science, I would expect to be able to discover dramatically more by working with the internal data of Facebook, the NSA, or the Chinese government, especially if I had a team of engineers to help prune the data and run advanced inference on it.
This would be creepy by basically all current social standards. It could get very, very creepy.
There could be ways of accepting reasonable trade-offs somewhere. Perhaps we could have better systems of differential privacy, so scientists could get valuable insights from large data sets without exposing any personal information. Maybe select groups of people could purposely opt in to extended study, ideally being paid substantially for the privilege. Something like We Live in Public, but more ordinary and on a larger scale. We may want intensive monitoring and regulation of any groups handling this kind of information. Those doing the monitoring should probably be the most monitored of all.
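To make the differential privacy idea concrete, here’s a minimal sketch of the classic Laplace mechanism. The scenario and parameter values are illustrative only:

```python
import numpy as np

def private_mean(values, lower, upper, epsilon):
    """Release a differentially private mean via the Laplace mechanism.

    Clamping each value to [lower, upper] bounds the sensitivity of the
    mean to (upper - lower) / n, so adding Laplace noise with scale
    sensitivity / epsilon yields epsilon-differential privacy.
    """
    values = np.clip(values, lower, upper)
    sensitivity = (upper - lower) / len(values)
    noise = np.random.laplace(loc=0.0, scale=sensitivity / epsilon)
    return values.mean() + noise

# Illustrative: a researcher queries average daily social-media minutes
# without being able to recover any individual's value.
usage_minutes = np.random.uniform(0, 300, size=10_000)  # synthetic data
print(private_mean(usage_minutes, lower=0, upper=300, epsilon=0.5))
```

The appeal is that aggregate findings stay accurate while the noise makes any single participant’s data nearly irrelevant to the output.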
On this note, I’d mention that I could imagine the Chinese government being in a position to spearhead social science research in ways not at all accepted in the United States. Arguably their propaganda sophistication is already quite good. I’m not sure whether their improvements in propaganda have led to fundamental insights about human behavior in general, but I would expect things to move in that direction, especially if they made a significant effort.
I imagine that this section on “privacy” could really be reframed as “ethical procedures and limitations.” Psychology has a long history of highly malicious experiments, and this has led to a heap of enforced policies and procedures. I imagine, though, that the main restrictions are still the ones that were almost too obvious to be listed as rules. Culture often produces more expansive and powerful rules than bureaucracy does (thinking about ideas from Critical Theory and Slavoj Žižek).
3. Power
Let’s step back a bit. It’s difficult to put a measure on progress in the social sciences, but I think one decent aggregate metric would be the ability to make predictions about individuals, organizations, and large societies. If we could predict them well, we could do huge amounts of good. We could recommend very specific therapeutic treatments to specific subpopulations and expect them to work. We could adjust the education system to promote compassion, creativity, and flourishing in ways that would have tangible benefits later on. We could quickly home in on the most effective interventions against global paranoia, jingoism, and racism.
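One standard way to operationalize “ability to make predictions” is a proper scoring rule. Here’s a minimal sketch using the Brier score; the forecasts and outcomes below are invented for illustration:

```python
def brier_score(forecasts, outcomes):
    """Mean squared error between probabilistic forecasts and 0/1 outcomes.
    Lower is better; uniform guessing (p=0.5 everywhere) scores 0.25."""
    return sum((f - o) ** 2 for f, o in zip(forecasts, outcomes)) / len(forecasts)

# Invented example: a model's probabilities that each of four social
# interventions would work, scored against what actually happened.
print(brier_score([0.9, 0.7, 0.2, 0.6], [1, 1, 0, 0]))  # 0.125
```

A field that steadily drove such scores down, across individuals, organizations, and societies, would plainly be making progress.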
But you can bet that if we got far better at “predicting human behavior”, the proverbial “bad guys” would be paying close attention.
In Spent, Geoffrey Miller wrote about attempts to investigate and promote ideas around using evolutionary psychology to understand modern decision making. Originally he and a few others tried to discuss the ideas with academic economists. They quickly realized that the people paying the closest attention were really the people in marketing research.
Research into cognitive biases and the power of nudges was turned into a “nudge unit” in the US, but I’m sure it was more frequently used by growth hackers. I’m not quite sure how advancements in neuroscience over the last few years have helped me or my circle, but I do know they have been studied for neuromarketing.
So I think that if we’re nervous about physical technological advances being used for destructive ends (environmental degradation and warfare come to mind), we should be doubly so about social ones.
Militarily, it’s interesting that there’s public support for bomber planes and “enhanced interrogation”, but information warfare and propaganda get a bad rap. There seems to be a deeper stigma against propaganda than there is against nuclear weapons. Relatedly, the totalitarian regime of Nineteen Eighty-Four relied on social technologies (an engineered language and cultural control) rather than the advanced use of machines.
Sigmund Freud was the uncle of Edward Bernays, an Austrian-American pioneer of “public relations” who wrote the books Crystallizing Public Opinion (1923) and Propaganda (1928). Bernays extended ideas from psychology in his work. In general, the public at that time seems to have had a positive association with propaganda as a tool for good. That association changed with the visible and evident use of propaganda during WWII shortly afterwards. This seems to have been about as big a reversal as the public stance on eugenics.
If the standard critiques of psychological study are that it’s not effective or useful, the critique of propaganda would be anything but. It’s too powerful. Perhaps it can be used for massive amounts of good, but it can clearly be and historically was used for catastrophic evil.
Unintentional Limitations:
There are, of course, many unintentional problems with the social sciences too. There’s the whole replication crisis, for one. I imagine that better tooling could make a huge difference; Positly comes to mind. The social sciences could also use more of the basic stuff: money, encouragement, and talent. I’m really not sure how these compare to the above challenges.
What are the Social Sciences supposed to do?
I think this all raises a question: what are the social sciences really for? More specifically, what outputs are people in and around these fields hoping to accomplish in the next 10 to 100 years? What would radical success look like? It’s assumed that fundamental breakthroughs in chemistry will lead to improvements in consumer materials and that advances in engineering will lead to the once-promised flying cars. With psychology, anthropology, and sociology, I’m really not sure what successes would be both highly impactful and also socially tolerated.
If the path to impact is “improve the abilities of therapists and human resources professionals”, I imagine gains will be limited. I think that these professions are important, but don’t see changes in them improving the world by over 20% in the next 20 to 50 years. If the path is something like, “interesting knowledge that will be transmitted by books and educational materials directly to its users”, then the outputs would need to be highly condensed. Most people don’t have much extra time to study material directly.
If the path is, “help prove out cases against racial and gender inequality” (and similar), I could see this working to an extent, but it would seem like a fairly limited agenda. Agenda-driven scientific work is often too scientific to be convincing to most people (who prefer short opinion pieces and fiction), and too narrow to be groundbreaking. It serves a useful function, but generally not a function of radical scientific progress.[1]
There are some research direction proposals that I could see being highly impactful, but those correlate strongly with the ones that verge on being dangerous. This is especially the case because many of them may require coordinated action, and it’s not clear which modern authorities are trusted enough to carry out large coordinated actions.
Possible research directions:
Systems to predict and signal which people and activities will lead to good or bad things.
Support for aligning human intuitions with arbitrary traits. (Like, making children particularly patriotic or empathetic)
Cultural engineering to optimize cultures for arbitrary characteristics.
Sophisticated chat bots that would outperform current humans at delivering psychological help, friendship, and possibly romance.
Systems that would make near-optimal recommendations on what humans should do on near all aspects of their lives.
Here’s another way of looking at it. Instead of asking what people in the fields are expecting, ask what regular people think will happen. What do most people expect of the social sciences in 30, 60, 200 years? I’d guess that most people would assume that psychology will make minor advances for the future of therapy, and that maybe sociology and anthropology will provide more evidence in favor of contemporary liberal values. Maybe they’ll make a bunch of interesting National Geographic articles too.
If you ask people what they expect from engineering, I imagine they’d start chatting about exciting science fiction scenarios. They might not know what a transistor is, but they could understand that cars could drive themselves and maybe fly too. This seems like a symptom of the focus on technology in science fiction, though that could itself be a symptom of more fundamental issues. Perhaps one should look to utopian literature instead. I’m not well read on utopian fiction, but from what I know, the focus tends to be on “groups that seem well run” as opposed to “groups that are well run due to sophisticated advances in the social sciences.”
Predictions to anchor my views
Given the generality of the discussion above, it’s hard to make incredibly precise predictions that are meaningful. I’m mainly aiming for this piece to suggest a perspective rather than convince people of it.
Here are some slightly specific estimations I would make:
If all agenda-driven social science incentives were removed, it would increase “fundamental innovations” by 5% to 40%, 90% credence interval (though a lot of other value would be lost). A sketch of how I’d encode such intervals as distributions follows this list.
If privacy concerns were totally removed, and social scientists could easily partner with governments and companies (a big if!), it would increase “fundamental innovations” by 20% to 1,000%.
If power concerns were totally removed, it would increase “fundamental innovations” by 5% to 10,000%.
If $1 billion were spent effectively on great software tooling for social scientists (think how it would be spent by an aligned tech company; not that this will happen), in the current climate, it would increase “fundamental innovations” by 2% to 80%. Note that I’d expect the government to be poor at spending here, so if it were to attempt this, I would expect it to cost $100 billion for the equivalent impact.
If “fundamental innovations” in the social sciences were improved by 1000% over the next 50 years, it would slightly increase the chances of global totalitarianism, but it has a chance (>30%) of otherwise being dramatically positive.
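For anyone who wants to play with these numbers: an interval like “20% to 1,000%” spans orders of magnitude, so a lognormal is a natural encoding. A minimal sketch using scipy; the lognormal choice and parameterization are my own illustrative assumptions, not anything rigorous:

```python
import numpy as np
from scipy import stats

def lognormal_from_90ci(low, high):
    """Fit a lognormal whose 5th/95th percentiles match a 90% credence interval."""
    z = stats.norm.ppf(0.95)  # ~1.645
    mu = (np.log(low) + np.log(high)) / 2
    sigma = (np.log(high) - np.log(low)) / (2 * z)
    return stats.lognorm(s=sigma, scale=np.exp(mu))

# The privacy estimate above: a 0.20x to 10x (i.e., 20% to 1,000%)
# increase in "fundamental innovations".
dist = lognormal_from_90ci(0.20, 10.0)
print("median increase:", dist.median())        # geometric midpoint, ~1.41x
print("P(increase > 100%):", 1 - dist.cdf(1.0))
```

This makes it easy to compare the estimates, or feed them into a larger model, instead of eyeballing the raw ranges.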
Takeaways
Apologies for the meandering path through this article. I’m attempting to make broad, sweeping hypotheses about huge fields that I haven’t ever worked in; this is exactly the kind of analysis I’m typically wary of.
At this point I’m curious to better understand whether there are positive trajectories for the social sciences that are both highly impactful and acceptable both to key decision makers and to society. I’m not sure there really are.
Arguably the first challenge is not to make the social sciences more effective, but to clear up confusion over what they should be trying to accomplish in the first place. Perhaps the most exciting work is too hazardous to attempt. Work further outlining the costs and benefits seems fairly tractable and important to me.
Many thanks to Elizabeth Van Nostrand, Sofia Davis-Fogel, Daniel Eth, and Nuño Sempere for comments on this post. Some of them pointed out some severe challenges that I haven’t totally addressed, so any faults are really my own.
[1] I realize that these fields are complicated collections of individuals and incentives that are probably optimizing for a large set of goals, which likely include some of the ones mentioned here. I’m not suggesting there should be one universal goal, but I do think that I and many readers would be interested in viewing the social sciences through a consequentialist lens.
Yeah. Both sides would agree that the answers are already known. Instead we need people looking for answers in the territory, doing experiments, and evaluating them impartially.
Okay, temporarily, adding people from underrepresented political groups would help get their standard answers considered too. But what we need in the long term is people without a strong political agenda. (And yes, the predictable counter-argument is that people “without an agenda” are actually proponents of the status quo. But what I mean is not being for or against the status quo, but having the ability to change one’s mind after observing evidence.)
As long as they can keep the “it doesn’t matter whether a cat is black or white, as long as it catches mice” approach, it could work. But who knows how long before the pendulum swings in the opposite direction.
I have an idea (not sure whether it has been tried before): do research on people who are willing to pay for self-improvement. I mean, some people already participate in all kinds of programs that promise to fix their problems and improve their skills; they are even willing to spend a lot of money. So maybe we could popularize research on paying volunteers, which would simultaneously solve the problem of funding the research.
Like, imagine that psychologists have 3 different ideas of how to… get rid of bad habits, or whatever, and they open a program for volunteers who will pay a relatively small sum to be randomly assigned to one of 3 groups and given the corresponding therapy. The client receives scientific treatment for relatively little money; the researcher gets a lot of volunteers. The disadvantage is that the volunteers are not representative of the population in general, but let’s not kid ourselves, neither are the subjects of most experiments. We could simply admit that we research which techniques work best for the research volunteers, and then do the research on them properly.
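To make the proposed design concrete, here’s a minimal sketch of the random assignment step; the treatment names are hypothetical placeholders:

```python
import random

def assign_groups(volunteer_ids, treatments=("habit_reversal", "cbt", "mindfulness")):
    """Shuffle volunteers and deal them round-robin into equal-sized treatment arms.
    Treatment names are hypothetical placeholders, not real protocols."""
    ids = list(volunteer_ids)
    random.shuffle(ids)
    return {t: ids[i::len(treatments)] for i, t in enumerate(treatments)}

groups = assign_groups(range(300))
print({t: len(members) for t, members in groups.items()})  # 100 per arm
```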
Thanks for the thoughts here!
I’m not sure about your specific proposal, but in general I imagine there’s a lot of room to experiment with “paying people to be test subjects in interesting ways”. This can only go so far in the short term, but as technology improves it will probably get easier to manage.
I believe Veritas Genetics gives large discounts to people who are willing to give up their genetic data for scientific use, for instance.
https://www.cnbc.com/2019/07/01/for-600-veritas-genetics-sequences-6point4-billion-letters-of-your-dna.html
One way to think about what they are supposed to do is by analogy to other technologies. Think about something that currently takes a lot of labor and project a future where it mostly happens in the background with occasional maintenance. Even the Amish use washing machines, because the Amish womenfolk insisted; the energy delta is just too high. I see people spending a lot of time and labor on monitoring and maintaining social status and political affiliations. A world without such labor sounds unimaginable at first glance. But what if it’s possible?
I will elaborate on this later, but the social sciences have so, so many variables to consider. Our whole idea of social science is based on human social interaction, and good luck understanding even the person next to you, or your partner, let alone the other 7-8 billion people. Social science is hard. It’s not as black and white as some other sciences seemingly are. It’s a “how could you possibly understand this” scenario...
Agreed that humans are complicated, but I still think there are a lot of reasons to suggest that we can get pretty far with relatively obtainable measures. We have the history of ~100 billion people at this point to guide us. There are clear case studies of large groups being influenced in intentional, predictable ways. Religious groups and wartime information efforts clearly worked. Marketing campaigns clearly seem to influence large numbers of people in relatively predictable ways. And much of that has been done with what seems like relatively little scientific understanding compared to what could be done with modern internet data.
We don’t need perfect models of individuals to be able to make big, deliberate changes.
The (meta-)field of Digital Humanities is fairly new. Estimating its successes and its challenges would help me form a stronger opinion on this matter.
I think in a more effective world, “Digital Humanities” would just be called “Humanities” :)
Thanks for this interesting idea!
I’m curious how you think this explains historical scientific progress. It seems like Ptolemy 2,000 years ago could make better predictions about celestial mechanics than sociologists today can make about crime rates, for example. Modern pollsters have access to way more data points than Ptolemy, but I think his predictions about the position of a planet 10 years in the future are more accurate than pollsters’ predictions of elections 10 weeks in the future.
It seems hard to explain this without somehow describing celestial mechanics as “simpler”.
I think that, as far as science goes, many of the old celestial mechanics findings were rather “simple”. Human systems definitely seem less predictable than those. However, there are many other technical things we can predict well that aren’t human. Computer infrastructures are far more complicated than much of celestial mechanics, and we can predict their behavior decently well (expecting that computers won’t fail for a certain duration, or expecting very complex chains of procedures to continue to function).
We’re also expected to be able to predict general population trends 10-50 years out into the future. There are definitely some aggregate human behaviors that are fairly easy to predict. We can similarly predict with decent certainty that many things won’t happen. The US, for all of its volatility, seems very unlikely to become a radical Buddhist nation or split up into many parts any time soon. In many ways modern society is quite boring.
The US also has a somewhat homogeneous culture. Many people are taught very similar things, watch similar movies, etc. We can predict quite a bit already about people.
https://www.forbes.com/sites/kashmirhill/2012/02/16/how-target-figured-out-a-teen-girl-was-pregnant-before-her-father-did/?sh=4d09ff3a6668
(Sorry to focus on the US, but it’s one example that comes to mind, and easier to discuss than global populations at large)
There’s an inherent difficulty you don’t list. You might file it under “political agendas,” but the big problem isn’t the external constraint of conscious agendas; it’s people fooling themselves.
I’m wondering if you’re making a mistake expecting engineering from a scientific field. It seems to me we do very much have some early facsimiles of those kinds of engineering: marketing, propaganda, therapy, and meditation. They’re imprecise, but (not in every case, but increasingly so) very much about using the insights of the social sciences to create real-world impact. Industrial and organizational psychology also comes to mind.
One of the problems, I think, is that most people outside chemistry don’t think they’re chemists, but most people think they understand humans well enough that they don’t believe that the social sciences offer meaningful, actionable insights, and feel comfortable ignoring even the most well-grounded advice from research in these fields. Sometimes these are (or should be) really obvious, with decades of research consistently pointing in the same directions (e.g. open offices cost more in productivity than they save in square footage and don’t increase collaboration). And yet, here we are.
I agree we do have things similar to engineering, but these fields seem to be practiced differently than they would be in the hands of engineers. Industrial engineering is considered a field of engineering, but operations research is often considered part of “applied mathematics” (I think). I find it quite interesting that information theory is typically taught as an “electrical engineering” class, but its applications are really all over the place.
My honest guess is that the reason some things are considered “engineering”, and thus respected and organized as an option for “engineers”, while other areas that could be are not, often comes down to cultural and historical factors. The lines seem quite arbitrary to me right now.