I think there are two mechanisms:

Public image is important to companies like Facebook and Google. I don’t think that they will charge for a user-aligned version, but I also don’t think there would be much cost to ad revenue from moving in this direction. E.g. I think they might cave on the fake news thing, modulo the proposed fixes mostly being terrible ideas. Optimizing for user preferences may be worth it for the sake of a positive public image alone.
I don’t think that Facebook ownership and engineers are entirely profit-focused; they will sometimes do things just because they feel it makes the world better at modest cost. (I know more people at Google and am less informed about FB.)
Relating the two: if e.g. Google organized its services in this way, if the benefits were broadly understood, and if Facebook publicly continued to optimize for things that its users don’t want optimized, I think it could be bad for Facebook’s image (with customers, and especially with prospective hires).
I’d be quite surprised if any of these happened.
Does this bear on our other disagreements about how optimistic to be about humanity? Is it worth trying to find a precise statement and making a bet?
I’m probably willing to give > 50% on something like: “Within 5 years, there is a Google or Facebook service that conducts detailed surveys of user preferences about what content to display and explicitly optimizes for those preferences.” I could probably also make stronger statements re: scope of adoption.
And why isn’t it a bad sign that Facebook hasn’t already done what you suggested in your post?
I think these mechanisms probably weren’t nearly as feasible 5 years ago as they are today, owing to gradual shifts in organization and culture at tech companies (especially concerning ML). And public appetite for more responsible optimization has been rapidly increasing. So I don’t think inaction so far is a very strong sign.
Also, Facebook seems to sometimes do things like survey users on how much they like content, and make ad hoc adjustments to its optimization in order to produce more-liked content (e.g. downweighting like-baiting posts). In some sense this is just a formalization of that procedure. I expect that formalizing such optimizations will become more common over the coming years, due to a combination of the increasing usefulness of ML and cultural change to accommodate ML progress.
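To make the idea of “formalizing” this concrete, here is a minimal, purely hypothetical sketch (not a description of Facebook’s actual pipeline; every feature name and number is invented): fit a model to predict surveyed satisfaction, then rank candidate feed items by that prediction rather than by predicted engagement.

```python
# Hypothetical sketch: optimize feed ranking for surveyed satisfaction
# instead of engagement. All features and data are made up for illustration.
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)

# Each row is a feed item shown to a surveyed user; columns are invented
# item features (e.g. is_from_friend, is_like_bait, topical_relevance).
features = rng.random((500, 3))

# Survey responses: "how much did you like seeing this?" on roughly a 1-5 scale.
survey_rating = (
    3.0 + 1.5 * features[:, 0] - 2.0 * features[:, 1] + rng.normal(0, 0.3, 500)
)

# Fit a predictor of surveyed satisfaction rather than of clicks/engagement.
model = Ridge().fit(features, survey_rating)

# Rank a batch of candidate items for a feed by predicted satisfaction.
candidates = rng.random((20, 3))
ranking = np.argsort(-model.predict(candidates))
print("Feed order (most to least preferred):", ranking)
```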
I’m curious whether you occasionally unblock your Facebook newsfeed to check if things have gotten better or worse. I hadn’t been using Facebook much until recently, but I’ve noticed a couple of very user-unfriendly “features” that seem to indicate that FB just doesn’t care much about its public image. One is suggested posts (e.g., “Popular Across Facebook”) that are hard to distinguish from posts from friends, and difficult to ad-block (since they look just like regular posts in the HTML). Another is fake instant-message notifications on the mobile app whenever I “friend” someone new, which try to entice me into installing Facebook’s instant messaging app (only for me to find out that the “notification” merely says I can now instant message that person). If I don’t install the IM app, I get more and more of these fake notifications (2 from one recent “friend” and 4 from another).
Has it always been this bad or even worse in the past? Does it seem to you that FB is becoming more user-aligned, or less?
ETA: I just saw this post near the top of Hacker News, pointing out a bunch of other FB features designed to increase user engagement at the expense of users’ actual interests. The author seems to think the problem has gotten a lot worse over time.
I think that Facebook’s behavior has probably gotten worse over time, as part of a general move towards cashing in / monetizing.
I don’t think I’ve looked at my feed in a few years.
On the original point: I think at equilibrium services like Facebook maximize total welfare, then take their cut in a socially efficient way (e.g. as payment). I think the only question is how long it takes to get there.
“I think at equilibrium services like Facebook maximize total welfare, then take their cut in a socially efficient way (e.g. as payment). I think the only question is how long it takes to get there.”
I wonder if you have changed your mind about this at all. Unless I’m misunderstanding you somehow, this seems like an important disagreement to resolve.
“On the original point: I think at equilibrium services like Facebook maximize total welfare, then take their cut in a socially efficient way (e.g. as payment). I think the only question is how long it takes to get there.”
Why? There are plenty of theoretical models in economics where at equilibrium total welfare does not get maximized. See this post and the standard monopoly model for some examples. The general impression I get from studying economics is that the conditions under which total welfare does get maximized tend to be quite specific and not easy to obtain in practice. Do you agree? In other words, do you generally expect markets to have socially efficient equilibria and expect Facebook to be an instance of that absent a reason to think otherwise, or do you think there’s something special about Facebook’s situation?
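For concreteness, a sketch of the standard textbook monopoly model (linear demand, constant marginal cost), in which the equilibrium output falls short of the welfare-maximizing output and leaves a deadweight loss:

```latex
% Linear demand P(q) = a - bq with a > c > 0, b > 0; constant marginal cost c.
\begin{align*}
  \text{Monopolist's problem:}\quad & \max_q \; (a - bq - c)\,q
      \;\;\Rightarrow\;\; q_m = \frac{a-c}{2b}, \qquad P_m = \frac{a+c}{2}\\
  \text{Welfare-maximizing output:}\quad & P(q^*) = c
      \;\;\Rightarrow\;\; q^* = \frac{a-c}{b} > q_m\\
  \text{Deadweight loss:}\quad & \tfrac{1}{2}\,(q^* - q_m)(P_m - c)
      = \frac{(a-c)^2}{8b} > 0
\end{align*}
```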
“I’m probably willing to give > 50% on something like: ‘Within 5 years, there is a Google or Facebook service that conducts detailed surveys of user preferences about what content to display and explicitly optimizes for those preferences.’”
The Slate article you linked to seems to suggest that Facebook already did something like that, and then backed off from it:
“Crucial as the feed quality panel has become to Facebook’s algorithm, the company has grown increasingly aware that no single source of data can tell it everything. It has responded by developing a sort of checks-and-balances system in which every news feed tweak must undergo a battery of tests among different types of audiences, and be judged on a variety of different metrics. …”
“At each step, the company collects data on the change’s effect on metrics ranging from user engagement to time spent on the site to ad revenue to page-load time. Diagnostic tools are set up to detect an abnormally large change on any one of these crucial metrics in real time, setting off a sort of internal alarm that automatically notifies key members of the news feed team.”
I think concern about public image can only push a company so far. Presumably all the complaints we’re seeing aren’t news to Facebook. They saw this coming, or should have seen it coming, years ago, and what they’ve done so far seems like the best predictor of what they’d be willing to do in the future.
If I understand correctly, what you’re proposing that’s different from what Facebook is already doing is: 1) fully automated end-to-end machine learning optimizing only for user preferences and specifically not for engagement/ad revenue, 2) optimizing for preferences-upon-reflection instead of current preferences, and maybe 3) trying to predict and optimize for each user’s individual preferences instead of using aggregate surveyed preferences (which is what it sounds like Facebook is currently doing).

1) seems unlikely because Facebook ultimately still cares mostly about engagement/ad revenue and is willing to optimize for user preferences only insofar as it doesn’t significantly affect its bottom line. So they’ll want to either maintain manual control to override user preferences when needed, or not purely target user preferences, or both.

2) might happen to some greater extent, but presumably there are reasons why they haven’t done more in this direction already.

3) I think Facebook would be worried that doing this would make them even more vulnerable to charges of creating filter bubbles, undermining democracy, etc.