I’m probably willing to give > 50% on something like: “Within 5 years, there is a Google or Facebook service that conducts detailed surveys of user preferences about what content to display and explicitly optimizes for those preferences.”
The Slate article you linked to seems to suggest that Facebook already did something like that, and then backed off from it:
“Crucial as the feed quality panel has become to Facebook’s algorithm, the company has grown increasingly aware that no single source of data can tell it everything. It has responded by developing a sort of checks-and-balances system in which every news feed tweak must undergo a battery of tests among different types of audiences, and be judged on a variety of different metrics. …”
“At each step, the company collects data on the change’s effect on metrics ranging from user engagement to time spent on the site to ad revenue to page-load time. Diagnostic tools are set up to detect an abnormally large change on any one of these crucial metrics in real time, setting off a sort of internal alarm that automatically notifies key members of the news feed team.”
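As a toy illustration of the kind of real-time alarm the quote describes: below is a minimal sketch of a metric-change monitor. All metric names, baseline values, and thresholds are invented placeholders for illustration; nothing here describes Facebook's actual (non-public) system.

```python
# Hypothetical sketch of a real-time metric alarm of the kind the quote
# describes. Metric names, values, and thresholds are invented placeholders.
from dataclasses import dataclass

@dataclass
class MetricWindow:
    name: str
    baseline: float         # value before the news feed tweak
    current: float          # value observed after rollout
    alert_threshold: float  # max tolerated relative change, e.g. 0.05 = 5%

def check_metrics(windows: list[MetricWindow]) -> list[str]:
    """Return descriptions of metrics whose relative change exceeds threshold."""
    alarms = []
    for w in windows:
        relative_change = abs(w.current - w.baseline) / max(abs(w.baseline), 1e-9)
        if relative_change > w.alert_threshold:
            alarms.append(f"{w.name}: {relative_change:.1%} change vs baseline")
    return alarms

if __name__ == "__main__":
    metrics = [
        MetricWindow("user_engagement", baseline=0.62, current=0.61, alert_threshold=0.05),
        MetricWindow("time_on_site_min", baseline=50.0, current=43.0, alert_threshold=0.05),
        MetricWindow("ad_revenue_index", baseline=1.00, current=1.01, alert_threshold=0.05),
        MetricWindow("page_load_time_ms", baseline=900.0, current=980.0, alert_threshold=0.05),
    ]
    for alarm in check_metrics(metrics):
        print("ALARM:", alarm)  # in practice this would notify the feed team
```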
I think concern about public image can only push a company so far. Presumably none of the complaints we're seeing are news to Facebook. They saw this coming, or should have seen it coming, years ago, and the system described in the quotes above is what they've built, which seems like the best predictor of what they'd be willing to do in the future.
If I understand correctly, what you're proposing differs from what Facebook is already doing in three ways: 1) fully automated end-to-end machine learning that optimizes only for user preferences, and specifically not for engagement/ad revenue; 2) optimizing for preferences-upon-reflection instead of current preferences; and maybe 3) trying to predict and optimize for each user's individual preferences instead of using aggregate surveyed preferences (which is what it sounds like Facebook is currently doing).
1) seems unlikely, because Facebook ultimately still cares mostly about engagement/ad revenue and is willing to optimize for user preference only insofar as that doesn't significantly affect the bottom line. So they'll want to either maintain manual control to override user preference when needed, or not purely target user preference, or both. (The sketch after point 3 contrasts these two objectives.)
2) might happen to a somewhat greater extent, but presumably there are reasons why they haven't already done more in this direction.
3) I think Facebook would worry that doing this would make them even more vulnerable to charges of creating filter bubbles, undermining democracy, etc.
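To make the contrast in 1) concrete, here is a minimal sketch of a ranker trained purely on surveyed user preference versus one trained on a blended objective where engagement dominates. Everything here, the loss weights, feature sizes, and the idea of using survey responses directly as training labels, is my own assumption for illustration, not a description of any real ranking system.

```python
# Minimal sketch contrasting the two objectives discussed above: a pure
# user-preference target (the proposal) versus the blended target that the
# bottom-line argument predicts Facebook would keep. All weights and shapes
# are assumptions for illustration.
import torch
import torch.nn as nn

class FeedRanker(nn.Module):
    def __init__(self, n_features: int):
        super().__init__()
        self.score = nn.Linear(n_features, 1)  # score for one candidate item

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.score(x).squeeze(-1)

ranker = FeedRanker(n_features=128)
items = torch.randn(32, 128)   # a batch of candidate feed items
survey_pref = torch.rand(32)   # surveyed "do you want this?" labels in [0, 1]
engagement = torch.rand(32)    # observed click/dwell signal in [0, 1]

pred = torch.sigmoid(ranker(items))

# Proposal (1): train end to end on surveyed preference alone.
pure_preference_loss = nn.functional.binary_cross_entropy(pred, survey_pref)

# Predicted reality: preference is only one term, with engagement/revenue
# dominating, so the optimum can diverge from what users say they want.
blended_loss = (0.2 * nn.functional.binary_cross_entropy(pred, survey_pref)
                + 0.8 * nn.functional.binary_cross_entropy(pred, engagement))
```

The point of the contrast: whichever loss gets backpropagated is what the feed actually optimizes, so "we also survey users" is compatible with user preference being a small term in a mostly engagement-driven objective.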