Killing Moloch with “Nudge tech”?
Context
I come from the SaaS startup world and I intend to help kill some inadequate equilibria in the next few years, specifically around the attention economy.
I’ll outline what I’m thinking, below, and would appreciate any help from the LW community:
Specific questions I hope to answer:
From a god’s-eye-view, where is the project likely to fail?
How might you approach accomplishing a similar goal? What would you change?
What should I read / who should I talk with to learn more?
Project Description
“Turn tech from addicting to revitalizing, with nudge tech”
A continually improving “nudge tech” that digitally overlays computers and smartphones and gently improves quality of life for the individuals and populations who opt in.
It will lessen the pull of companies that legally but unethically get people to self-sabotage (e.g. phone addiction, sleep deprivation, empty content, and empty calories)
It will minimize the tribalistic tendency of humans to ‘other’ populations by popping algorithmic bubbles
It will increase serendipity and innovation in its users by exposing them to relevant and timely diverse viewpoints and opportunities
It will be continually improved by open-source plugin contributions; plugins will be tested automatically and successful ones will be rolled out to those who opt in.
Nudge tech—definition
Nudge tech is technology that changes the probability of specific human behaviors by modifying one or more of the following:
What behavior prompts a person gets
The difficulty of accomplishing a behavior
The reward/punishment from a given behavior
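The three levers above could be expressed as a tiny plugin-style interface. This is a purely illustrative sketch; the class, field names, and example nudge are my own assumptions, not an existing API:

```python
from dataclasses import dataclass
from typing import Callable

# Hypothetical sketch: a "nudge" acts on the three levers above.
@dataclass
class Nudge:
    name: str
    # Lever 1: which behavior prompts (e.g. notifications) get through
    filter_prompt: Callable[[str], bool]
    # Lever 2: added friction (seconds of delay) before the behavior
    friction_seconds: float
    # Lever 3: reward/punishment adjustment applied after the behavior
    reward_delta: float

def apply_prompts(nudge: Nudge, prompts: list[str]) -> list[str]:
    """Deliver only the prompts the nudge allows through."""
    return [p for p in prompts if nudge.filter_prompt(p)]

# Illustrative nudge: suppress social-media prompts, add a 10-second
# delay before opening the app, and apply a small penalty on use.
night_mode = Nudge(
    name="night mode",
    filter_prompt=lambda p: "social" not in p,
    friction_seconds=10.0,
    reward_delta=-1.0,
)

prompts = ["social: new likes!", "calendar: meeting at 9am"]
print(apply_prompts(night_mode, prompts))  # only the calendar prompt survives
```

A real implementation would hook into OS-level notification and app-launch APIs, but the point here is just that each lever is an independently configurable knob.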
Nudge tech is used by companies like Facebook and Netflix to keep users engaged and therefore make money. But the same techniques can be used to defend against exploitation by those companies.
That’s it! I’m looking forward to your thoughts.
Interesting idea, but sparse on how you’d actually achieve this. What is your vision for what an MVP would actually do?
And, if you succeed, what stops this from becoming evil after all? Writing “Don’t be evil” helps, but it’s not enough.
How are you going to make money off of this? Without money there is a real risk of the project slowly dying out.
Good luck!
As for “Don’t Be Evil”—this is something I am concerned about.
Methods of monetization must be as closely aligned with positive outcomes for the end user as possible. From what I hear, Vanguard is a great model for doing this well. I haven’t yet studied the specifics.
There must be a moat that prevents less scrupulous companies from growing faster with a copycat product. One method would be to be donation-driven or government-funded. Another would be strong branding that educates users about why other monetization models aren’t a good idea. This last one feels weaker, though.
Thoughts on any of this are welcome!
Thank you for the questions and feedback, Bastiaan!
I’ll answer your questions about the MVP and money together; here is a yet-untested problem to solve:
People spend hours per day on their phones and fall asleep with them; this disrupts their sleep, productivity, and relationships.
I haven’t yet looked closely at how to solve this but some approaches might be a combination of:
Disrupting push notifications from offending apps
Sending counter-push-notifications to disrupt people in flow on the offending apps
Selectively hiding or de-colorizing the most offending app icons
Gamifying or otherwise disincentivizing high phone engagement (e.g. opt-in monetary penalty, some sort of social/addiction score)
Teaching people skills to disengage from phones using CBT
Accountability partners who can see the other’s engagement
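As a minimal sketch of the first approach (disrupting push notifications from offending apps), one could imagine a usage-aware gate like the one below. The app names, the 30-minute budget, and the decision rule are all hypothetical assumptions chosen for illustration:

```python
from collections import defaultdict

# Hypothetical notification gate: drop notifications from apps the user
# has flagged as "offending" once daily usage exceeds an opt-in budget.
class NotificationGate:
    def __init__(self, offending_apps: set[str], daily_budget_minutes: int):
        self.offending = offending_apps
        self.budget = daily_budget_minutes
        self.minutes_used = defaultdict(int)  # app name -> minutes today

    def record_usage(self, app: str, minutes: int) -> None:
        self.minutes_used[app] += minutes

    def allow(self, app: str) -> bool:
        """Should a push notification from this app get through?"""
        if app not in self.offending:
            return True
        return self.minutes_used[app] < self.budget

gate = NotificationGate({"shortvideo"}, daily_budget_minutes=30)
gate.record_usage("shortvideo", 45)
print(gate.allow("shortvideo"))  # False: over budget, notification suppressed
print(gate.allow("mail"))        # True: not an offending app
```

The same structure could back several of the other approaches too (e.g. the opt-in monetary penalty or the accountability-partner view would just read the same usage ledger).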
Not only do these approaches need to be proven to work, they also need to be self-funding, as you mentioned.
I’ll have to do some research first, but I assume there are opportunities for this to be self-funded—people do pay money for diet apps, exercise apps, and certain types of specialized phone alarms.
Is that a typo? Those are in no way specific. A specific question would be to fill in the mad-lib of “I notice a common X behavior in Y situation, and I hypothesize that Z would interrupt the thoughtless process and lead to a different equilibrium”. What parts of this should I test first, and how?
Thank you for the comment, Dagon :).
I was (and am) looking for feedback at a high level: I want to use “nudge tech” to influence the behavior of large groups of people, and I’m wondering where large projects like this tend to fail. One example: groups of people could become suspicious that their data is being collected if it isn’t properly anonymized or provably kept on their phones.
That being said, most people I’ve talked with these last few days are hungry for specific examples—I haven’t done enough customer research yet to be sure, but I’ve shared a somewhat more specific example in another comment to Bastiaan!