As Gwern already said, Cambridge Analytica did not produce large preference drift. Here, it’s worth understanding why you believed that it did. There are actors whose political goals are served when people believe Cambridge Analytica had a lot of influence. Overplaying the effects of its own actions also made it easier for Cambridge Analytica to win commercial clients.
In both cases, nobody needed an accurate model of you to deceive you into that false belief. If you want to prepare for a future where you don’t get deceived, it would be good to spend a lot of effort on understanding why you were deceived here.
Additionally, we can remove even more of the temptation by favoring distributed storage (e.g IPFS)
IPFS is a data transfer protocol, not a storage system. Content published over IPFS stays available only as long as some node actually pins and hosts it, so it doesn’t remove the need for someone, somewhere, to store the data.
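A minimal sketch of why that distinction matters: an IPFS address is derived from the content itself, so knowing the address tells you nothing about whether anyone is still hosting the bytes. The snippet below uses Python’s standard hashlib to illustrate content addressing in the abstract; the actual CID/multihash encoding IPFS uses is more involved and not reproduced here.

```python
import hashlib

def content_address(data: bytes) -> str:
    # Content addressing in the abstract: the identifier is derived
    # purely from the bytes themselves. (IPFS CIDs wrap a multihash
    # like this in additional encoding, omitted here.)
    return hashlib.sha256(data).hexdigest()

doc = b"my personal data"
addr = content_address(doc)
print(addr)

# Anyone holding the bytes can re-derive the same address, and anyone
# holding the address can verify bytes they receive against it. But the
# address itself carries no guarantee that any node on the network is
# still pinning (storing) those bytes. If every node that pinned them
# garbage-collects the data, the address resolves to nothing:
# a transfer protocol, not storage.
```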
What does the first sentence here have to do with the second? Facebook openly published an experiment with a relatively small effect size. Recommendation algorithms existed before the experiment just as they did afterward.
The campaign to attack Facebook over the experiment was largely a campaign against the idea of open science. Companies run tons of internal A/B tests all the time. What Facebook was criticized for was running an experiment and publishing the results as science.
Talking about the experiment this way is counterproductive if your goal is a healthy epistemic environment.
The best you can hope for is it just gets removed from the public domain.
That’s a pretty strange sentence given that the GDPR gives you the right to request erasure of your data.
I am not (as) worried about 2FA, encryption, and about my data getting hacked away from Google servers. I am also not (as) worried about you selling my data. I am extremely worried about you having a central database of my data at all.
Why aren’t you worried about data being sold? That sounds pretty strange to me, given how openly the US government talks about the kinds of data it buys on the open market. Data that is for sale is likely also available to well-resourced actors like the Chinese government, who actually are interested in large-scale manipulation campaigns.