Optimal User-End Internet Security (Or, Rational Internet Browsing)
Hacking and cracking, Internet security, cypherpunk: I find these topics as fascinating as they are over my head.
Yet there are still some things that can be said to a layman, especially by the ever-poignant Randall Munroe:
https://www.xkcd.com/936/
https://www.xkcd.com/792/
I’m guilty on both charges (reusing poorly formulated passwords, not stealing them).
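For the first charge, the fix suggested by the xkcd strip is a passphrase of a few randomly chosen common words rather than a short, "clever" password. Below is a minimal sketch of that idea in Python; the wordlist path is a placeholder (any plain-text list of one word per line, such as the EFF long wordlist, would do), and the entropy figure assumes the words really are drawn uniformly at random.

```python
import math
import secrets

# Placeholder path: assumes a plain-text wordlist, one word per line.
WORDLIST_PATH = "wordlist.txt"

def generate_passphrase(num_words: int = 4) -> str:
    """Build an xkcd-936-style passphrase from randomly chosen words."""
    with open(WORDLIST_PATH) as f:
        words = [line.strip() for line in f if line.strip()]
    # secrets.choice uses the OS cryptographic RNG, unlike random.choice.
    chosen = [secrets.choice(words) for _ in range(num_words)]
    # Entropy if each word is picked uniformly: num_words * log2(list size).
    bits = num_words * math.log2(len(words))
    print(f"~{bits:.0f} bits of entropy from {num_words} words "
          f"drawn from a {len(words)}-word list")
    return " ".join(chosen)

if __name__ == "__main__":
    print(generate_passphrase())
```

With a 2048-word list and four words, that works out to roughly 44 bits of entropy, which is the figure the comic itself uses. For the second charge, the usual advice is a password manager, so that a breach of one site does not compromise the rest.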
These arguments may just be the tip of the iceberg of a [much larger problem that needs optimizing](https://secure.wikimedia.org/wikipedia/en/wiki/Social_engineering_%28security%29): social engineering, namely [how it can be used against our interests](http://wiki.lesswrong.com/wiki/Dark_arts) (to quote [Person 2](http://yudkowsky.net/singularity/aibox), “It doesn’t matter how much security you put on the box. Humans are not secure.”). I get the feeling that I’m not managing my risks on the Internet as well as I should.
So the questions I pose are: In what ways do our cognitive biases come into play when we surf the Internet and interact with others? Which of these biases can we actively protect against, and how?
I don’t know how usefully I can contribute, but I hope that many on Less Wrong can.