Here is my current take on security for the site:
We pay enough attention to security that I think it’s unlikely we will accidentally dump our whole database to the internet.
I think any sophisticated attacker who is willing to spend something like 50–100 hours trying to get into our system can probably find some way to get access to most things in our database.
We do the obvious things like hashing and salting passwords, so I don't know of any way for an attacker to reverse-engineer reasonably strong user passwords (though having access to the hashes is still often useful).
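For concreteness, here is a minimal sketch of what salted password hashing typically looks like, assuming a bcrypt-style library. This is illustrative, not necessarily the exact library or parameters the site uses.

```typescript
// Minimal sketch of salted password hashing (illustrative, not necessarily
// what the site actually runs), using the widely-used `bcrypt` npm package,
// which generates a per-password salt and embeds it in the resulting hash.
import bcrypt from "bcrypt";

const SALT_ROUNDS = 12; // work factor; higher = slower to brute-force

// Store only the hash; the plaintext password is never persisted.
export async function hashPassword(password: string): Promise<string> {
  return bcrypt.hash(password, SALT_ROUNDS);
}

// At login, compare the submitted password against the stored hash.
export async function verifyPassword(password: string, storedHash: string): Promise<boolean> {
  return bcrypt.compare(password, storedHash);
}
```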
The JavaScript ecosystem in general, and the way we have been using it, has a lot of attack surface, and I expect there are some vulnerabilities in libraries we use on the backend that allow some kind of code execution, though I do think it would take a good amount of work to find them.
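One low-effort way to surface known issues of this kind is to check dependencies against published advisories, for example by parsing `npm audit --json`. The script below is a generic illustration of that idea, not a description of our actual process.

```typescript
// Illustrative check: fail loudly if `npm audit` reports high or critical
// advisories in the installed dependency tree.
import { execSync } from "node:child_process";

function checkDependencies(): void {
  // `npm audit --json` exits non-zero when vulnerabilities are found,
  // so capture its output even when the command "fails".
  let output: string;
  try {
    output = execSync("npm audit --json", { encoding: "utf8" });
  } catch (err: any) {
    output = err.stdout ?? "{}";
  }
  const report = JSON.parse(output);
  const { high = 0, critical = 0 } = report.metadata?.vulnerabilities ?? {};
  if (high + critical > 0) {
    console.error(`Found ${high} high and ${critical} critical advisories`);
    process.exit(1);
  }
  console.log("No high/critical advisories found");
}

checkDependencies();
```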
I think it’s reasonable to assume that if a nation state or sophisticated group of attackers wants access to your PMs, votes, and drafts, they can probably get it, whether by attacking the client, compromising an admin or moderator's machine, or directly hacking our server.
This overall puts somewhat of an upper bound on how useful additional focus on security would be, since I currently think it would be very hard to harden our system against sophisticated attackers. I would not currently treat LW or the AI Alignment Forum as particularly secure, and if you want to have conversations with individuals that should not be accessible to potential sophisticated attackers, I would use something like Signal. In general, there isn’t a lot of private information on LessWrong (outside of PMs, votes, and maybe drafts), so I don’t treat security as a top priority for the site. I still think a good amount about security, and I am not treating it as a super low priority either, but it isn’t one of the things the whole site was architected around.
I am not currently very worried about astroturfing or trolling, mostly because we are still capable of reviewing all content posted to the site, and can individually review each submission, and I think that’s a pretty high barrier for language models to overcome. If even I can’t tell the difference between a bot and a person anymore, then yeah, I am not super sure what to do. We could do some kind of identity verification with passports and such, but I would prefer to avoid that.
We run some basic statistics on vote patterns and would notice if some new set of accounts suddenly started voting a lot and influencing site content. I would also likely notice myself that something is off with what kinds of things are getting upvoted and downvoted, and would investigate.
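As a hypothetical sketch of the kind of check this involves: flag accounts that are both newly created and suddenly casting an outsized number of votes. Field names and thresholds below are illustrative, not our actual schema or rules.

```typescript
// Hypothetical vote-pattern check: flag recently created accounts that are
// casting an unusually large number of votes within a short window.
interface Vote { userId: string; castAt: Date; }
interface User { _id: string; createdAt: Date; }

function flagSuspiciousVoters(
  users: User[],
  votes: Vote[],
  opts = { newAccountDays: 14, voteThreshold: 100, windowDays: 7 },
): string[] {
  const now = Date.now();
  const dayMs = 24 * 60 * 60 * 1000;
  const recentVotes = votes.filter(v => now - v.castAt.getTime() < opts.windowDays * dayMs);

  // Count recent votes per account.
  const counts = new Map<string, number>();
  for (const v of recentVotes) {
    counts.set(v.userId, (counts.get(v.userId) ?? 0) + 1);
  }

  // Flag accounts that are both new and voting heavily.
  return users
    .filter(u => now - u.createdAt.getTime() < opts.newAccountDays * dayMs)
    .filter(u => (counts.get(u._id) ?? 0) >= opts.voteThreshold)
    .map(u => u._id);
}
```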
The Internet Archive tends to have backups of all publicly available pages.
We handle our own backups on AWS, and I tend to download one every few months to some hard drive somewhere.
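For illustration, pulling a periodic backup out of S3 onto a local drive might look like the sketch below. The bucket and key names are placeholders, and the actual backup arrangement (which AWS service, how snapshots are named) may differ.

```typescript
// Illustrative sketch of downloading a database backup from S3 to a local
// drive using the AWS SDK v3. Bucket, key, and paths are placeholders.
import { S3Client, GetObjectCommand } from "@aws-sdk/client-s3";
import { createWriteStream } from "node:fs";
import { pipeline } from "node:stream/promises";
import type { Readable } from "node:stream";

const s3 = new S3Client({ region: "us-east-1" });

async function downloadBackup(bucket: string, key: string, dest: string): Promise<void> {
  const { Body } = await s3.send(new GetObjectCommand({ Bucket: bucket, Key: key }));
  await pipeline(Body as Readable, createWriteStream(dest));
}

// e.g. downloadBackup("example-site-backups", "db-dump-2024-01-01.gz", "/mnt/backup-drive/db-dump.gz");
```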
Changing hosting providers would be annoying, but not super hard. My guess is it would take us less than three days to be back up and running if AWS no longer likes us.