I remember (from listening to a bunch of podcasts by German hackers from the mid-00s) a strong vibe that the security of
software systems at the time and earlier was definitely worse than what
would have been optimal even for the people making the software (and definitely
not safe enough for the users!).
I wonder (1) whether that is true and, if yes, (2) what led to it
happening!
Maybe companies were just myopic
when writing software then, and could’ve predicted the security problems
but didn’t care?
Or was the error in predicting the importance of security just
an outlier, in that companies and industries on average correctly predict
the importance of safety & security, and this was just a bad draw from
the distribution?
Or is this a common occurrence? If so,
one might chalk it up to (1) information asymmetries
(normal users don't appreciate the importance of software security,
let alone being able to evaluate the quality of a given piece of software)
or (2) principal-agent problems within firms (managers had a personal incentive to cut corners on security).
Another reason might be that security flaws in lower-level software
usually become a reputational
externality
borne by end-user software: sure, in the end Intel's branch predictor is
responsible for Meltdown and
Spectre, and for cache behaviour loose enough that
rowhammer.js can flip bits
from inside the browser, but which end-user will blame Intel rather than
thinking "and then Chrome
crashed and they wanted my money"?
This is, of course, in the context of the development of AI, and the
common argument that “companies will care about single-single alignment”.
The possible counterexample of software security engineering until the
mid-00s seemed like a good test case to me, but on reflection I'm
not so sure anymore.