No, these are high-cost/low-return existential risk reduction techniques. Major corporations and governments already have very high incentive to protect their networks, but despite spending billions of dollars, they’re still being frequently penetrated by human attackers, who are not even necessarily professionals. Not to mention the hundreds of millions of computers on the Internet that are unprotected because their owners have no idea how to do so, or they don’t contain information that their owners consider especially valuable.
I got into cryptography partly because I thought it would help reduce the risk of a bad Singularity. But while cryptography turned out to work relatively well (against humans anyway), the rest of the field of computer security is in terrible shape, and I see little hope that the situation would improve substantially in the next few decades.
What do you think of the object-capability model, and of removing ambient authority in general?
That’s outside my specialization of cryptography, so I don’t have too much to say about it. I do remember reading about the object-capability model and the E language years ago, and thinking that it sounded like a good idea, but I don’t know why it hasn’t been widely adopted yet. I don’t know if it’s just inertia, or whether there are some downsides that its proponents tend not to publicize.
In any case, it seems unlikely that any security solution can improve the situation enough to substantially reduce the risk of a bad Singularity at this point, without a huge cost. If the cause of existential-risk reduction had sufficient resources, one project ought to be to determine the actual costs and benefits of approaches like this and whether it would be feasible to implement (i.e., convince society to pay whatever costs are necessary to make our networks more secure), but given the current reality I think the priority of this is pretty low.
Thanks. I just wanted to know if this was the sort of thing you had in mind, and whether you knew any technical reasons why it wouldn’t do what you want.
This is one thing I keep a close-ish eye on. One of the major proponents of this sort of security has recently gone to work for Microsoft on their research operating systems. So it might come along in a bit.
As to why it hasn’t caught on: it’s partly inertia, and partly that it requires more user interaction with, and understanding of, the system than ambient authority does. Good UI and metaphors can decrease that cost, though.
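The distinction can be sketched in a few lines of code. This is an illustrative Python sketch (not E or any real ocap system): under ambient authority, any function can reach any resource through globally available operations, whereas in capability style a function can only touch the specific objects it was explicitly handed. The names (`AppendOnlyLog`, the log path) are hypothetical.

```python
# Ambient authority: the function's power comes from globally
# available operations, not from anything the caller granted it.
def log_ambient(message):
    # Nothing stops this code from opening *any* path on the system.
    with open("/tmp/app.log", "a") as f:  # hypothetical path
        f.write(message + "\n")

# Capability style: authority is an object reference, explicitly passed.
class AppendOnlyLog:
    """A capability granting append access to one sink, and nothing else."""
    def __init__(self, sink):
        self._sink = sink  # e.g. a list, or a wrapper around one file

    def append(self, message):
        self._sink.append(message)

def log_with_capability(log, message):
    # This function can only affect what it was given; auditing its
    # authority means auditing its arguments.
    log.append(message)

records = []
log_with_capability(AppendOnlyLog(records), "hello")
```

The usability cost mentioned above shows up here: someone (a user, or a policy layer) has to decide which capabilities to hand to which component, rather than letting everything run with the user's full ambient authority.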
The ideal would be a self-maintaining computer system with this sort of security model. However, a good self-maintaining system might be dangerously close to a self-modifying AI.
There’s also a group of proponents of this style working on Caja at Google, including Mark Miller, the designer of E. And some people at HP.
Actually, all these people talk to one another regularly. They don’t have a unified plan or a single goal, but they collaborate with one another frequently. I’ve left out several other people who are also trying to find ways to push in the same direction. Just enough names and references to give a hint. There are several mailing lists where these issues are discussed. If you’re interested, this is probably the one to start with.
Sadly, I suspect this moves things backwards rather than forwards. I was really hoping that we’d see Coyotos one day, which now seems very unlikely.
I meant it more as an indication that Microsoft is already working in the direction of better-secured OSes, rather than his being a pivotal move. Coyotos might get revived when the open-source world sees what MS produces and needs to play catch-up.
That assumes MS ever goes far enough that the FLOSS world feels any gap that could be caught up.
MS rarely does so; the chief fruit of two decades of Microsoft Research’s sponsorship of major functional-language researchers like Simon Marlow and Simon Peyton Jones seems to be… C# and F#. The former is your generic quasi-OO imperative language like Python or Java, with a few FPL features sprinkled in, and the latter is a warmed-over OCaml: it can’t even make MLers feel like they need to catch up, much less Haskellers or FLOSS users in general.
The FPL OSS community is orders of magnitude more vibrant than OSS secure-operating-system research. I don’t know of any living projects that use the object-capability model at the OS level (there is plenty going on at the language level and above).
For some of the background, Rob Pike wrote an old paper on the state of systems-level research.