I and other researchers were told not even to ask each other what we were working on, on the grounds that if someone were working on a secret project, being asked might force them to reveal that fact. Instead, we were supposed to discuss our projects with an executive, who could connect people working on similar projects.
Trying to maintain secrecy within the organization like this (as contrasted to secrecy from the public) seems nuts to me. Certainly, if you have any clever ideas about how to build an AGI, you wouldn’t want to put them on the public internet, where they might inspire someone who doesn’t appreciate the difficulty of the alignment problem to do something dangerous.
But one would hope that the people working at MIRI do appreciate the difficulty of the alignment problem (as a real thing about the world, and not just something to temporarily believe because your current employer says so). If you want the alignment-savvy people to have an edge over the rest of the world (!), you should want them to be maximally intellectually productive, which naturally requires the ability to talk to each other without the overhead of seeking permission from a designated authority figure. (Where the standard practice of bottlenecking information and decisionmaking on a designated authority figure makes sense if you’re a government or a corporation trying to wrangle people into serving the needs of the organization against their own interests, but I didn’t think “we” were operating on that model.)
Secrecy is not about good trustworthy people who get to have all the secrets versus bad untrustworthy people who don’t get any. This frame may itself be part of the problem; a frame like that makes it incredibly socially difficult to implement standard practices.
To attempt to make this point more legible:
Standard best practice in places like the military and intelligence organizations, where lives depend on secrecy being kept from outsiders (but not from insiders), is to compartmentalize and maintain “need to know.” Similarly, in information security, the best practice is to give people access only to what they need, to granularize access to different services and data, and to differentiate read / write / delete access. Even in regular organizations, lots of information is need-to-know: HR complaints, future budgets, estimates of a publicly traded company’s profitability before quarterly reports, and so on. This is normal, and even though it’s costly, those costs are necessary.
This type of granular control isn’t intended to hamper internal productivity; it is meant to limit the damage from failures of secrecy and from attempts to exploit the system by leveraging non-public information, both of which are inevitable, since the cost of preventing failures grows very quickly as the tolerated risk of failure approaches zero. For all of these reasons, the ideal is to have trustworthy people who have low but non-zero probabilities of slipping up on secrecy. You then ask them not to share things that aren’t necessary for others’ work, and you allow limited exceptions and discretion where they are useful. The alternative, of “good trustworthy people [who] get to have all the secrets versus bad untrustworthy people who don’t get any,” simply doesn’t work in practice.
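To make the “need to know” and read / write / delete point concrete, here is a minimal Python sketch of default-deny, per-resource grants (the people, resources, and grants are hypothetical, purely illustrative):

```python
from enum import Flag, auto

class Access(Flag):
    # Granular permissions: read access does not imply write or delete.
    NONE = 0
    READ = auto()
    WRITE = auto()
    DELETE = auto()

# Hypothetical need-to-know grants: each person gets only the access their own
# work requires, per resource, rather than blanket organization-wide trust.
GRANTS = {
    ("alice", "research-notes"): Access.READ | Access.WRITE,
    ("bob", "research-notes"): Access.READ,
    ("bob", "quarterly-budget"): Access.READ | Access.WRITE | Access.DELETE,
}

def is_allowed(person: str, resource: str, needed: Access) -> bool:
    # Default-deny: anything not explicitly granted is refused.
    granted = GRANTS.get((person, resource), Access.NONE)
    return (granted & needed) == needed

assert is_allowed("alice", "research-notes", Access.WRITE)
assert not is_allowed("bob", "research-notes", Access.WRITE)      # read-only grant
assert not is_allowed("alice", "quarterly-budget", Access.READ)   # no grant at all
```

The relevant design choice is the fail-closed lookup: a missing grant is simply denied, which is what bounds the blast radius when someone does slip up.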
Thanks for the explanation. (My comment was written from my idiosyncratic perspective of having been frequently intellectually stymied by speech restrictions, and not having given much careful thought to organizational design.)
I would imagine that most military and intelligence organizations have psychiatrists and therapists on staff whom employees can see when they run into psychological trouble because of their work projects, and with whom they can share information about those projects.
Especially when operating in an environment that brings people into contact with issues that have caused some people to be institutionalized, having only a superior to share information with, but nobody to help deal with the psychological issues arising from the work, seems like a flawed system.
I agree that there is a real issue here that needs to be addressed, and I wasn’t claiming that there is no reason to have support—just that there is a reason to compartmentalize.
And yes, US military use of mental health resources is off the charts. But in the intelligence community there are some really screwed-up incentives: having a mental health issue can get your clearance revoked. You won’t necessarily lose your job, but the impact on your career is a strong reason to avoid mental health care, and my (second-hand, not especially reliable) understanding is that this is a real problem.
Seconding this: When I did classified work at a US company, I got the strong impression that (1) if I had any financial or mental health problems, I needed to tell the security office immediately; and (2) if I did so, the security office would immediately tell the military, and the military would potentially revoke my security clearance. Note that some people get fired immediately if they lose their clearance. That wasn’t true for me, but losing my clearance would certainly have hurt my future job prospects.
My strong impression was that neither the security office nor anyone else had any intention of helping us employees with our financial or mental health problems. Nope, their only role was to exacerbate personal problems, not solve them. There’s an obvious incentive problem here: why would anyone disclose incipient financial or mental health problems to the company before they blow up? But I think from the company’s perspective, that’s a feature, not a bug. :-P
(As it happens, neither I nor any of my close colleagues had financial or mental health problems while I was working there. So it’s possible that my impressions are wrong.)
I don’t know specifically about mental health, but I do know specific stories of financial problems being treated as security concerns. And I don’t think I need to explain how incredibly horrific it is for an employee to tell their employer that they are in financial trouble, and to be told that they have lost their job and income because of it.
I think it’s actually quite hard to have everyone in an organization trust everyone else, or to hire only people who would be trusted by everyone in the organization. So you might want some sort of tiered system, where (perhaps) the researchers all trust each other, but trust only the engineers they work with, and don’t trust any of the ops staff; that way, hiring an engineer requires only one researcher to trust them.
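If it helps to make that concrete, here is a toy Python sketch of one such tiered scheme (the names, tiers, and single-voucher rule are my own hypothetical assumptions, not a description of any real organization’s policy):

```python
# A toy model of the tiered idea above. Names, roles, and the rule that one
# researcher vouches for each engineer are assumptions made for illustration.

RESEARCHERS = {"carol", "dana"}          # researchers are transparent to one another
ENGINEER_VOUCHED_BY = {"erin": "carol"}  # engineer -> researcher who vouched for them
OPS_STAFF = {"frank"}                    # ops staff get no access to research internals

def can_see_research(person: str, project_owner: str) -> bool:
    """Can `person` read the research notes owned by `project_owner`?"""
    if person in RESEARCHERS:
        return True   # full internal transparency among researchers
    if ENGINEER_VOUCHED_BY.get(person) == project_owner:
        return True   # engineers see only the project they were vouched into
    return False      # everyone else, including ops, stays need-to-know

assert can_see_research("dana", "carol")        # researcher reading a peer's project
assert can_see_research("erin", "carol")        # vouched engineer
assert not can_see_research("erin", "dana")     # not vouched for this project
assert not can_see_research("frank", "carol")   # ops staff
```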
[On net I think the balance is probably still in favor of “internal transparency, gated primarily by time and interests instead of security clearance”, but it’s less obvious than it originally seems.]
The steelman that comes to mind is that by the time you actually know that you have a dangerous secret, it’s either too late or risky to set up a secrecy policy. So it’s useful to install secrecy policies in advance. The downsides that might be currently apparent are bugs that you still have the slack to resolve.
It depends. For example, if you have an intern program, the interns probably aren’t especially trusted, since those decisions generally don’t receive the same degree of scrutiny as full employment.
And ops people probably don’t need to know the details of the technical research.
If it becomes known to one of a few powerful intelligence agencies that MIRI is working on an internal project they believe is likely to create an AGI within a year or two, that agency will hack or surveil MIRI to get all the secrets.
To the extent that MIRI’s theory of change is that they are going to build an AGI on their own, independent of any outside organization, a high degree of secrecy is likely necessary for that plan to work.
I think it’s highly questionable that MIRI will be able to develop AGI faster than organizations like DeepMind (especially when its researchers don’t talk to each other), so it’s unclear to me whether the plan makes sense; but it seems hard to imagine that plan without secrecy.