all you’ll do by adopting a lot of secrecy is slow yourselves down radically, while ensuring that people who are better than you at secrecy, better at penetrating secrecy, better resourced, and better at coordinated action will know nearly everything you do, and will also know many things that you don’t know.
So, there are a few types of secrecy. Here are three:
The sort of secrecy you have with friends when you gossip, which most of the time works fine.
The sort of secrecy where nobody really knows what is being worked on within companies like Apple and Facebook, whereas there’s way more openness about e.g. Google.
The sort of secrecy where you’re trying to protect yourself from foreign governments, which is way harder.
I’m pretty sure secrecy has been key to Apple’s ability to control its brand, and it hasn’t just slowed itself down, and I think it’s plausible to achieve similar levels of secrecy, which has many uses. But what you’re talking about is secrecy from governmental groups actively trying to hack you.
I largely agree that when a major government wants your info, they can get it, though I’m not sure it’s impossible to keep secrets from them with a massive amount of work (I have not thought about it much). I do question your assumption that governments will end up taking over the world; with deeply revolutionary tech like nanotech, AI, and others, different groups could end up taking over instead. So I don’t view things as clearly falling toward the outcome of Chinese/US/etc. hegemony.
I don’t think Apple is a useful model here at all.
I’m pretty sure secrecy has been key for Apple’s ability to control its brand,
Well, Apple thinks so, anyway. They may or may not be right, and “control of the brand” may or may not be important in the first place. But it’s true that Apple can keep secrets to some degree.
and it hasn’t just slowed itself down,
Apple is a unitary organization, though. It has a boundary. It’s small enough that you can find the person whose job it is to care about any given issue, and you are unlikely to miss anybody who needs to know. It has well-defined procedures and effective enforcement. Its secrets have a relatively short lifetime, perhaps two or three years at most.
Anybody who is spying on Apple is likely to be either a lot smaller, or heavily constrained in how they can safely use any secret they get. If I’m at Google and I steal something from Apple, I can’t publicize it internally, and in fact I run a very large risk of getting fired or turned in to law enforcement if I tell it to the wrong person internally.
Apple has no adversary with a disproportionate internal communication advantage, at least not with respect to any secrets that come from Apple.
The color of the next iPhone is never going to be as interesting to any adversary as an X-risk-level AI secret. And if, say, MIRI actually has a secret that is X-risk-level, then anybody who steals it, and who’s in a position to actually use it, is not likely to feel the least bit constrained by fear of MIRI’s retaliation in using it to do whatever X-risky thing they may be doing.
There’s also the sort of secrecy you have when you’ve signed an NDA because you consult for a company. I would expect a person like Nick Bostrom to have access to information about what happens inside DeepMind that’s protected by promises of secrecy.
I can tell you that if you just want to walk into DeepMind (i.e. past the security gate), you have to sign an NDA.