James Hamilton on how Amazon speeds up AWS networking by implementing only the subset of networking features it actually needs:

But the surprising thing, even to Hamilton, was that network availability went up, not down. And that is because AWS switches and routers only had features that AWS needed in its network. Commercial network operating systems have to cover all of the possible usage scenarios and protocols with tens of millions of lines of code that are difficult to maintain. “The reason our gear is more reliable is that we didn’t take on this harder problem. Any way that wins is a good way to win.”
Is there anything like a general theory of satisficing that tells you when it’s a good idea? It’s reasonably easy to decide in individual cases when you have plenty of specific quantitative information. But suppose you don’t, and all you know are qualitative facts about how something works within a system. Can you still tell whether satisficing is appropriate?
If not, should people just default to satisficing unless there are obvious reasons it will fall short of optimality, or should they do the opposite? I’m inclined to favor the former, but am interested to hear other people’s perspectives.
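To make the trade-off concrete, here is a minimal sketch in Python of the two stopping rules. The candidate pool and the evaluate function are made up, standing in for whatever expensive measurement you care about: an optimizer evaluates every candidate and keeps the best, while a satisficer stops at the first one that clears an aspiration level.

```python
import random

def evaluate(option: int) -> float:
    """Stand-in for an expensive evaluation (e.g., benchmarking a design)."""
    return random.random()

def optimize(options: list[int]) -> tuple[int, float]:
    """Evaluate everything and return the best; cost grows with the pool size."""
    scored = [(opt, evaluate(opt)) for opt in options]
    return max(scored, key=lambda pair: pair[1])

def satisfice(options: list[int], aspiration: float) -> tuple[int, float] | None:
    """Return the first option meeting the aspiration level; stop early."""
    for opt in options:
        score = evaluate(opt)
        if score >= aspiration:
            return opt, score
    return None  # nothing met the bar; the caller might lower the aspiration

options = list(range(1000))
print(optimize(options))        # exhaustive: always 1000 evaluations
print(satisfice(options, 0.9))  # ~10 evaluations on average for this threshold
```

With a high aspiration level the satisficer's cost approaches the optimizer's; with a low one it returns quickly but may leave quality on the table. Deciding where to set that level without quantitative information is exactly the judgment the question above is asking about.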