i largely agree in context, but i think it’s not an entirely accurate picture of reality.
there are definite, well-known, documented methods for increasing the resources available to the brain, as well as for doing the equivalent of decompilation, debugging, etc… sure, those methods are a lot less reliable than what we have available for most simple computer programs.
once you get to debugging or adding resources to computing systems which even remotely approximate the complexity of the brain, though, that difference becomes much smaller than you'd expect. in theory you should be able to debug large, complex computing systems, figure out where to add which resource, and decide which portion to rewrite/replace; in practice, for most such systems, i suspect the success rate is much lower than we'd expect.
try, for example, comparing success rates/timelines/etc… for psychotherapists helping broken brains rewrite themselves, vs. success rates for startups trying to correctly scale their computer systems without going bankrupt. and those rates are for computer systems which are a lot less complex, in both implementation and function, than most brains. sure, the psychotherapy methods seem much cruder, and their success rates are lower than we'd like to admit; but i wouldn't be surprised if they easily compete with the success rates for fixing broken computer systems, if not outperform them.
try, for example, comparing success rates/timelines/etc… for psychotherapists helping broken brains rewrite themselves, vs. success rates for startups trying to correctly scale their computer systems without going bankrupt.
But startups seem to do that pretty routinely. One does not hear about the ‘Dodo bird verdict’ for startups trying to scale. Startups fail for many reasons, but I’m having a hard time thinking of any, ever, for which the explanation was insurmountable performance problems caused by scaling.
(Wait, I can think of one: Friendster’s demise is usually blamed on the social network being so slow due to perpetual performance problems. On the other hand, I can probably go through the last few months of Hacker News and find a number of post-mortems blaming business factors, a platform screwing them over, bad leadership, lack of investment at key points, people just plain not liking their product...)
in retrospect, that's a highly field-specific bit of information, difficult to obtain without significant exposure; it was probably a bad example.
for context:
friendster failed at 100m+ users; that's several orders of magnitude more attention than the vast majority of startups ever obtain before failing, and a very unusual point at which to fail due to scalability problems (with that much attention, and that much prior experience scaling, further scaling should really be a function of adequate funding more than anything else).
there's a selection effect for startups, at least the ones i've seen so far: ones that fail to adequately scale almost never make it into the public eye. since failing to scale is a very embarrassing thing to admit publicly after the fact, the information is unlikely to become publicly known for any startup unless the problem gets independently, externally publicized.
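to make that selection effect concrete, here's a minimal toy simulation (every rate below is a made-up assumption purely for illustration, not data): if scaling failures happen at some true rate but are much less likely to be admitted in a public post-mortem than other failure causes, the publicly visible post-mortems will badly under-represent them.

```python
import random

random.seed(0)

# toy model of the selection effect described above.
# every number here is a hypothetical assumption chosen for illustration.
N = 100_000                  # failed startups in the toy population
P_SCALING_FAILURE = 0.15     # assumed true share of failures caused by scaling
P_PUBLISH_IF_SCALING = 0.02  # assumed chance a scaling failure gets a public post-mortem
P_PUBLISH_IF_OTHER = 0.10    # assumed chance any other failure gets one

published_scaling = 0
published_other = 0
for _ in range(N):
    scaling = random.random() < P_SCALING_FAILURE
    publish_p = P_PUBLISH_IF_SCALING if scaling else P_PUBLISH_IF_OTHER
    if random.random() < publish_p:
        if scaling:
            published_scaling += 1
        else:
            published_other += 1

total = published_scaling + published_other
print(f"assumed true share of scaling failures: {P_SCALING_FAILURE:.0%}")
print(f"share among public post-mortems:        {published_scaling / total:.0%}")
```

with those invented numbers, roughly 15% of failures are scaling failures, but only about 3–4% of the post-mortems you'd actually get to read would say so.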
i'd expect any startup that makes it past the O(1m active users) point and is then noticeably impeded by performance problems to be unusual; maybe such startups get there by cleverly pivoting around their scalability problems (or otherwise dancing around them/putting them off), with the hope of buying, or getting bought, out of the problems later on.