The situation is far worse than that. With a compiled program, at least, you can add more memory or run it on a faster computer, disassemble the code and see at which step things go wrong, rewind if there’s a problem, interface with programs you’ve written, etc. If compiled programs really were that bad, hackers would have already won (since security researchers wouldn’t be able to take apart malware), DRM would work, and no emulators for undocumented devices would exist.
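To make the “disassemble the code” point concrete: a minimal sketch, using nothing beyond Python’s standard-library dis module, of reading off step by step what a compiled function does (buggy_average is a made-up example, not anything from the thread):

```python
import dis

def buggy_average(xs):
    # Hypothetical bug for illustration: divides by len(xs) + 1 instead of len(xs).
    return sum(xs) / (len(xs) + 1)

# Print the compiled bytecode: every step the interpreter will execute,
# including exactly where the bad divisor enters the computation.
dis.dis(buggy_average)
```

Each instruction prints with its offset, so the bad divisor is visible right in the listing; a reversible debugger such as rr covers the “rewind” part.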
The situation with the mind is many orders of magnitude worse.
Also, I’d quibble with “we don’t know why”. The word I’d use is “how”. We know why, perhaps not in detail (although we sort of know how, in even less detail).
I largely agree in context, but I think it’s not an entirely accurate picture of reality.
There are definite, well-known, documented methods for increasing the resources available to the brain, as well as for doing the equivalent of decompilation, debugging, etc. Sure, the methods are a lot less reliable than what we have available for most simple computer programs.
Also, once you get to debugging or adding resources to computing systems that even remotely approximate the complexity of the brain, that difference becomes much smaller than you’d expect. In theory you should be able to debug large, complex computing systems and figure out where to add which resource, or which portion to rewrite or replace; for most such systems, though, I suspect the success rate is much lower than what we get for the brain.
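As a sketch of what “figure out where to add which resource” looks like while the system is still small enough to be legible (a toy example of my own, assuming a CPU-bound hot spot), Python’s built-in cProfile ranks the components worth rewriting or throwing hardware at:

```python
import cProfile
import pstats

def slow_component(n):
    # CPU-bound hot spot: the candidate for a rewrite or for more hardware.
    return sum(i * i for i in range(n))

def fast_component(n):
    # Closed-form arithmetic: effectively free by comparison.
    return n * (n - 1) // 2

def system(n):
    return slow_component(n) + fast_component(n)

# Profile the whole "system" and rank functions by cumulative time;
# the report shows where resources (or a rewrite) would actually help.
profiler = cProfile.Profile()
profiler.run("system(2_000_000)")
pstats.Stats(profiler).sort_stats("cumulative").print_stats(5)
```

The catch, per the point above, is that this legibility degrades quickly: the profile of a large distributed system rarely localizes the problem this neatly.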
Try, for example, comparing success rates, timelines, etc. for psychotherapists helping broken brains rewrite themselves vs. success rates for startups trying to correctly scale their computer systems without going bankrupt. And these rates are in the context of computer systems that are a lot less complex, in both implementation and function, than most brains. Sure, the psychotherapy methods seem much cruder, and the rates are lower than we’d like to admit, but I wouldn’t be surprised if they easily compete with the success rates for fixing broken computer systems, if not outperform them.
But startups seem to do that pretty routinely. One does not hear about the ‘Dodo bird verdict’ for startups trying to scale. Startups fail for many reasons, but I’m having a hard time thinking of any, ever, for which the explanation was insurmountable performance problems caused by scaling.
(Wait, I can think of one: Friendster’s demise is usually blamed on the social network being so slow due to perpetual performance problems. On the other hand, I can probably go through the last few months of Hacker News and find a number of post-mortems blaming business factors, a platform screwing them over, bad leadership, lack of investment at key points, people just plain not liking their product...)
In retrospect, that’s a highly field-specific bit of information, difficult to obtain without significant exposure; it’s probably a bad example.
For context:
Friendster failed at 100m+ users. That’s several orders of magnitude more attention than the vast majority of startups ever obtain before failing, and a very unusual point at which to fail due to scalability problems (with that much attention, and that much scaling experience, further scaling should really be a function of adequate funding more than anything else).
There’s a selection effect for startups, at least the ones I’ve seen so far: the ones that fail to scale adequately almost never make it into the public eye. Since failing to scale is a very embarrassing thing to admit publicly after the fact, the information is unlikely to become publicly known unless the problem gets independently and externally publicized.
I’d expect any startup that makes it past the O(1m active users) point and then proceeds to be noticeably impeded by performance problems to be unusual; maybe they make it there by cleverly pivoting around their scalability problems (or otherwise dancing around them or putting them off), in the hope of buying (or getting bought) out of the problems later on.
Ah, a compiled program running on limited computing resources (memory, CPU, etc.). I think the metaphor assumes that implicitly. Perhaps it results in a leaky abstraction for most others (i.e., people not working with computers), but I don’t really see it as a problem.
Agreed, “how” is more accurate than “why”.
“Why” usually (if not always, in the physical world) resolves to “how”, with one notable exception.
Eventually the truth/reality/answer is indifferent to the phrasing of the question (as “why” or “how”). I do think phrasing it as “how” makes it easier to answer (in the instrumental sense) than “why”. Also, what is the exception? I’m not aware of it; please point me to it.
“Why are you in the hospital?”—“Because I was injured when a car hit me.”
“Why did the car hit you?”—“Because the driver was drunk and I was standing at the intersection.”
“Why was the driver drunk?” and “Why were you standing at the intersection?” and so on and so forth.
Every “why” question about something occurring in the natural world is answered by going one (or more) levels down in granularity, describing a high-level phenomenon via its components, typically lower-level phenomena.
This isn’t unlike deriving one corollary from another. You’re climbing back* the derivation tree towards the axioms, so to speak. It’s the same in any formal system; the math analogy would be someone asking you “why does this corollary hold?”, which you’d answer by tracing it back to the nearest theorem. Then “why does this theorem hold?” would be answered by describing its lower-level* lemmata. Back we go, ever towards the axioms.
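A toy formalization of that picture (a sketch of my own in Lean 4, relying only on the core Nat.zero_add lemma): each “why does this hold?” is discharged by the result one level closer to the axioms.

```lean
-- "Axiom level": 0 + n = n is not definitional for Nat; it is proved
-- by induction from the definition of addition (here via Nat.zero_add).
theorem base (n : Nat) : 0 + n = n := Nat.zero_add n

-- "Why does the theorem hold?" Because of `base`, applied twice.
theorem thm (n : Nat) : 0 + (0 + n) = n := by
  rw [base, base]

-- "Why does the corollary hold?" Because of `thm`, one level further out.
theorem cor (n : Nat) : 0 + (0 + (0 + n)) = n := by
  rw [base]
  exact thm n
```

Asking “why?” of each statement walks you down the chain cor → thm → base, exactly the climb back towards the axioms described above.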
All these are more aptly described as “how”-questions; “how” is the scientific question, since what we’re doing is finding descriptions, not reasons, in some sense.
Of course you could just settle such distinctions by dictionary and then use “why” and “how” interchangeably in daily usage, which is fine. But it’s illuminating to notice the underlying logic.
Which leaves as the only truly distinct “why”-question “why those axioms?”, which in the real world is typically phrased as “why anything at all?”. Krauss tries to reduce that to a “how” question in A Universe from Nothing, as does the Tegmark multiverse; neither works, except to smuggle in one more descriptive layer in front of the axioms.
There is a good case to be made that this one remaining true “why”-question, which does not reduce to merely some one-level-lower description, is actually ill-formed and doesn’t make sense. The territory just provides us with evidence; the model we build to compress that evidence implicitly surmises the existence of underlying axioms in the territory. But why bother with that single remaining “why”-question when the answer is forever outside our reach?
*(We know real trees are upside down, unlike these strange biological things in that strange place outside our window.)
I’m with Douglas Adams on this one: 42 is the answer, we don’t know the question. Seriously though, I’ve gotten to a stage where I don’t wonder much about that one remaining “why”-question anymore*. Thanks for the clarification though.
*(I used to wonder about it, some 10 years ago, though.)