I delay Googling to the last possible moment on purpose. It’s by figuring stuff out by yourself that you really learn :).
-- TeMPOraL
This is often said to be done in the name of simplicity (the ‘user’ is treated as an inept, lazy moron), but I think an additional, more surreptitious reason is to keep profit margins high.
There’s also one much more important reason. To quote A. Whitehead,
Civilization advances by extending the number of important operations which we can perform without thinking about them. Operations of thought are like cavalry charges in a battle — they are strictly limited in number, they require fresh horses, and must only be made at decisive moments.
Humans (right now) just don’t have enough cognitive power to understand every technology in detail. If not for the black boxes, one couldn’t get anything done today.
The real issue is whether we’re willing to peek inside the box when it misbehaves.
And so recommendations for more self-control regulation tend to be based on claims that we are biased to underestimate our problem.
There is something to those claims, given that pretty much every addiction therapy (be it for alcohol, food, porn or something else) starts with admitting to oneself that one has underestimated the problem.
That’s something that I think laypeople never realize about computer science—it’s all really simple things, but combined together at such a scale and pace that in a few decades we’ve done the equivalent of building a cat from scratch out of DNA. Big complex things really can be built out of extremely simple parts, and we’re doing it all the time, but for a lot of people our technology is indistinguishable from magic.
-- wtallis
Well, but it can also be interpreted as a recursive definition expanding to:
Luck is opportunity plus preparation plus opportunity plus preparation plus opportunity plus preparation plus …. ;).
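To make the joke concrete, here’s the recursive reading as a tiny Python sketch (opportunity() and preparation() are hypothetical placeholders of mine; the point is just that the expansion never bottoms out):

```python
def opportunity() -> int:
    return 1  # hypothetical placeholder

def preparation() -> int:
    return 1  # hypothetical placeholder

def luck() -> int:
    # Luck appears in its own definition, so evaluating it expands
    # forever: calling luck() ends in a RecursionError.
    return opportunity() + preparation() + luck()
```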
If the way of thinking is so new, then why should we expect to find stories about it?
To quote the guy this story was about, “there is nothing new under the sun”. At least nothing directly related to our wetware. So we should expect that every now and then people stumbled upon a “good way of thinking”, and when they did, the results were good. They just might not have managed to identify what exactly made the method good, or to replicate it.
Also, as MaoShan said, this is kind of proto-Bayes 101 thinking. What we now have is the same thing, but systematically improved over many iterations.
(that is, that it was known N years ago but didn’t take over the world)?
“Taking over the world” is a complex mix of effectiveness, popularity, luck and cultural factors. You can see this a lot in the domain of programming languages. With ways of thinking it is even more difficult, because, as opposed to programming languages, most people don’t learn them explicitly and don’t evaluate them based on results/“features”.
I like doing math that involves measuring the lengths of numbers written out on the page—which is really just a way of loosely estimating log_10 x. It works, but it feels so wrong.
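Concretely: a d-digit positive integer n satisfies d - 1 ≤ log_10 n < d, so the length of the number on the page pins log_10 n down to within 1. A minimal sketch in Python (the function name is mine):

```python
import math

def log10_by_length(n: int) -> int:
    """Estimate log10(n) by 'measuring the length' of the number:
    a d-digit positive integer n satisfies d - 1 <= log10(n) < d."""
    return len(str(abs(n))) - 1

print(log10_by_length(98765))  # 4
print(math.log10(98765))       # 4.9946...
```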
It has been said that the past is a foreign country. Well, it is certainly inhabited by foreigners, people whose mindset was shaped by circumstances we shy from remembering. The mother of three children who gave birth eight times. The father of four children, the last of whom cost him his wife. Our minds are largely free of such horrors, and not inured to that kind of suffering. That is the progress of technology. That is what is improving the human race.
It is a long, long ladder, and sometimes we slip, but we’ve never actually fallen. That is our progress.
Sometimes I still marvel about how in most time-travel stories nobody thinks of this.
An alternative way of computing this is to not actually discard the future, but to split it off into a separate timeline.
Or maybe also another one, somewhat related to the main post: let the universe compute, in its own meta-time, a fixed point [0] of reality (that is, the whole of time between the start and the destination of the time travel gets recomputed into a form that is internally consistent) and continue from there. You could imagine the universe-computer causally simulating the same period of time again and again until a fixed point is reached, just like the iterative algorithms used to find fixed points of functions.
[0] - http://en.wikipedia.org/wiki/Fixed_point_(mathematics)
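For the function case the comment alludes to, a minimal fixed-point iteration sketch in Python (the tolerance and iteration cap are arbitrary choices of mine):

```python
import math

def fixed_point(f, x0, tol=1e-12, max_iter=10_000):
    """Iterate f from x0 until the output stops changing, i.e. until
    we have (approximately) found x such that f(x) == x."""
    x = x0
    for _ in range(max_iter):
        nxt = f(x)
        if abs(nxt - x) < tol:
            return nxt
        x = nxt
    raise RuntimeError("no fixed point reached within max_iter")

# cos contracts toward its unique fixed point (the Dottie number):
print(fixed_point(math.cos, 1.0))  # ~0.7390851332151607
```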
This whole post strongly reminds me of “A New Kind of Science” [0], where Stephen Wolfram tries to explain the workings of the universe using simple computational structures like cellular automata, network systems, etc. I know that Wolfram is not highly regarded, for many different reasons (mostly related to personal traits), but I got a very similar feeling reading both NKS and this post: that there is something to the idea that the fabric of the universe might turn out to be best described by a simple computational model.
[0] - http://www.wolframscience.com/nksonline/toc.html
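To give a flavour of those “simple computational structures”, here is a minimal elementary cellular automaton (Wolfram’s Rule 30) in Python; the grid size and step count are arbitrary choices of mine:

```python
def step(cells, rule=30):
    """One step of an elementary CA: each cell's next state is the bit
    of `rule` indexed by its (left, self, right) neighbourhood,
    with wrap-around at the edges."""
    n = len(cells)
    return [
        (rule >> (cells[(i - 1) % n] * 4 + cells[i] * 2 + cells[(i + 1) % n])) & 1
        for i in range(n)
    ]

row = [0] * 31
row[15] = 1  # start from a single live cell in the middle
for _ in range(16):
    print("".join("#" if c else "." for c in row))
    row = step(row)
```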
(exercising necromancy again to raise the thread from the dead)
We had this situation during my CS studies, in the numerical methods class and in the metrology class. In both cases, most of the students fudged the data in their reports and/or simply copied from what the previous year had done.
I’ve never seen or heard of such a school, at least not in my country. Maybe vocational schools grade like that, but in the high schools I know, there’s no fitting together, sanding, or measuring anything. It’s just memorizing theory and solving exercises.
That’s the general algorithm for reading STL error messages. I still don’t get why people look at you as if you were a wizard, when all you need to do is quickly filter out the irrelevant 90% of the message. A simple pattern-matching exercise.
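A rough sketch of that filtering, mechanized in Python (the input string is a made-up, error-message-like example, not real compiler output):

```python
import re

def strip_template_noise(msg: str) -> str:
    """Repeatedly delete innermost <...> template argument lists;
    what remains is the small part of the message that matters."""
    prev = None
    while prev != msg:
        prev = msg
        msg = re.sub(r"<[^<>]*>", "", msg)
    return msg

# Made-up, error-like input, for illustration only:
print(strip_template_noise(
    "no match for operator== (operand types are "
    "std::map<std::string, std::vector<int>>::iterator and int)"
))
# -> no match for operator== (operand types are std::map::iterator and int)
```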