Cleo Scrolls
Richard Hamming:
In spite of the difficulty of predicting the future and that unforeseen technological inventions can completely upset the most careful predictions, you must try to foresee the future you will face. To illustrate the importance of this point of trying to foresee the future I often use a standard story.
It is well known the drunken sailor who staggers to the left or right with n independent random steps will, on the average, end up about √n steps from the origin. But if there is a pretty girl in one direction, then his steps will tend to go in that direction and he will go a distance proportional to n. In a lifetime of many, many independent choices, small and large, a career with a vision will get you a distance proportional to n, while no vision will get you only the distance √n. In a sense, the main difference between those who go far and those who do not is some people have a vision and the others do not and therefore can only react to the current events as they happen.
One of the main tasks of this course is to start you on the path of creating in some detail your vision of your future. If I fail in this I fail in the whole course. You will probably object that if you try to get a vision now it is likely to be wrong—and my reply is from observation I have seen the accuracy of the vision matters less than you might suppose, getting anywhere is better than drifting, there are potentially many paths to greatness for you, and just which path you go on, so long as it takes you to greatness, is none of my business. You must, as in the case of forging your personal style, find your vision of your future career, and then follow it as best you can. No vision, not much of a future.
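Hamming's √n versus n contrast is easy to check numerically. Below is a minimal Python sketch (my own illustration, not from the lecture); the `bias` parameter is a made-up stand-in for the pull of the "pretty girl in one direction". An unbiased walk of n steps ends, on average, about √(2n/π) ≈ 0.8√n from the origin, while a walk with per-step bias b drifts a distance of roughly 2bn.

```python
import random
import math

def walk(n, bias=0.0):
    """Simulate a 1-D random walk of n steps.

    bias shifts the probability of stepping in the positive
    direction (0.0 = unbiased coin flip).
    """
    pos = 0
    for _ in range(n):
        pos += 1 if random.random() < 0.5 + bias else -1
    return pos

n, trials = 10_000, 500

# Unbiased: average distance from the origin grows like sqrt(n).
# Theory predicts about sqrt(2n/pi) ~ 0.8 * sqrt(n).
unbiased = sum(abs(walk(n)) for _ in range(trials)) / trials

# Biased (a "vision"): expected distance grows linearly,
# roughly 2 * bias * n.
biased = sum(abs(walk(n, bias=0.05)) for _ in range(trials)) / trials

print(f"sqrt(n)           = {math.sqrt(n):.0f}")   # 100
print(f"unbiased avg dist = {unbiased:.0f}")       # ~80
print(f"biased avg dist   = {biased:.0f}")         # ~1,000 = 0.1 * n
```

Even a 5% bias per step moves the walker an order of magnitude farther than pure drift, which is the whole point of the metaphor.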
This is also why HPMOR! Harry is worried about grey goo, wants to work in nanotech, and is only vaguely interested in AI; I think those were Eliezer’s beliefs in about 1995 (he would have been 16).
Gell-Mann checks
May I suggest updating the post name to 4.2 million?
That last paragraph seems important. There’s a type of person who is new to AI discourse and doesn’t have an opinion yet, and who will bounce off whichever “side” appears most hostile to them. If they hold misguided ideas, that may well be the truth-seeking side that gently criticizes them. (Not saying that’s the case for the author of this post!)
It’s really hard to change the mind of someone who has already found their side in AI. But it’s not nearly as hard to keep them from joining one in the first place!