Also, I’ve discovered the CoqIDE theorem-proving assistant is about as addictive to me now as Legend of Zelda games used to be.
So, what you’re saying is that you’re addicted to Coq. :)
Another question I find interesting about animal consciousness is whether animals can recognize cartoons. Cartoons are abstractions/analogies of the real world. I’m curious whether this kind of abstract visual pattern recognition is possessed by animals, or whether it requires human-level abstract pattern recognition. There are also some computer vision papers about classifying cartoons and about using artificially generated data sets (since you mentioned it had to involve humans, animals, and robots).
I recently re-read Gwern’s Drug Heuristics, and this jumped out at me:
....In other words, from the starting point of those wormlike common ancestors in the environment of Earth, the resources of evolution independently produced complex learning, memory, and tool use both within and without the line of human ancestry....
...The obvious answer is that diminishing returns have kicked in for intelligence in primates and humans in particular. (Indeed, it’s apparently been argued that not only are humans not much smarter than primates, but there is little overall intelligence difference in vertebrates. Humans lose embarrassingly on even pure tests of statistical reasoning; we are outperformed on the Monty Hall problem by pigeons and to a lesser extent monkeys!) The last few millennia aside, humans have not done well and have apparently verged on extinction before...
...The human brain seems to be special only in being a scaled-up primate brain, with close to the metabolic limit in its number of neurons...
If I had more time, I’d try to look more into the intelligence tests that are given to animals. Assuming animals are smarter (in some sense of the word), then why are humans dominant? I think the answer to this might be something like “Humans evolutionarily stumbled upon language, then encoded this in our genes, and language allows us to reason about the world, which is something raw animal intelligence/pattern-matching cannot do.”
I think it’s an interesting hypothesis, but I don’t know where I’d start trying to evaluate it, or how likely I think it’s true.
I expect that in vitro selection for IQ is an easier problem to solve and will have greater impact on the population’s IQ.
I overcame depression a few years ago and have been meaning to write about how I did it, but honestly, the current me is so different from the old me, that I don’t even remember how being depressed felt.
I do remember some of the things that got me out of the depression:
Coming independently to the insight that I should “Avoid Misinterpreting my Emotions”. One day, I was sitting there thinking the same old depressed thoughts I’d usually thought. Something like “what’s the purpose of doing anything.” But I realized that when those words went through my head that day, I didn’t feel depressed thinking them. Then I realized that whatever words were going through my head were not the cause of my emotions. In general, we can unlink our emotions from our thoughts. By doing this, we can separately optimize for feeling better and for resolving whatever epistemic issue we think is the cause of our emotions.
Discovering LW helped in a lot of ways.
Doing lots of mind mapping / writing therapy, using GTD for managing stress/productivity, and to a lesser extent CBT.
EDIT: Also, getting out of high-school.
Pickup basketball games require some coordination once you get to the gym (getting a game going can be somewhat difficult, but is usually pretty easy), but you can just go whenever you want.
I’ve not finished reading either book, but Tanenbaum’s OS book seemed very dry to me compared to “Operating System Concepts” (which has just been delightful to read!).
See also: “The Perfect/Great is the enemy of the Good”
Thank you for writing this series, Jonah. I don’t have the time right now to think deeply about this topic, so I thought I’d add to the discussion by mentioning a few related interesting anecdotes.
I doubt what made the Polgar sisters great was innate intelligence.
Another interesting anecdote is von Neumann not (initially?) appreciating the importance of higher-level programming languages:
John von Neumann, when he first heard about FORTRAN in 1954, was unimpressed and asked “why would you want more than machine language?” One of von Neumann’s students at Princeton recalled that graduate students were being used to hand assemble programs into binary for their early machine. This student took time out to build an assembler, but when von Neumann found out about it he was very angry, saying that it was a waste of a valuable scientific computing instrument to use it to do clerical work. http://worrydream.com/#!/dbx
EDIT: Apparently, von Neumann’s attitude toward assembly was common among programmers of that era. http://worrydream.com/quotes/#richard-hamming-the-art-of-doing-science-and-engineering-2
I’m not qualified to judge the accuracy of these claims, but I was speaking with a physics PhD who said he thought that only ~50 people in theoretical physics were doing anything important.
In general, they’re called continued fractions.
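As a minimal sketch of how a continued-fraction expansion works (the function name and the restriction to rational inputs are my own choices, not from the thread), you repeatedly take the integer part and invert the remainder:

```python
from fractions import Fraction

def continued_fraction(x, max_terms=20):
    """Return the continued-fraction terms [a0; a1, a2, ...] of a rational x."""
    terms = []
    x = Fraction(x)
    for _ in range(max_terms):
        a = x.numerator // x.denominator  # integer part (floor)
        terms.append(a)
        remainder = x - a
        if remainder == 0:  # expansion terminates for rationals
            break
        x = 1 / remainder   # invert the fractional part and repeat
    return terms
```

For example, `continued_fraction(Fraction(415, 93))` gives `[4, 2, 6, 7]`, i.e. 415/93 = 4 + 1/(2 + 1/(6 + 1/7)).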
This is how I did it. My first instinct was to decompose the problem into the shapes {dots, circles, diamonds, square, +, X} and then plot which cells the shapes appear in. It’s pretty easy to see the rectangles after that. Though, I didn’t make the connection to XOR.
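I don’t have the original puzzle in front of me, but the XOR connection can be sketched with hypothetical occupancy grids: treat each panel as a boolean grid of “shape present in this cell,” and the answer panel is filled exactly where a shape appears in one source panel but not both:

```python
# Hypothetical 3x3 occupancy grids (True = a shape appears in that cell).
# These specific grids are made up for illustration.
panel_a = [[True, False, True],
           [False, True, False],
           [True, False, False]]
panel_b = [[True, True, False],
           [False, False, False],
           [True, True, False]]

# XOR rule: a cell is filled in the answer iff it is filled in exactly
# one of the two source panels (a != b is XOR for booleans).
answer = [[a != b for a, b in zip(row_a, row_b)]
          for row_a, row_b in zip(panel_a, panel_b)]
```

Decomposing by shape first, as described above, just means running this per-shape before overlaying the results.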
I recently found out that Feynman only had an IQ of 125.
This is very surprising to me. How should I/you update?
Perhaps the IQ test was administered poorly.
I think that high g/IQ is still really important for success in various fields. (Stephen Hsu points out that more physicists have IQs of 150 than 140, etc. In other words, marginal IQ matters even past 140.)
Stephen Hsu estimates that we’ll be able to have genetically enhanced children with IQs ~15 points higher in the next 10 years.
Bostrom and Carl Shulman’s paper on iterated embryo selection roughly agrees.
It seems almost too good to be true. The arguments/facts that lead us to believe that it will happen soon are:
We already do pre-screening for other traits. The reason we can’t do it for intelligence at the moment is that we don’t know which genes to select for.
We will get that data soon, as the cost of genetic sequencing falls faster than Moore’s law.
I still “alieve” that it’s too good to be true. Does anybody have any reason to doubt the claims made above?
Also, the ~15 point estimate is based on the assumption that we don’t do iterated embryo selection (which can’t be done in humans yet).
I think this is relatively common. I was talking about this with a friend a while back.
How many gigabytes of text is LW? I guess it’d probably be under a terabyte, and therefore fairly cheap for even a layperson to back up.
I think that I have the capacity to be genuinely happy on a day-to-day basis.
There are times when I’m generally on top of things. I’ve got my GTD system functioning, I’ve got an exercise/food/sleep routine that I like. I’ve “goal-factored” and feel like I know what I’m doing with my life. Etc. All that really remains for me to do in times like these is to DO things.
Though, I would say that I don’t feel like this too often. For the past few months, I’ve felt somewhat anxious/uncertain about my life plans, so I wasn’t as happy on a day-to-day basis. But I feel like, in the long run, I’ll be able to get into the “on top of things” state more consistently.
Actually, most modern AI applications don’t involve human input, so it’s not obvious that AGI will develop along Tool AI lines.
I’m not really sure what’s meant by this.
For example, in computer vision, you can input an image and get a classification as output. The input is supplied by a human. The computation doesn’t involve the human. The output is well defined. The same could be true of a tool AI that makes predictions.
Many leading AGI thinkers have their own pet idea about what AGI should do. Few to none endorse Tool AI. If it were obvious, all the leading AGI thinkers would endorse it.
Both Andrew Ng and Jeff Hawkins think that tool AI is the most likely approach.
I did use it on my phone more than anything when I did use it. I just don’t have much information I want to memorize at the moment.