Selection pressure may often be even weaker than the standard figure of a 3% fitness advantage having only a 6% chance of becoming universal in the gene pool, or at least the picture is more complicated: a lot of changes don't offer a stable advantage over long periods.
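For reference, that 3%/6% pairing matches the classical population-genetics approximation that a new beneficial mutation with selection advantage s fixes with probability roughly 2s. A minimal sketch, assuming a large population and small s:

```python
# Haldane's approximation: a new beneficial mutation with selection
# advantage s reaches fixation with probability ~2s (large population,
# small s). The 3% advantage / 6% fixation pairing above follows directly.
def fixation_probability(s: float) -> float:
    return 2 * s  # approximation, valid only for small s

print(fixation_probability(0.03))  # -> 0.06, i.e. a 6% chance
```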
I think natural selection and human intelligence at this point can’t really be compared for strength. Each is doing things that the other can’t—afaik, we don’t know how to deliberately create organisms which can outcompete their wild conspecifics. (Or is it just that there’s no reason to try and/or we have too much sense to do the experiments?)
And we certainly don’t know how to deliberately design a creature which could thrive in the wild, though some animals which have been selectively bred for human purposes do well as ferals.
This point may be a nitpick since it doesn’t address how far human intelligence can go.
Another example of attribution error: Why would Gimli think that Galadriel is beautiful?
Eliezer made a very interesting claim—that current hardware is sufficient for AI. Details?
To be fair, the races of Middle-Earth weren’t created by evolution, so the criticism isn’t fully valid. Ilúvatar gave the dwarves spirits but set them to sleep so that they wouldn’t awaken before the elves. It’s not unreasonable to assume that as he did so, he also made them admire elven beauty.
Why do humans think dolphins are beautiful?
Is a human likely to think that one specific dolphin is so beautiful as to be almost worth fighting a duel about it being the most beautiful?
Well, it’s always possible that Gimli was a zoophile.
Yeah, I mean have you seen Dwarven women?
I’m a human and can easily imagine being attracted to Galadriel :) I can’t speak for dwarves.
Well, elves were intelligently designed to specifically be attractive to humans...
Most people who think Moravec and Kurzweil got this about right believe that supercomputer hardware could run something similar to a human brain today, if you had the dollars, were prepared for it to run a bit slow, and had the right software.
“Another example of attribution error: Why would Gimli think that Galadriel is beautiful?”
A waist:hip:thigh ratio between 0.6 and 0.8, and a highly symmetric face.
But she doesn’t even have a beard!
But he did have a preoccupation with her hair...
If I’m not mistaken, all those races were created, so they could reasonably have very similar standards of beauty, and the elves might have been created to match that.
[From Wikipedia:](http://en.wikipedia.org/wiki/Dwarf_%28Middle-earth%29)

In The Lord of the Rings Tolkien writes that they breed slowly, for no more than a third of them are female, and not all marry; also, female Dwarves look and sound (and dress, if journeying, which is rare) so alike to Dwarf-males that other folk cannot distinguish them, and thus others wrongly believe Dwarves grow out of stone. Tolkien names only one female, Dís. In The War of the Jewels Tolkien says both males and females have beards.
On the other hand, I suppose it’s possible that if humans find Elves that much more beautiful than humans, maybe Dwarves would be affected the same way, though it seems less likely for them.
Also, perhaps dwarves don’t have their beauty-sense linked to their mating selection. They appreciate elves as beautiful but something else as sexy.
Yeah, as JamesAndrix alludes to (warning: extreme geekery), the Dwarves were created by Aulë (one of the Valar, i.e. the gods) because he was impatient for the Firstborn Children of Ilúvatar (i.e., the Elves) to awaken. So you might call the Dwarves Aulë's attempt at creating the Elves; at least, he knew what the Elves would look like (from the Great Song), so it's pretty plausible that he impressed in the Dwarves an aesthetic sense which would rank Elves very highly.
Yes, this is definitively correct. Also, it's a world with magic rings and dragons, people.
There are different kinds of plausibility. There’s plausibility for fiction, and there’s plausibility for culture. Both pull in the same direction for LOTR to have Absolute Beauty, which by some odd coincidence, is a good match for what most of its readers think is beautiful.
What might break your suspension of disbelief? Given the usual BEM (bug-eyed monster) behavior, the Watcher at the Gate preferentially grabbing Galadriel, if she were available, would seem entirely reasonable, but what about Treebeard? Shelob?
Particularly when referring to the movie versions, you could consider this simply a storytelling device, similar to all the characters speaking English even in movies set in non-English speaking countries (or planets). It’s not that the Absolute Beauty of Middle-Earth is necessarily a good match for our beauty standards, it’s that it makes it easier for us to relate to the characters and experience what they’re feeling.
You write “Eliezer made a very interesting claim—that current hardware is sufficient for AI. Details?”
I don’t know what argument Eliezer would’ve been using to reach that conclusion, but it’s the kind of conclusion people typically reach if they do a Fermi estimate. E.g., take some bit of nervous tissue whose function seems to be pretty well understood, like the early visual preprocessing (edge detection, motion detection...) in the retina. Now estimate how much it would cost to build conventional silicon computer hardware performing the same operations; then scale the estimated cost of the brain in proportion to the ratio of volume of nervous tissue.
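As a minimal sketch of that kind of Fermi estimate: the specific numbers below are illustrative assumptions (roughly in the spirit of Moravec's published figures), not anything claimed in this thread.

```python
# Back-of-the-envelope scaling from retina to whole brain.
# Both inputs are assumptions for illustration, roughly Moravec-style.
retina_equivalent_mips = 1_000            # assumed: silicon needed to match retinal edge/motion preprocessing
brain_to_retina_tissue_ratio = 100_000    # assumed: whole brain vs. retinal neural tissue, by volume

brain_equivalent_mips = retina_equivalent_mips * brain_to_retina_tissue_ratio
print(f"Whole-brain equivalent: {brain_equivalent_mips:.1e} MIPS")  # ~1e8 MIPS, i.e. ~1e14 ops/s
```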
See http://boingboing.net/2009/02/10/hans-moravecs-slide.html for the conclusion of one popular version of this kind of analysis. I’m pretty sure that the analysis behind that slide is in at least one of Moravec’s books (where the slide, or something similar to it, appears as an illustration), but I don’t know offhand which book.
The analysis could be grossly wrong if the foundations are wrong, perhaps because key neurons are doing much more than we think. E.g., if some kind of neuron is storing a huge number of memory bits per neuron (which I doubt: admittedly there is no fundamental reason I know of that this couldn't be true, but there's also no evidence for it that I know of), or if neurons are doing quantum calculation (which seems exceedingly unlikely to me; and it is also unclear that quantum calculation can even help much with general intelligence, as opposed to helping with a few special classes of problems related to number theory). I don't know any particularly likely way for the foundations to be grossly wrong, though, so the conclusions seem pretty reasonable to me.
Note also that suitably specialized computer hardware tends to have something like an order of magnitude better price/performance than the general-purpose computer systems which appear on the graph. (E.g., it is much more cost-effective to render computer graphics using a specialized graphics board, rather than using software running on a general-purpose computer board.)
I find this line of argument pretty convincing, so I think it’s a pretty good bet that given the software, current technology could build human-comparable AI hardware in quantity 100 for less than a million dollars per AI; and that if the figure isn’t yet as low as one hundred thousand dollars per AI, it will be that low very soon.
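The rough arithmetic behind a figure like that might look as follows; the price/performance numbers are assumptions for illustration, not figures given in the thread:

```python
# Rough cost estimate for human-equivalent hardware, continuing the sketch above.
# All inputs are illustrative assumptions.
brain_equivalent_mips = 1e8           # from the retina-scaling estimate above
general_purpose_mips_per_dollar = 10  # assumed: ~1e4 MIPS per $1,000 of general-purpose hardware
specialization_advantage = 10         # assumed: ~10x better price/performance from specialized hardware

cost_general = brain_equivalent_mips / general_purpose_mips_per_dollar  # ~$10M
cost_specialized = cost_general / specialization_advantage              # ~$1M
print(f"General-purpose: ~${cost_general:,.0f}; specialized: ~${cost_specialized:,.0f}")
```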
Thanks. I’m not sure how much complexity is added by the dendrites making new connections.
The dwarves were intelligently designed by some god or other. That a dwarf can find an elf more beautiful than dwarves could be an unfortunate design flaw.
(Elves were also intelligently designed, but their creator was perhaps more intelligent.)
Edit: The creator-god of dwarves probably imbued them with some of his own sense of beauty.