That depends on what you’re trying to accomplish. If you’re not using your 200MHz machine because the things you want to work on require at least a gig of processing power, buying the new one might be very productive indeed. This doesn’t mean you can’t find a good purpose for your existing one, but if your needs are beyond its abilities, it’s reasonable to pursue additional resources.
Yeah, I can see that applies much better to intelligence than to processing speed—one might think that a super-genius intelligence could achieve things that an ordinary human intelligence could not. Gladwell’s Outliers (embarrassing source) seems to refute this—his analysis suggested that IQ in excess of 130 did not contribute to additional success. Geoffrey Miller hypothesised that intelligence is actually an evolutionary signal of biological fitness—on this view, intellect is simply a sexual display. So my view is that a basic level of intelligence is useful, but excess intelligence is usually wasted.
I’m sure that’s true. The difference is that all that extra intelligence is tied up in a fallible meatsack; an AI, by definition, would not be. That was the flaw in my analogy—comparing apples to apples was not appropriate. It would have been more apt to compare a trowel to a backhoe. We can’t easily parallelize the excess intelligence across all those human brains. An AI (of the type I presume singularitarians predict) could know more information and process it more quickly than any human or group of humans, regardless of how intelligent those humans were. So, yes, I don’t doubt that there’s tons of wasted human intelligence, but I find that unrelated to the question of AI.
I’m working from the assumption that folks who want FAI expect it to calculate, discover, and reason things which humans alone wouldn’t be able to accomplish for hundreds or thousands of years, and which benefit humanity. If that’s not the case I’ll have to rethink this. :)
I agree FAI should certainly be able to outclass human scientists in the creation of scientific theories and new technologies. This in itself has great value (at the very least we could spend happy years trying to follow the proofs).
My issue is that I think it will be insanely difficult to produce an AI, and I do not believe it will produce a utopian “singularity” in which people would actually be happy. The same could have been said of the industrial revolution. Regardless, my original post is borked. I concede the point.