I think a lot of this discussion becomes clearer if we taboo “intelligence” and replace it with something like “ability to search for and select a high-ranked option from a large pool of strategies”.
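To make the tabooed definition concrete, here is a minimal toy sketch of “intelligence as search-and-select”: generate a pool of candidate strategies, score each, and pick the highest-ranked. The `score` function is a hypothetical stand-in for whatever the agent values, not anything from the discussion above.

```python
import random

def score(strategy: float) -> float:
    # Hypothetical objective: strategies closer to 0.7 rank higher.
    return -abs(strategy - 0.7)

def search_and_select(pool_size: int, rng: random.Random) -> float:
    # "Intelligence" in this framing: search a pool of candidate
    # strategies and select the highest-ranked option.
    pool = [rng.random() for _ in range(pool_size)]
    return max(pool, key=score)

rng = random.Random(0)
# A larger pool makes a near-optimal pick more likely, which is the
# sense in which "more search" tracks "more intelligence" here.
small = search_and_select(10, rng)
large = search_and_select(10_000, rng)
```

The point of the toy model is only that ranking quality scales with the size of the searched pool, which is the property the rest of the discussion leans on.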
Agree that the rate-limiting step for a superhuman intelligence trying to affect the world will probably be stuff that does not scale very well with intelligence, like large-scale transport, construction, smelting widgets, etc. However, I’m not sure it would be so severe a limitation as to produce situations like what you describe, where a superhuman intelligence sits around for a month waiting for more niobium. The more strategies you are able to search over, the more likely it is that you’ll hit on a faster way of getting niobium.
Agree that being able to maneuver in human society and simulate/manipulate humans socially would probably be much more difficult for a non-human intelligence than some other tasks humans might think of as equally difficult, since humans have a bunch of special-purpose mechanisms for that kind of thing. That being said, I’m not convinced it is so hard as to be practically impossible for any non-human to do. The amount of search power it took evolution to find those abilities isn’t so staggering that it could never be matched.
I’m pretty surprised by the position that “intelligence is [not] incredibly useful for, well, anything”. This seems much more extreme than the position that “intelligence won’t solve literally everything”, and like it requires an alternative explanation of the success of homo sapiens.
Thank you for posting this! There’s a lot of stuff I’m not mentioning because confirming agreements all the time makes for a lot of comment clutter, but there’s plenty of stuff to chew on here. In particular, the historical rate of scientific progress seems like a real puzzle that requires some explanation.
I’m pretty surprised by the position that “intelligence is [not] incredibly useful for, well, anything”. This seems much more extreme than the position that “intelligence won’t solve literally everything”, and like it requires an alternative explanation of the success of homo sapiens.
I guess it depends on how many “intelligence-driven issues” are left to solve and how important they are. My intuition is that the answer is “not many”, but I have very low trust in that intuition. It might also just be that “useful” is fuzzy, and my “not super useful” might be your “very useful”; quantifying usefulness gets into the thorny issue of quantifying intuitions about progress.
The problem (even in humans) is rarely the ability to identify the right answer, or even the speed at which answers can be evaluated, but rather the ability to generate new possibilities. And that is a skill that is both hard and not well understood.
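The generation-versus-evaluation asymmetry can be sketched with a toy example: once a candidate answer is in hand, evaluating it is cheap, but without a good generator the only fallback is blind enumeration over a huge space. The puzzle here (guessing a 4-letter string) and the target word are purely illustrative assumptions.

```python
import itertools
import string

TARGET = "idea"  # the "right answer", unknown to the searcher

def evaluate(candidate: str) -> bool:
    # Evaluation: fast and trivial once a candidate exists.
    return candidate == TARGET

def brute_force_generate():
    # Generation without insight: enumerate all 26**4 = 456,976
    # four-letter strings in lexicographic order.
    for chars in itertools.product(string.ascii_lowercase, repeat=4):
        yield "".join(chars)

checked = 0
for cand in brute_force_generate():
    checked += 1
    if evaluate(cand):
        break
```

Even in this tiny space, uninformed generation burns through over a hundred thousand candidates before the evaluator ever says yes; the skill that cuts that number down, proposing plausible candidates in the first place, is the part we understand least.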