software will eventually run into optimality limits, which will slow growth. That is right, but we can see that those limits are far off: far enough away to allow machine intelligences to zoom far past human ones in all domains worth mentioning.
How do we know this is far off? For some very useful processes we’re already close to optimal. For example, linear programming is already close to the theoretical optimum, as are the improved versions of the Euclidean algorithm; even the most efficient of those are not much more efficient than Euclid’s original, which is over 2000 years old. And again, if it turns out that the complexity hierarchy strongly does not collapse, then many algorithms we have today will turn out to be close to the best possible. So what makes you so certain that we can see that reaching optimality limits is far off?
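For concreteness, here is a minimal sketch of the Euclidean algorithm in Python (the modern remainder-based form; Euclid’s original used repeated subtraction). The point is just how little room there is left to improve such a short procedure.

```python
def gcd(a: int, b: int) -> int:
    """Euclidean algorithm: greatest common divisor of two non-negative integers.

    Each step replaces (a, b) with (b, a % b), so the arguments shrink quickly;
    the number of steps is O(log(min(a, b))).
    """
    while b:
        a, b = b, a % b
    return a


print(gcd(252, 105))  # 21
```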
I was comparing with the human brain. That is far from optimal, due to its one-size-fits-all design, ancestral nutrient availability issues (now solved), and other design constraints.
Machine intelligence algorithms are currently well behind human levels in many areas. They will eventually wind up far ahead, so there is currently a big gap between where they are and where they could be.
Comparing to the human brain is primarily connected to failure option 2, not option 3. We’ve had many years now to make computer systems and general algorithms that don’t rely on human architecture. We know that machine intelligence is behind humans in many areas, but we also know that computers are well ahead of humans in other areas (I’m pretty sure that no human on the planet can factor 100-digit integers in a few seconds unaided). FOOMing would likely require not just an AI that is much better than humans at many of the tasks that humans are good at, but also an AI that is very good at tasks, like factoring, that computers are already much better at than humans. So pointing out that the human brain is very suboptimal doesn’t make this a slam-dunk case, and I still don’t see how you can label concerns about 3 as silly.
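As a toy illustration of that machine-versus-human gap (an assumed setup, not something from the thread itself): the snippet below uses sympy’s factorint to factor a 19-digit semiprime built from two well-known 10-digit primes. Genuinely hard 100-digit semiprimes call for dedicated number-field-sieve tooling rather than a general-purpose routine like this, but even this small case is far beyond unaided human arithmetic.

```python
# Toy illustration (assumed example): computers factor integers far beyond
# unaided human ability. Truly hard 100-digit semiprimes need dedicated tools,
# such as a number field sieve implementation, not a general-purpose routine.
from sympy import factorint

p, q = 1000000007, 1000000009  # two well-known 10-digit primes
n = p * q                      # a 19-digit semiprime

print(factorint(n))  # {1000000007: 1, 1000000009: 1}, found almost immediately
```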
Cousin it’s point (gah, making the correct possessive there looks really annoying because it looks like one has typed “it’s” when one should have “its”) that the NP-hard problems that an AI would need to deal with may be limited to instances which have high regularity seems like a much better critique.
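As a concrete instance of that critique (an illustrative sketch only, not something from the thread): subset-sum is NP-hard in general, yet instances whose targets are small, one kind of regularity, fall to a simple pseudo-polynomial dynamic program.

```python
# Illustrative sketch: subset-sum is NP-hard in general, but instances with a
# small target T (a form of regularity) are easy via dynamic programming, O(n*T).
def subset_sum(values, target):
    """Return True if some subset of `values` sums exactly to `target`."""
    reachable = {0}  # sums achievable so far, capped at the target
    for v in values:
        reachable |= {s + v for s in reachable if s + v <= target}
    return target in reachable


print(subset_sum([3, 34, 4, 12, 5, 2], 9))  # True (4 + 5 = 9)
```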
Cousin it’s point (gah, making the correct possessive there looks really annoying because it looks like one has typed “it’s” when one should have “its”)
It feels a little better if I write cousin_it’s. Mind you, I feel ‘gah’ whenever I write ‘its’. It’s a broken hack in English grammar.
If linear programming is so close to the optimum, why did we see such massive speedups in it and integer programming over the past few decades? (Or are you saying those speedups brought us almost to the optimum?)
There are a variety of things going on here. One is that those speedups helped. Another major part is imprecision on my part. This is actually a good example of an interesting (and, from the perspective of what is discussed above, potentially dangerous) phenomenon. Most of the time when we discuss the efficiency of algorithms we are looking at big-O bounds, but those by nature have constants built into them. In the case of linear programming, it turned out that we could drastically improve the constants. This is related to what I discussed above: even if P != NP, one could still have effective, efficient ways of solving all instances of 3-SAT that have fewer than, say, 10^80 terms. That sort of situation could be about as bad from a FOOM perspective. For the immediate purpose being discussed above, the Euclidean algorithm example is probably better, since in that case we actually know that there’s not much room for improvement in the constants.
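To make the constants point concrete, here is a small sketch (timings will vary by machine): two routines that are both Θ(n) but differ substantially in their constant factors, much as an algorithm’s asymptotic class can stay fixed while its practical performance improves dramatically.

```python
# Both functions below are Theta(n); they differ only in constant factors.
# This mirrors the point above: big-O bounds hide constants, and the constants
# can sometimes be improved by a large factor without changing the class.
import timeit

def sum_slow(xs):
    # Explicit Python-level loop: same asymptotics, larger constant.
    total = 0
    for x in xs:
        total += x
    return total

def sum_fast(xs):
    # Built-in sum runs the loop in C: same asymptotics, smaller constant.
    return sum(xs)

data = list(range(1_000_000))
t_slow = timeit.timeit(lambda: sum_slow(data), number=10)
t_fast = timeit.timeit(lambda: sum_fast(data), number=10)
print(f"loop: {t_slow:.3f}s   builtin: {t_fast:.3f}s   ratio ~{t_slow / t_fast:.1f}x")
```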