Well, you can point to several things current LLMs can’t do. Not just logical reasoning, but also long-term action, and remembering what you said to them yesterday. But ten years ago, you could make a much longer list of things AI couldn’t do. And most items on that list have fallen before the advance of technology. On what basis should we assume that the remaining list will last very long? There are lots of people working on all the things currently on the list, as well as an ever-growing mountain of computing power that can be applied to these problems. If we expect history to continue as it has, all those problems will fall in the next decade.
Of course, it’s possible that AI will suddenly stop advancing; that does happen to fields of engineering. For example, aeronautical engineering stopped advancing very suddenly in 1972, and even regressed somewhat. That was a big surprise to everyone concerned. But it’s not a common phenomenon.
If the increase in speed had continued at the rate it did from 1820 to 1961, we’d have exceeded the speed of light by 1982. This extrapolation comes from a 1961 article by G. Harry Stine in Analog. It was a pretty sloppy analysis by modern standards, but it gives an idea of how people were thinking at the time.
These all happened in 1972 or close to it:
—Setting the air speed record, which stands to this day.
—End of flights to the Moon.
—Cancellation of the American SST project.
—Cancellation of the NERVA nuclear rocket program.
—The Boeing 747 enters service as the largest passenger plane until 2003.
—Concorde enters service, turns out to be a bad idea.
In the ’80s, I found an old National Geographic from 1971 or 1972 about the “future of flight”. Essentially none of their predictions had come true. That’s why I think it was a surprise.
TBF, was Concorde inherently “a bad idea”? Technologies have a theoretical limit and a practical one. There are deep reasons why we simply couldn’t get even near the speed of light by 1982 no matter how much money we poured into it, but Concorde seems more a case of “it can be done, but it’s too expensive to keep safe enough, and most people won’t pay such exorbitant ticket prices just to shave a few hours off their transatlantic trip”. I don’t think we can imagine such things happening with AGI, partly because its economic returns are obvious and far greater, partly because many who are racing towards it have more than just economic incentives to do so; some have an almost religious fervour. Pyramids get built even when they’re not efficient.
Funny thing: your message seemed to be phrased as disagreement, so I was all set to post a devastating reply. But when I tried to find points of actual disagreement, I couldn’t. So I will write a reply of violent agreement instead.
Your points about the dissimilarity between aerospace in 1972 and AI in 2024 are good ones. Note that my original message was about how close current technology is to AGI. The part about aerospace was there only because my rationalist virtue required me to point out a case where an analogous argument would have failed. I don’t think a similar stall is likely for AI.
Was Concorde “inherently a bad idea”? No, but “inherently” is doing the work here. It lost money and didn’t lead anywhere, which are the criteria on which such an engineering project must be judged. It didn’t matter how glorious, beautiful, or innovative it was. It’s a pyramid that was built even though it wasn’t efficient.
The impossibility of traveling faster than the speed of light was a lot less obvious in 1961.
> The impossibility of traveling faster than the speed of light was a lot less obvious in 1961.
I would argue that’s questionable: they knew relativity very well in 1961, and all the physicists would have been able to roll out the obvious theoretical objections. But obviously the difficulties of even approaching the speed of light (via e.g. ramscoop engines, solar sails, nuclear propulsion, etc.) are another story.
> Was Concorde “inherently a bad idea”? No, but “inherently” is doing the work here. It lost money and didn’t lead anywhere, which are the criteria on which such an engineering project must be judged. It didn’t matter how glorious, beautiful or innovative it was. It’s a pyramid that was built even though it wasn’t efficient.
I guess my point is that there are objective limits and then there are cultural ones. We do most things only for the sake of making money, but as human cultures go we are perhaps more the exception than the rule. And in the end individuals often do the opposite: they make money in order to do things, things they like that play to their personal values but don’t necessarily turn a profit all the time. A different culture could have concluded that Concorde was a success because it was awesome, and that we should do more of that. In such a culture, in fact, Concorde might even have been a financial success, because people would have been willing to pay more money to witness it first-hand. Since the argument here involves the inherent limits of technology and/or science, I’d say we should be careful to separate out cultural effects. Self-sustaining Mars colonies, for example, are probably a pipe dream with current technology. But the only reason we don’t have a Moon base yet is that we don’t give enough of a shit. If we cared to build one, we probably could have done it by now.