I have a big peeve about that. When I try to model a flying car, I see the tradeoff as (high fuel consumption, higher cost to build, higher skill to drive, noise, falling debris) vs. (less time to reach work).
As long as the value per hour of a worker's time is less than the cost per hour of the VTOL plus its externalities, there isn't ROI for most workers.
A smaller market means higher unit costs, and so we just have helicopters for billionaires while everyone else drives.
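To make that break-even condition concrete, here is a minimal sketch; all of the dollar figures and hours are made-up placeholders for illustration, not data:

```python
# Rough break-even check for a VTOL commute.
# All numbers are hypothetical placeholders, purely for illustration.

value_of_time_per_hour = 50.0    # what an hour of the worker's time is worth ($)
hours_saved_per_day = 1.0        # commute time saved by flying instead of driving

vtol_cost_per_hour = 400.0       # fuel/energy, maintenance, amortized purchase ($)
vtol_hours_per_day = 0.5         # hours actually spent flying
externalities_per_day = 20.0     # noise, risk to people below, etc. ($)

daily_benefit = value_of_time_per_hour * hours_saved_per_day
daily_cost = vtol_cost_per_hour * vtol_hours_per_day + externalities_per_day

print(f"benefit ${daily_benefit:.0f}/day vs cost ${daily_cost:.0f}/day")
print("pencils out" if daily_benefit > daily_cost else "does not pencil out")
```

With anything like these numbers the benefit side loses badly, which is the point: only people whose time is worth far more per hour than the aircraft costs per hour come out ahead.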
Did this idea come up in the 1970s, or only after the oil shocks were over in the '80s? Because flying cars jump out at me as a doomed idea that doesn't happen simply because it doesn't make money.
Even now: electric VTOLs fix the fuel cost, commodity parts make them cheaper, and automation makes them easier to fly, but you still have the negative externalities.
AI, by contrast, makes money immediately: GPT-4 looks like 100+ percent annual ROI (roughly $60 million to train, $2 billion in annual revenue after a year, assuming a 10 percent profit margin).
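Spelling out that arithmetic in a quick sketch (using the rough figures above; they are estimates, not audited numbers):

```python
# Back-of-envelope ROI from the figures quoted above.
# These are rough estimates, not audited numbers.

training_cost = 60e6       # ~$60 million to train
annual_revenue = 2e9       # ~$2 billion revenue after a year
profit_margin = 0.10       # assumed 10 percent profit margin

annual_profit = annual_revenue * profit_margin   # ~$200 million per year
annual_roi = annual_profit / training_cost       # ~3.3x the training cost per year

print(f"annual ROI on training cost: {annual_roi:.0%}")   # well over 100 percent
```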
I may have used too much shorthand here. I agree that flying cars are impractical for the reasons you suggest. I also agree that anybody who can justify it uses a helicopter, which is akin to a flying car.
According to Wikipedia, this is not a concept that first took off (hah!) in the 1970s—there have been working prototypes since at least the mid-1930s. The point of mentioning the idea is that it represents a cautionary tale about how hard it is to make predictions, especially about the future. When cars became widely used (certainly post-WWII), futurists started predicting what transportation tech would look like, and flying cars were one of the big topics. The fact that they’re impractical didn’t occur to many of the people making predictions.
I have a strong suspicion that there are flaws in current reasoning about the future, especially as it relates to the threat of AGI. Recall that there was a round of AI hype back in the 1980s that fizzled out when it became clear nothing much worked beyond the toy systems. I think there are good reasons to believe we’re in a very dangerous time, but I think there are also reasons to believe that we’ll figure it out before we all kill ourselves. Frankly, I’m more concerned about global warming, as that requires absolutely no new technology or policy changes to be able to kill us or at least put a real dent in global human happiness.
My point is simply that deciding that we’re 95% likely to die in the next five years is probably wrong, and if you base your entire set of life choices on that prediction, you are going to be surprised when it turns out differently.
Also, I’m not strongly invested in convincing others of this fact, partly because I don’t think I have any special lock on predicting the future. I’m just suggesting you look back farther than 1980 for examples of how people expected things to turn out vs. how they actually did and factor that into your calculations.
[Small edit in the first paragraph for clearer wording]
“The fact that they’re impractical didn’t occur to many of the people making predictions.”
Right, I am just trying to ask whether you personally thought they were far-fetched when you learned of them, or whether there were serious predictions that this was actually going to happen. Flying cars don’t pencil out.
AGI, financially, does pencil out.
AGI killing everyone with 95 percent probability within 5 years doesn’t, because it requires several physically unlikely assumptions.
The two assumptions are:
A. Being able to optimize the algorithms to use many orders of magnitude (OOMs) less compute than they do right now.
B. The “utility gain” of superintelligence being so high that it can do things credentialed humans don’t think are possible at all, like developing nanotechnology in a garage rather than needing a bunch of facilities that resemble IC fabs.
If you imagined you might find a way to make flying cars cost about what regular cars do, reach fuel economy similar to regular cars, and have the entire FAA drop dead...
Then yeah, flying cars sound plausible, but you’ve made physically unlikely assumptions.