The fact that they’re impractical didn’t occur to many of the people making predictions.
Right, I'm just trying to ask whether you personally thought they were far-fetched when you learned of them, or whether there were serious predictions that this was actually going to happen. Flying cars don't pencil in.
AGI financially does pencil in.
AGI killing everyone with 95 percent probability in 5 years doesn't, because it requires several physically unlikely assumptions.
The two assumptions are:
A. Being able to optimize an algorithm to use many orders of magnitude less compute than it needs right now.
B. The "utility gain" of superintelligence being so high that it can just do things credentialed humans don't think are possible at all — like developing nanotechnology in a garage rather than needing a bunch of facilities that resemble IC fabs.
If you imagined you might find a way to build flying cars as cheaply as regular cars, reach fuel economy similar to regular cars, and have the entire FAA drop dead...
Then yeah, flying cars sound plausible — but you made physically unlikely assumptions.