I don’t disagree—unless that means you use your own prediction instead, which is the usual unconscious sequela to ‘distrust the experts’. What epistemic state do you end up in after doing this ignoring?
Zeroth approximation: even the experts don’t know, I am not an expert, so I know even less, thus I should not make any of my decisions based on singularity-related arguments.
First approximation: find a reference class of predictions that were supposed to come true within 50 years or so, unchanged for decades, and see when (some of them) are resolved. This does not require an AI expert, but rather a historian of sorts. I am not one, and the only obvious predictions in this class are the Rapture/Second Coming and other religious end-of-the-world scares. Another standard example is the proverbial flying car. I’m sure there ought to be more examples, some of them technological predictions that actually came true. Maybe someone here can suggest a few. Until then, I’m stuck with the zeroth approximation.
Putting smarter-than-human AI into the same class as the Rapture instead of the same class as, say, predictions for progress of space travel or energy or neuroscience, sounds to me suspiciously like reference class tennis. Your mind knows what it expects the answer to be, and picks a reference class accordingly. No doubt many of these experts did the same.
And so, once again, “distrust experts” ends up as “trust the invisible algorithm my brain just used or whatever argument I just made up, which of course isn’t going to go wrong the way those experts did”.
(The correct answer was to broaden confidence intervals in both/all directions.)
I do not believe that I was engaging in reference class tennis. I tried hard to put AI into the same class as “predictions for progress of space travel or energy or neuroscience”, but it just didn’t fit. Space travel predictions (of the low-earth-orbit variety) slowly converged in the 40s and 50s with the development of rocket propulsion, ICBMs and later satellites. I am not familiar with the history of abundant-energy predictions before and after the discovery of nuclear energy; maybe someone else is. Not sure what neuroscience predictions you are talking about; feel free to clarify.
You weren’t, given the way Eliezer defines the term and the assumptions specified in your comment. I happen to disagree with you but your comment does not qualify as reference class tennis. Especially since you ended up assuming that the reference class is insufficiently populated to even be used unless people suggest things to include.
That is making a decision based on singularity-related arguments. Denial isn’t a privileged case that you can invoke when convenient. An argument that arguments for or against X should be ignored is an argument relating to X.
That seems to be a suitable prior to adopt if you have reason to believe that the AI-related predictions are representatively sampled from the class of 50-year predictions. It’s almost certainly better than going with “0.5”. In the absence of any evidence whatsoever about either AI itself or those making the predictions, it would even be an appropriate prediction to use for decision making. Of course, if I do happen to have any additional information and ignore it, then I am just going to be assigning probabilities that are subjectively objectively false. It doesn’t matter how much I plead that I am just being careful.
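To make the reference-class idea concrete, here is a minimal Python sketch of turning such a class into a base-rate prior. The entries are hypothetical placeholders rather than researched data; only the shape of the calculation is the point.

```python
# Minimal sketch: a reference class of long-standing "within ~50 years"
# predictions, reduced to a base-rate prior. The entries below are
# HYPOTHETICAL placeholders for illustration, not researched data.

reference_class = [
    # (prediction, came true within the promised horizon?)
    ("controlled fusion power", False),     # placeholder judgment
    ("flying cars for the masses", False),  # placeholder judgment
    ("widespread videophones", True),       # placeholder judgment
]

def base_rate(predictions):
    """Fraction of reference-class predictions that actually came true."""
    hits = sum(1 for _, came_true in predictions if came_true)
    return hits / len(predictions)

prior = base_rate(reference_class)
print(f"base-rate prior for a long-standing 50-year prediction: {prior:.2f}")
```

Everything then turns on how the class is populated, which is exactly the dispute above.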
Energy-efficient controlled nuclear fusion
Space colonization
Widespread use of videophones (to make actual videocalls)
More failed examples (just googled them, I didn’t check): top-30-failed-technology-predictions
And successful: Ten 100-year predictions that came true
No videocalls? What about widespread Skyping?
AFAIK it’s mostly used for audio calls.
The interesting part is looking back and figuring out if they are in the same reference class as AGI.
Nuclear fusion is always 30-50 years in the future, so it seems very much AI-like in this respect.
I can’t find many dates for space colonization, but I’m under the impression that it is typically predicted either in the relatively near future (20-25 years) or in the very far future (centuries, millennia).
“Magical” nanotechnology (that is, something capable of making grey goo) is now predicted in 30-50 years, but I don’t know how stable this prediction has been in the past.
AGI, fusion, space and nanotech also share the reference class of cornucopian predictions.
Although there have been some pretty successful cornucopian predictions too: mass production, electricity, scientific agriculture (pesticides, modern crop breeding, artificial fertilizer), and audiovisual recording. By historical standards developed countries do have superabundant manufactured goods, food, music/movies/news/art, and household labor-saving devices.
Were those developments predicted decades in advance? I’m talking about relatively mainstream predictions, not predictions by some individual researcher who could have got it right by chance.
Yes, in Reader’s Digest and the like. All you needed to do was straightforward trend extrapolation during a period of exceptionally fast change in everyday standards of living.
Flying cars may be a bad example. They’ve been possible for some time, but as it turns out, nobody wants a street-legal airplane bad enough to pay for it.
What people usually mean when they talk about flying cars is something as small, safe, convenient and cheap as a ground car.
There will never be any such thing. The basic problem with the skycar idea—the “Model T” airplane for the masses—is that skycars have inherent and substantial safety hazards compared to ground transport. If a groundcar goes wrong, you only have its kinetic energy to worry about, and it has brakes to deal with that. It can still kill people, and it does, tens of thousands every year, but there are far more minor accidents that do lesser damage, and an unmeasurable number of incidents where someone has avoided trouble by safely bringing the car to a stop.
For a skycar in flight, there is no such thing as a fender-bender. It not only travels faster (it has to, for lift, or the fuel consumption for hovering goes through the roof, and there goes the cheapness), but also has the gravitational energy to get rid of when it goes wrong. From just 400 feet up, it will crash at 100 mph or more.
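The 100 mph figure checks out against the standard free-fall formula v = √(2gh); a quick sketch in Python, neglecting drag (which would lower it) and forward airspeed (which would raise it):

```python
import math

# Impact speed of an unpowered drop from 400 ft, neglecting air drag
# and any forward airspeed the craft was carrying.
g = 9.81          # gravitational acceleration, m/s^2
h = 400 * 0.3048  # 400 ft converted to metres (~122 m)

v = math.sqrt(2 * g * h)  # impact speed, m/s
print(f"{v:.0f} m/s = {v * 2.237:.0f} mph")  # ~49 m/s, about 109 mph
```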
When it crashes, it could crash on anything. Nobody is safe from skycars. When a groundcar crashes, the danger zone is confined to the immediate vicinity of the road.
Controlling an aircraft is also far more difficult than controlling a car, taking far more training, partly because the task is inherently more complicated, and partly because the risks of a mistake are so much greater.
Optimistically, I can’t see the Moller skycars or anything like them ever being more than a niche within general aviation.
You say that “There will never be any such thing”, but your reasons tell only why the problem is hard and much harder than one might think at first, not why it is impossible. Surely the kind of tech needed for self-driving cars, perhaps an order of magnitude more complicated, would make it possible to have safe, convenient, cheap flying cars or their functional equivalent.
At worst, the reasons you state would make it AI-complete, and even that seems unreasonably pessimistic.
I’ll cop to “never” being an exaggeration.
The safety issue is a showstopper right now, and will be until computer control reaches the point where cars and aircraft are routinely driven by computer, and air traffic control is also done by computer. Not before mid-century for this.
Then you have the problem of millions (hundreds of millions?) of vehicles in the air travelling on independent journeys. That’s a problem that completely dwarfs present-day air traffic control. More computers needed.
They are also going to be using phenomenal amounts of fuel. Leaving aside sci-fi dreams of convenient new physics, those Moller craft have to be putting at least 100 kW into just hovering. (Back-of-envelope calculation based on 1 ton weight and 25 m/s downdraft velocity, ignoring gasoline-to-whirling-fan losses.) Where’s that coming from? Cold fusion?
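One way to reproduce that figure, assuming the 25 m/s is the velocity of the downdraft jet: treat the fans as accelerating still air to speed w, so thrust is T = ṁw and the ideal power is P = ½ṁw² = Tw/2, with all real-world losses ignored:

```python
# Back-of-envelope hover power: a 1-tonne craft held up by a 25 m/s
# downdraft. Treating the fans as a jet that accelerates still air to
# speed w: thrust T = mdot * w, power P = 0.5 * mdot * w**2 = T * w / 2.
# Engine, transmission and duct losses are all ignored, so this is a
# lower bound on the real power requirement.
g = 9.81    # m/s^2
m = 1000.0  # kg, craft mass (1 tonne)
w = 25.0    # m/s, downdraft velocity

thrust = m * g          # N, thrust needed to hover (~9.8 kN)
power = thrust * w / 2  # W, ideal jet power

print(f"ideal hover power: {power / 1e3:.0f} kW")  # ~123 kW
```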
“Never” turns into “not this century”, by my estimate.
Of course, if civilisation falls instead, “never” really does mean never, at least, never by humans.
Or new technology relying on existing physics? Then yes, conventional jets and turbines are not going to cut it.
Failure of imagination, based on postulating the currently existing means of propulsion (rocket or jet engines). Here are some zero-energy (but progressively harder) alternatives: buoyant force, magnetic hovering, gravitational repulsion. Or consult your favorite hard sci-fi. Though I agree, finding an alternative to jet/prop/rocket propulsion is the main issue.
If it doesn’t have to fall when there is an engine malfunction, it does not have to crash.
This is actually an easy problem. Most current planes use fly-by-wire, and newer fighter planes are computer-assisted already, since they are otherwise unstable. Even now it is possible to limit the user input to “car, get me there”. Learning to fly planes or drive cars will soon enough be limited to niche occupations, like racing horses.
Incidentally, computer control will also take care of the pilot/driver errors, making fender-benders and mid-air collisions a thing of the past.
Absolutely, this is a dead end.
I also can dream.
Your certainty seems bizarre. There seems to be an assumption that the basic problem (“being up in the air is kinda dangerous”) is unsolvable as a technical problem. The engineering capability and experience behind the “Model T” was far inferior to the engineering capability and investment we are capable of now and in the near future. Moreover, one of the greatest risks involved with the Model T was that it was driven by humans. Flying cars need not be limited to human control.
There is no particularly good reason to assume that flying cars couldn’t be made as safe as the cars we drive on the ground today. Whether it happens is a question of economics, engineering and legislative pressure.
Are you sure they’re possible? I’m not an engineer, but I’d have guessed there are still some problems to solve, like finding a means of propulsion that doesn’t require huge landing pads, and controls that the average car driver could learn and master in a reasonably short period of time.
VTOLs are possible. Many UAVs are VTOL aircraft. Make a bigger one that can carry a person and a few grocery bags instead of a sensor battery, add some wheels for “Ground Mode”, and you’ve essentially got a flying car. An extremely impractical, high-maintenance, high-cost, airspace-constricted, inefficient, power-hungry flying car that almost no one will want to buy, but a flying car nonetheless.
I’m not an expert either, but it seems to me like the difference between “flying car” and “helicopter with wheels” is mostly a question of distance in empirical thingspace-of-stuff-we-could-build, which is a design and fitness-for-purpose issue.
How broad is your classification of a “car”? If it’s fairly broad, then helicopters can reasonably be said to be flying cars. They require landing pads, but not necessarily “huge” ones depending on the type of helicopter, and one can earn a helicopter license with 40 hours of practical training.
Most people’s models of “flying cars,” for whatever reason, seem to entail no visible method of attaining lift, though. By that criterion we can still be said to have flying cars, but maybe only ones that are pretty lousy at flying.
The Moller Skycar certainly exists, although it appears to be still very much a prototype.
http://en.wikipedia.org/wiki/Moller_M400_Skycar
Electric power from nuclear fusion springs to mind.