Humans can drive cars
There’s been a lot of fuss lately about Google’s gadgets. Computers can drive cars—pretty amazing, eh? I guess. But what amazed me as a child was that people can drive cars. I’d sit in the back seat while an adult controlled a machine taking us at insane speeds through a cluttered, seemingly quite unsafe environment. I distinctly remember thinking that something about this just doesn’t add up.
It looked to me like there was just no adequate mechanism to keep the car on the road. At the speeds cars travel, a tiny deviation from the correct course would take us flying off the road in just a couple of seconds. Yet the adults seemed pretty nonchalant about it—the adult in the driver’s seat could have relaxed conversations with other people in the car. But I knew that people were pretty clumsy. I was an ungainly kid but I knew even the adults would bump into stuff, drop things and generally fumble from time to time. Why didn’t that seem to happen in the car? I felt I was missing something. Maybe there were magnets in the road?
Now that I am a driving adult I could more or less explain this to a 12-year-old me:
1. Yes, the course needs to be controlled very exactly and you need to make constant tiny course corrections or you’re off to a serious accident in no time.
2. Fortunately, the steering wheel is a really good instrument for making small course corrections. The design is somewhat clumsiness-resistant.
3. Nevertheless, you really are just one misstep away from death and you need to focus intently. You can’t take your eyes off the road for even one second. Under good circumstances, you can have light conversations while driving but a big part of your mind is still tied up by the task.
4. People can drive cars—but only just barely. You can’t do it safely even while only mildly inebriated. That’s not just an arbitrary law—the hit to your reflexes substantially increases the risks. You can do pretty much all other normal tasks after a couple of drinks, but not this.
So my 12-year-old self was not completely mistaken but still ultimately wrong. There are no magnets in the road. The explanation for why driving works out is mostly that people are just somewhat more capable than I’d thought. In my sunnier moments I hope that I’m making similar errors when thinking about artificial intelligence. Maybe creating a safe AGI isn’t as impossible as it looks to me. Maybe it isn’t beyond human capabilities. Maybe.
Edit: I intended no real analogy between AGI design and driving or car design—just the general observation that people are sometimes more competent than I expect. I find it interesting that multiple commenters note that they have also been puzzled by the relative safety of traffic. I’m not sure what lesson to draw.
It seems interesting that people are just barely competent enough to drive. Maybe it’s just that they drive as fast as they can. If we were more competent, we’d drive fast enough that we’d crash if we were drunk. If we were less competent, we’d drive slow enough that we wouldn’t crash unless we were drunk.
Here’s an interesting contrast: When I first moved from a small town to a big city I was fascinated by the fact that people cannot perform the simple task of walking down the street. Their attention is constantly being drawn to other things, they apparently have no awareness of or concern for other people, etc. They’re constantly stopping dead in front of you, even though they’re certainly aware they’re on a busy street. They talk on their phones, text, play games, they even walk along reading novels. If they meet someone they know, they’ll stop and have a conversation without moving out of the way. When somebody approaches a bus stop, they’ll simply stop dead and won’t move to the side, even if they’re blocking the only way through. To be sure, people can navigate around other people, but as soon as they do something else (stop, answer their phone, meet someone they know, etc), the fact that they’re on a busy street apparently disappears from their consciousness. There’s a complete absence of vigilance (and courtesy).
If people drove cars the same way they walk on a busy street there’d be dozens of accidents per mile. I guess the lesson is that human beings are capable of being careful when they need to be but most of the time they don’t need to be.
Most things are easier than they look, but writing software that’s free of bugs seems to be an exception: people are terrible at it. So I don’t share your hope.
That’s a good point. Almost anyone can be a decent driver after 100 hours of practice. Almost everyone is a terrible programmer after 100 hours of practice. And I suspect that the number of bugs per 1000 LOC does not tend to zero even after decades of programming; there is still some residual level left, dependent on the person, problem, programming language and framework. I wonder what the difference is. Is it because, unlike driving, programming cannot be made automatic (i.e. pushed wholly into System 1)? If so, why not?
EDIT: commented on this separately.
I think it’s partly that code is strictly interpreted and thus small errors can have arbitrarily severe effects, whereas things like driving are analog and small deviations from optimal steering etc don’t tend to matter so much. Still, cars crash too and many people get into accidents even if they aren’t always fatal.
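A hypothetical C fragment to illustrate (the scenario and names are invented): one dropped character turns a comparison into an assignment, the code still compiles, and the meaning changes completely. A steering input off by a comparably tiny amount would just get corrected a moment later.

```c
#include <stdio.h>

/* Intent: warn when the sensor reads zero. The one-character typo
   (= instead of ==) assigns zero instead of comparing, so the warning
   never fires and the reading is silently destroyed. */
void check(int reading) {
    if (reading = 0) {                 /* BUG: should be (reading == 0) */
        printf("sensor dead\n");
    }
    printf("reading: %d\n", reading);  /* now always prints 0 */
}

int main(void) {
    check(42);
    return 0;
}
```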
A person’s number of driving errors doesn’t tend to zero either. People are rarely rigorously assessed for their rate of errors in driving, but that doesn’t mean their error rates don’t exist.
Writing software that is bug-free is certainly possible, but extremely expensive, and businesses won’t pay the price (it simply isn’t worth it). So, generally, no one except the people building military systems / NASA has the incentive to build bug-free software.
Building an AI has much greater potential risks, and for that reason I don’t believe it is ethical to keep this sort of research under wraps. I don’t have an alternative solution, but large scale peer review should be required.
Reality disagrees: the defect rate, in bugs per 1000 lines of code, is 10-15 for high-quality business software, 1-5 for scientific computing, and 0.2-3 for military applications. See, for example, http://home.comcast.net/~gregorympope/published_papers/Measuring_Good2.doc. This is by no means “bug-free”.
That was the defect rate of software that meets current requirements and budgets.
There has been mathematically proven software, and the space shuttle software came close, though it was not formally proven as such.
Well… If you know what you wish to prove, then it’s possible that there exists a chain of logic that begins with a computer program and ends with that property as a necessity. But that’s not really exciting: if you could code in the language of proof theory, you would already have the program. The mathematical proof of a real program is just a translation of the proof into machine code, and then showing that the translation goes both ways.
You can potentially prove a space shuttle program will never crash, but you can’t prove the space shuttle won’t crash. Source code is just source code, and bugs aren’t always known to be such without human reflection and real world testing. The translation from intent to code is what was broken in the first place, you actually have to keep applying more intent in order to fix it.
The problem with AGI is that the smartest people in the world write reams trying to say what we even wish to prove, and we’re still sort of unsure. Most utopias are dystopias, and it’s hard to prove a eutopia, because eutopias are scary.
That link’s 404 for me.
Corrected link (without a period at the end).
fixed.
It seems to me that we’re less interested in perfect programs and more interested in programs that are good enough, and there are plenty of those, e.g. some cryptographic software, the Mars rover and the Apollo systems, life-critical systems generally, telecom stuff. Of course, there are many notable failures, too.
If this issue is any indication, there’s risk compensation at play, and the military seems to use far more brittle designs (of a sort that would not work at all for commercial software: with 50x the bug rate and that level of fragility, the software would crash all the time).
For applications of fewer than 300 LOC, “Fewer than one defect expected” certainly meets a loose definition of “Bug-free”.
As an example of a program with no defects, I put forth the Overpower Scram Logic for a particular pressurized water reactor. It runs on specialized hardware which can be diagrammed fully in the documentation, but is normally shown as a ‘black box’ with several digital and one analog input, and one digital output.
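For the curious, here is roughly the kind of logic I imagine such a box computing. This is a made-up sketch, not the actual plant logic; the 118% setpoint and the number of inputs are invented for illustration.

```c
#include <stdbool.h>
#include <stdio.h>

/* Hypothetical sketch: trip (scram) when the analog power signal
   exceeds a setpoint, or when any digital trip input asserts. */
bool overpower_scram(double power_percent, const bool digital_in[], int n) {
    if (power_percent > 118.0)
        return true;                 /* analog overpower trip */
    for (int i = 0; i < n; ++i)
        if (digital_in[i])
            return true;             /* any digital input trips the output */
    return false;                    /* no trip */
}

int main(void) {
    const bool quiet[3] = { false, false, false };
    printf("%d\n", overpower_scram(102.0, quiet, 3));  /* prints 0: no trip */
    printf("%d\n", overpower_scram(121.5, quiet, 3));  /* prints 1: trip */
    return 0;
}
```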
IMHO there is an obvious parallel here — “average person”-proficiency driving isn’t that difficult at all, and neither is writing average buggy code. The truth is both of them are utterly crap at what they do and both are easily forgiven for errors.
I think you’re glossing over important distinctions there. The average person can drive well enough to avoid major accidents (at least, for the overwhelming majority of the time). The average person can’t code at all, at least not well enough to produce something that works.
The biggest difference I see is that driving overloads (or extends) fairly deeply embedded/evolved neural pathways: motion, just with a different set of actuators.
Intelligence is as yet only lightly embedded, and the substrate is so different between software and wetware.
Car crashes are the leading cause of death in some demographics (15-25 year olds). They account for almost half of all accidental deaths, and almost 2% of all deaths. And many more people are badly wounded, often crippled for life.
So I would hardly say that humans can drive cars. At least, they can’t do it safely.
I went through about the opposite change; when I was 12 years old, driving looked normal to me. Something everyone does. And then I tried to do it, and realized how a split second of inattention can wreck your life or someone else’s. And I looked at the statistics: car crashes are a very significant cause of death and crippling injuries; people are more likely to die in a car accident on the way to the airport than in the plane itself. And I realized humans can’t drive safely—it’s just something we pretend, because car crashes are such a daily occurrence that they don’t make the news.
Humans can walk. Pretty crazy huh? Constantly balancing tens of kilograms on top of constantly shifting upside-down pendulums, one (or okay, a few) misstep(s) from death (depending on where you are walking). We can even learn to do so on stilts, or ride bikes, or pilot aircraft, or ride a surfboard.
We are very complex adaptive learning systems that can internalize and automate a huge range of activities that would result in disaster if the feedbacks fall out of range. It’s a generalizable ability.
Humans had hundreds of thousands of years of evolution to get walking right; I don’t consider it as exciting as operating machinery like cars, bikes and airplanes.
But do we come with pre-programmed methods for moving around—or do we just pick it up as we go along? I noticed that my two children used very different methods for moving around as babies. My daughter sat on her butt and pushed herself around. My son somehow jumped around on his knees. Both methods were surprisingly effective. There’s supposedly a “crawling stage” in development but neither of my kids did any crawling to speak of. I guess this isn’t as straightforwardly innate as one might think. Maybe Esther Thelen had it right.
Interesting point. I read at some point that primates, humans included, could not swim “instinctively” but had to learn. Or, if they didn’t learn, would drown if they couldn’t walk out of the water. In contrast, most other animals, I read, are instinctive swimmers.
Then I looked at my dog in a pool. What he does is try to run while he is in the pool. The effect is he gets enough lift to keep his efficient-for-swimming head on a neck above body above water, and he gets forward thrust. My insight/guess was that it wasn’t so much that someone or something put something in the dog to make him swim “instinctively,” but that dog-ancestors whose natural gaits did not translate to swimming when tried in water survived sufficiently less often that the marketplace which is evolution abandoned that product line. I wondered about primates: were we just better at not falling in water so often that having a gait that worked to get us out just wasn’t as important? Was our adaptability such that primates that grew up around water learned enough swimming to get by, and primates who weren’t around water had insufficient value in swimming? Were the costs of finding a “natural” gait that worked in water for the primate just too much higher than finding gaits that worked for our four-legged friends?
So I think we are pre-programmed to walk, to talk, to run, not by some neurologic programming, but by the shapes and attachments of our muscles and bones. There are just so many possible solutions that yield useful motions, with walking and running in the standard way really quite good uses of the facilities available. But we often see, more with talking than walking, someone who learns things slightly non-optimally and, if caught early, is untrained and then retrained.
I think a lot of our “instinct” for walking and probably other physical things we do is stored in our muscles and bones, and almost invariably, our adaptive neural systems find them in there.
Apparently infants know how to kind of swim for the first few months.
I think that question is deeper than it seems at first glance.
Given that we can learn things like operating cars just as well as walking, it doesn’t seem to be the case that evolution focused on giving us pre-programmed methods for moving around.
If we don’t come with pre-programmed methods for moving around, the question is why didn’t evolution give us those methods? Maybe not giving a species pre-programmed methods for dealing with some common problems gave us creativity. It might be the seed of our human intelligence.
Cars, bikes and airplanes aren’t alien technology that we mysteriously happen to be able to operate. We designed them specifically to fit our physical and mental skills.
I find something like the change in time perception that we go through when driving on a highway pretty remarkable. When going from highway driving to normal road driving things seem very slow.
Given that there are no cases in prehistory where a human traveled 100 km/h it’s pretty interesting that we have useful adaptive behavior that triggers in those situations.
Occasionally I’ll take a video of myself driving on my phone or Google Glass, and whenever I look at it I feel like “holy shit, that’s fast” even though it’s usually city driving at 25-30 miles an hour.
The evolution of our bodies (from the shape of our bones to the particular sensitivities of our inner ears), such that the equilibrium most-efficient form of locomotion we perfect by age two is walking on two legs, is different from the process by which an individual with that body actually learns to do it. Yes, we are wired such that learning the repetitive limb-swinging walking type of motion is easy, with some species-specific tweaks in tendencies, but all vertebrates are wired like that.
Yes, horses learn to walk in minutes but they have four legs, that’s quite a bit easier to figure out. They also aren’t born at our massive level of neurological incompleteness.
I’ve always felt similarly, and found myself thinking about how plastic we are with our own body sense—we seem to be very capable of remapping our motor functions into completely new devices, cars, video game characters, etc, and gaining a sense of body with them. This seems to be supported by how tied driving is to which part of your body performs the control—for me, going from a hand clutch on a motorcycle to a foot clutch completely failed to translate the skill. I have no idea if this is neurologically correct.
On the Cruelty of Really Teaching Computer Science by prof. dr. Edsger W. Dijkstra:
http://www.cs.utexas.edu/~EWD/transcriptions/EWD10xx/EWD1036.html
I think the issue is that driving is a process of tiny course corrections, where if you’re slightly off course you don’t die. But, programs are fragile. One bit wrong and you die.
I just learned to ride a bicycle two months ago, and some part of my brain still refuses to believe it’s physically possible at all.
Here is my understanding of the issue.
Traffic works well because driving is one of the activities that is relatively easy to internalize and perform as a matter of habit, like walking or riding a bicycle. System 1 thinking is usually fast, predictable and reliable. You don’t need to “focus intently”. It is true that you cannot take your eyes off the road (or, more accurately, shift your passive attention (see http://www.cdl.org/resource-library/articles/attention2.php) away) for longer than a brief instant, otherwise the subconscious feedback loop which includes visual and auditory inputs, your brain, your extremities and the vehicle breaks down, not because you have to consciously “focus intently”. The “magnets in the road” are built into the area of your brain that controls your reflexes. That’s why learning safe driving habits is so important.
Not everyone is equally capable of internalizing the driving process, or has put in enough time to do so. These people white-knuckle it, constantly engaging their full, slow and unsuitable System 2 in the loop, and consequently they find the normal driving activity exhausting rather than relaxing. It is often easy to notice these drivers by their slow reaction to traffic lights, overly careful driving style and general overcompensation for road hazards. After all, System 2 is taxing, slow and unreliable. Ironically, these people may be safer drivers overall, since they never let their attention wander.
This passive attention escalates to active when something unusual happens, like when you hear emergency vehicles and have to make decisions which are not internalized. Or when some other car wanders into your lane, or if the traffic stops unexpectedly. Anything that either breaks the passive attention loop or causes this escalation from passive to active attention to slip is dangerous: texting, arguing, listening to a child fussing in the back, concentrating on a phone call.
In contrast, there are activities which do not naturally lend themselves to internalizing. Programming is one of them. Even after doing it for decades, people are still as consciously engaged in it as they were in the beginning. Thus your hope for a safe AGI “by analogy with safe driving” seems misplaced to me.
My experience disagrees with this. After about 20 years of experience with C/C++, I have internalized many aspects of programming in this language, which allows me to write complex software orders of magnitude faster than 20 years ago, and orders of magnitude more safely.
I notice how much I have internalized when I switch to a different language that isn’t “my own”, and find myself immediately bogged down in all sorts of details, not knowing exactly how they work or what the best way to approach them is.
In my experience, programming skill, especially in a particular language, does get internalized, much like dancing.
Maybe “still as consciously engaged in it as [...] in the beginning” was too strong, but compare programming to driving. If you’re like most drivers you can do basically anything while driving (in normal traffic and weather conditions), as long as it doesn’t require you to take your limbs from the car’s controls or your eyes from the road.
The programming equivalent of this would be that you can write a program (let’s say a binary tree implementation in C, for the sake of argument; see the sketch below) while having a conversation, and
- this would not make you take noticeably longer to write the program,
- nor would it mess up your conversational ability.
A test of messing up your conversational ability would be whether the friend you’re talking to could tell over the phone that you’re doing something else at the same time.
(I’m guessing you can’t do this but I’d be interested to hear otherwise.)
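For concreteness, here is roughly the kind of code I mean (a minimal, insert-only sketch; error handling omitted for brevity):

```c
#include <stdlib.h>

/* Minimal binary search tree insert: the kind of "write something
   new" task I have in mind. */
struct node {
    int key;
    struct node *left, *right;
};

struct node *insert(struct node *root, int key) {
    if (root == NULL) {                      /* empty spot: place the key here */
        struct node *n = malloc(sizeof *n);
        n->key = key;
        n->left = n->right = NULL;
        return n;
    }
    if (key < root->key)
        root->left = insert(root->left, key);
    else if (key > root->key)
        root->right = insert(root->right, key);
    return root;                             /* duplicates are ignored */
}
```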
Sorry, I don’t check this place often.
To some extent, I think what you described does happen for snippets of code that are largely the same, and which one might write all the time. For example, I can write a “Hello world” program while maintaining conversation. However, as soon as you ask me to write something new, then I do have to start thinking about how to put pieces together, and can’t continue conversation.
But this also happens with driving. Speaking for myself at least, I can only maintain conversation while driving in a way that does not require me to make any decisions: (1) a route I’ve driven many times before, (2) a straight piece of road that might be unfamiliar, but does not require making any decisions.
If you put me in a new city where I don’t know where the streets are and how the traffic works, my conversational ability is much decreased (unless sitting at a red light, and perhaps even then, if I’m wondering where to turn next).
Programming tends to be like driving in new cities all the time. The difference we observe is really that we do most of our driving as a chore (same route, similar conditions each time) whereas we usually try to avoid that in programming (re-writing code we’ve already written several times, in similar conditions each time).
When you get good at driving, attention is freed for other things. When you get good at programming, the freed resources are used to speed up the programming. This speedup would not, I think, roll back without conscious effort, so no other tasks can be attempted. The same should go for other skills, depending on whether their effectiveness speeds up with the attention paid. If your desk job consisted of copying printed numbers into spreadsheets, you could probably divert attention once your input capacity is reached. If you were always juggling as many balls as possible, you could never afford to be distracted.
There’s some of that in me. I probably am an overcautious driver.
Fair enough. Your regularly scheduled doom and gloom will resume shortly.
Or, maybe creating any AGI (which isn’t an uploaded human mind) is beyond human capabilities.
Only for a suitably restrictive definition of AGI. After all, creating a human mind from scratch is certainly within human capabilities.
What do you mean by “from scratch”? Without “cheating” (uploading)? If so, why do you think it is within human capabilities?
When a man and a woman love each other very much...
This is a degenerate case of uploading :-) Using human DNA is not considered “from scratch”
Remind me never to ask you to make an apple pie from scratch.
Would an AI using its own source code to write a better AI also not qualify?
Qualify for what? I’m saying that we don’t know whether any of the following are within human ability:
1. Creating a human-level AI without “stealing” the design of H. sapiens.
2. Creating a far-superhuman AI by any method. (The process of “creating” is allowed to involve writing an AI which writes a better AI, and so on.)
Is there some kind of timescale assumption you are making? Atomic vapor has proven that it can form human-level intelligence, and human intelligence has shown that it can create smarter human intelligence. Creating an intelligence that runs on radically different hardware on a short timeframe is the only possibility that hasn’t already been proven.
Yes, I am making a timescale assumption. The thing is, the required timescale might be huge, much bigger than the age of the universe as far as I know. Atomic vapor might have cheated. Imagine that evolution had an a priori minuscule probability of creating human-level intelligence. Of course the probability cannot be literally 0: even apes will type Shakespeare with some probability. Now, assuming the universe is infinite (e.g. the eternal inflation scenario), human-level intelligence still appears in an infinite number of places with probability 1. We happen to be in one of those places courtesy of the anthropic principle. In other words, there might be a complexity-theoretic barrier to creating human-level intelligence. That is, it is theoretically possible, but impossible to do with a realistic amount of computing resources in a “short” time span, similarly to solving the traveling salesman problem for some random graph with 10^14 vertices.
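To spell out the apes-typing-Shakespeare arithmetic (a standard back-of-the-envelope version): the probability that a uniformly random string of length $N$ over a $k$-letter alphabet matches a fixed text is

$$p = k^{-N},$$

astronomically small for Shakespeare-sized $N$ but strictly positive, and with infinitely many independent trials the chance of at least one success is $1 - \lim_{m \to \infty} (1-p)^m = 1$.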
The greatest feats can be done through the power of love… ;)
Or in some cases the power of lust, boredom, pity, and/or tequila.
Or money.
Yes, humans can drive, and that fact is literally remarkable. You have remarked upon it, I am remarking now upon it.
Can we drive well? “Compared to what?” would be the interesting question. By all reports, self-driving cars will be much safer than human-driven cars. By contrast, self-walking machines do not generally outperform humans or animals, and machines for image recognition or voice recognition do not outperform humans. Perhaps it is going to be harder for us to beat, or even equal, with machines what we have spent millions of years evolving, while it may be much easier for us to beat with machines things we were not evolved to do. Presumably in self-driving cars, many aspects of other-vehicle and road situational awareness are hard to beat a human at, but the overall decision-making algorithm of how to drive turns out to be easy to beat humans at, so that even early attempts are much safer than humans manage.
Note parenthetically that safer really does mean better if the cars are going the same speed as humans. Automated-car system designers will face a tradeoff as time goes on: increasing the speed of automated traffic at the expense of higher accident rates. There will be some optimum based on some implicit (or, perhaps by then we will have matured enough to make it explicit) estimate of the value of a human life.
So yeah, we can drive, but because we did not evolve to drive, we do it poorly enough that even the earliest driving machines will be much better at it than us.
Somewhat off topic, but I feel the need to point out that traffic doesn’t really work all that well when you consider the space of possible transportation systems we could realistically implement.
First Google result says that “by car insurance industry estimates, you will file a claim for a collision about once every 17.9 years”.
Poisoning, accidental falls, and car accidents each make up about a quarter of all accidental deaths. Breaking down by age, driving is also the leading cause of death for ages 18-24 (and the risk of driving does fall a bit with increasing age—but not by that much; mostly its severity is just eclipsed by other, health-related things).
So… falls are presumably related to aging, but a good way to prevent a large number of deaths among youth would be to create small-but-accident-preventing barriers to self-administration of potentially lethal substances, and to replace driving with something else.
I suspect that we’d all be much more wary of driving and painkillers (particularly for young people), if we rationally evaluated risk.
Yes, I have noticed this as well.
And also—evolution built us to deal with speeds of, say, 10 km/h; and not 100 km/h.
Also, evolution built us with a kinesiological sense to know where every part of our human body is at any moment; and not to know where a huge hunk of metal is at any moment. Yet we can park centimeters away from another car, and even drive 100 km/h only decimeters away from another car.
Selection bias. We’re using driving as an example because it turned out that humans are actually good enough at it. Lots of other things that humans aren’t good enough at simply weren’t done before automation and computers.
All this really tells us is that we’re good at some things that weren’t in our ancestral environment. Which we know already (we can do math!).
For things like building AGI, that no human has done, and that for which we don’t yet have a coherent theory or roadmap (other than ‘copy this hugely complicated black box’), we don’t know how easy or difficult they really are. We can get an outside view by comparing with other tasks that we once didn’t know how to do and then succeeded on some and failed on others despite a lot of effort. But I think there’s a lot of variation between cases and prediction is hard.
Plus, our example is specifically driving at the skill level that humans are capable of.
It feels to me like we could drive safely while a little drunk, if we stuck to 20mph and wide roads with shallow turns, and if everyone else did the same. (I haven’t driven in years, and never while drunk, so I might be wrong. Even if I’m right, other people do not do the same. Don’t do this.) If that was the normal difficulty level to drive at, we might say that humans are pretty good at driving even while drunk. But the level we normally drive at is approximately the best we can do, so when we get drunk, we can no longer do it at that level.
If we were used to a world where cars were mostly driven by computers, would we really say humans were good at it? A human compared to a computer could easily be worse than a drunk human compared to a sober human.
Or, in Malthusianism form, “Why is driving this dangerous? Because if it wasn’t, people would drive faster and it would be dangerous again.”
(I disagree with some of the Malthusianisms in that post.)
Once upon a time, dimly remembered, I heard that it had been found that sufficient practice with hand tools caused them to be mapped into the brain in the same ways body parts are. I have no citation, and I don’t know how you’d measure that, but this seems plausible based on my what-it-feels-like-on-the-inside.
That is, the hypothesis I’m proposing for this case is that the human brain is already deeply equipped for “accurately moving objects we are in control of”.
A related observation I made as a youth:
How come nobody (me included) seems to make any of the small missteps that would bring certain death or injury?
I mean, it’s just one small step to fall in front of a car or a train, off a bridge, out of a window… It’s just one wrong grab and you gulp acid, poison, the wrong medicine.
Sure, it happens sometimes (when?). But it doesn’t seem to happen to me or most people.
And the answer: there are lots of safety measures and control feedback cycles in the human brain for reducing the chance of exactly this to almost zero.
Obviously massively selected for by evolution. But the exact mechanism is nonetheless somewhat elusive.
Not stepping in front of a car, train, down a bridge or out of a window isn’t directly selected for by evolution. The fact that this works for pretty novel situations is amazing.
I agree, it is not selected directly. It is highly selected indirectly.
First off, manmade systems that we cannot adapt to get redesigned until we can operate around them safely. So bridges all have fences (if they are for walking). Windows are installed at a height that makes it hard to fall out of them.
Second, adults are evolved to protect children from natural dangers. It isn’t a big step to piggyback on that evolved tendency and to allow only human-designed systems in which adults can protect children. Roadways with slow cars have boundary areas. Roadways with fast cars are fenced off. Sumps, and mechanically or chemically unsafe things, are fenced off, walled off, locked away.
In some sense, all of these amazing things are somewhat less amazing when you consider that we have built ridiculously dangerous things which have killed lots of people, but we have simply chosen not to allow them to continue, selecting instead safer and safer modifications, or choosing to ban them if they could not be made safe.
Still amazes me. And I live up in the Pacific Northwest, where people drive in a sane fashion. But people are bat shit crazy on the NJ Turnpike, or LA freeways, on how closely they’ll tailgate at 70mph+.
Funny that this has occurred to a lot of us. I wonder if that’s another LW peculiarity, or if it’s generally widespread.
I’m occasionally still amazed that traffic works as well as it does. I must say I’m hesitant at using this example to claim that people are more capable than you might think. Driving is just something humans happen to be competent at. There are plenty of things roughly as complicated as driving a car that people aren’t surprisingly good at.
This also reminded me of something people said at the latest meetup. At least two people told me they had deliberately tried to become more scared of driving, because they had noticed they had less fear in a car than on a plane, despite planes being safer.
I don’t think it is pure chance, since it was designed in iterations around human capabilities.
Indeed. The problem of driving has been set up by car designers so that it should be easy for us to solve it. By contrast, the problem of creating a safe AGI has not been set up so that it should be easy for us to solve (if you don’t believe that there is a God who has made this the best of all possible worlds that is...). So I don’t think the analogy works. If you want to make an argument making use of human past performance, it would be better to use examples of problems (that we’ve solved) that weren’t set up by us (e.g. moon-landing, relativity theory, computers, science in general, etc).
Or nuclear weapon design. Chicago Pile-1 did work. Trinity did work. Little Boy did work. Burster-Able failed—but not catastrophically. Who knows if whatever the North Koreans cobbled together worked as intended—but it doesn’t seem to have destroyed anything it wasn’t supposed to. No-one has yet accidentally blown up a city. That’s something. Anyway, I’ll edit the post.
http://davidbrin.blogspot.com/2006/01/ritual-of-streetcorner-brins-exercise.html
I don’t know about other people, but when I am scared, my driving gets worse. I start overthinking everything. I obsess about whether that guy in the driveway is just creeping forward, or if he’s going to suddenly zip in front of me. Then I fail to notice the guy right in front of me.
I actually agree. I’m not sure what lesson to draw from the fact that humans can drive. But it’s interesting that so many of you seem to share my intuition that this is surprising or counterintuitive.
Good post—this has struck me too. By the way, this is a good example showing that social life and human behaviour in general is much more “law-like” and indeed predictable than many “anti-positivists” in the social sciences would have it. Car drivers’ behaviour is remarkably regular and predictable, even though it is, as you say, in no way trivial to drive a car.
Still, many mistakes that end up causing accidents are made, and thus I’m sure automated or semi-automated cars could decrease the number of accidents hugely.
A good point—compare with this comic.
When it breaks down, the results can be horrifying.
Yes, but he probably wouldn’t get away with it, i.e., all these things would likely result in him getting arrested or seriously injured.
How are our speed limits determined? Is it reasonable to assume that they’re set such that it’s just within our abilities to drive that fast (plus some margin, of course)?
In my experience people mostly ignore the speed limits and drive at whatever speed feels right for the circumstances. Speed limits might have a role in building people’s intuitions, though.
Do you live in a country where speed limits aren’t enforced? People here most certainly don’t ignore speed limits. That becomes expensive fast.
I live in a region of the US where they are only sort of enforced.
In certain places at least, increasing revenues from fines also seems to be a goal.
I’ve thought about the same thing. Automobiles are relatively unsafe compared to other modes of transportation, but it is amazing to me how commutes in major metro areas seem so free of major accidents on the majority of days.
However, though I’m pretty ignorant of programming, I’m not sure your analogy works.
Driving is hard. But there are lots of mechanisms in play that make it easy enough. Traffic signals and road markings, for instance. Plus, even if there is a crash, damage is limited in scope based on the nature of the situation.
From what I can gather in regard to LW’s views of theoretical AI, there are no helpful lanes or traffic lights. And bad code that leads to UFAI doesn’t just cause an isolated incident… it spawns a revolution of hyper-powerful agents with ends in direct conflict to humans.
A fender bender can become the end of existence.
Maybe humans are not safe AGI. Maybe both the idea of “safety” and the idea of “general intelligence” are ill-defined.
Humans definitely aren’t safe GI (not A, anyway).
In particular, there may be a very real danger that humans will use their intelligence to design more intelligent systems, which in turn will design more intelligent systems, etc., leading to a highly unpredictable singularity with no adequate safety measures in place.
In other words: if AGI is a real threat then so is human intelligence, since it’s human intelligence that would take the initial steps towards AGI. What’s potentially different about AGI is that it might become dangerous much faster than anything we’re used to.
(Of course there are other more obvious examples of disastrous things humans might do: nuclear war, etc.)
3. Well, this does not really check out. We do take our eyes off the road, for a second, sometimes. There are somewhat safe moments to do this and less safe ones. When several hundred meters of highway ahead are empty (the road we will travel in those seconds, plus some safety cushion) and the road is straight, it is somewhat safe. You are just exaggerating a bit. On a long boring ride or drive, the mind is not really occupied by the task; the eyes can still safely watch the road, ready to react to anything unforeseen with exactly one thing: applying the brakes.
According to traffic engineers, most accidents happen after the driver stops paying attention for two or more seconds. Basically, if you blink for ‘one one thousand’ you’re probably ok, ‘one one thousand two one thousand’ is tempting fate.
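To put rough numbers on that (my own arithmetic; the speed is an assumption, not from the engineers): at a highway speed of $100\,\mathrm{km/h} \approx 27.8\,\mathrm{m/s}$, a one-second blink covers about $28\,\mathrm{m}$ of road, and a two-second lapse about $56\,\mathrm{m}$, all of it travelled blind.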
math mistake, ignore
You seem to say that the difficult part of driving is staying in the lane. That’s by far the easiest part of driving, both for humans and computers.
I do all my driving north of the 64th parallel. It’s been all ice, snow and darkness for the past few months. That’s probably coloring my perception here.
In your original post you mentioned clutter, which is I think a better example of what’s difficult in driving: predicting the behavior of drivers and pedestrians. Even just processing the world into objects and deciding which ones might move is harder than seeing lanes in the dark, I think even for humans.
As for ice and snow, they produce more error, requiring larger distances between vehicles, but usually don’t change the basic negative feedback mechanism, a mechanism that has been implemented by machines for centuries. The big problem with them is skidding, which is a completely different problem.
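To illustrate the shape of that negative-feedback mechanism, here is a toy proportional controller. The gain and the one-line “vehicle response” are invented for illustration; real dynamics are far messier, but the correcting loop is the same.

```c
#include <stdio.h>

/* Toy negative-feedback lane keeping: at each tick, steer against
   the measured lateral error. The error shrinks geometrically. */
int main(void) {
    double offset = 2.0;                     /* metres from lane centre */
    const double gain = 0.5;                 /* assumed steering gain */
    for (int tick = 0; tick < 8; ++tick) {
        double correction = -gain * offset;  /* push opposite the error */
        offset += correction;                /* simplified vehicle response */
        printf("tick %d: offset = %.3f m\n", tick, offset);
    }
    return 0;
}
```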