Notes on Autonomous Cars
Excerpts from the literature on robotic/self-driving/autonomous cars, with a focus on legal issues; lengthy and often tedious. Some more SI work. See also Notes on Psychopathy.
Having read through all this material, my general feeling is: the near-term future (1 decade) for autonomous cars is not that great. What’s been accomplished, legally speaking, is great but more limited than most people appreciate. And there are many serious problems with penetrating the elaborate ingrown rent-seeking tangle of law & politics & insurance. I expect the mid-future (+2 decades) to look more like autonomous cars completely taking over many odd niches and applications where the user can afford to ignore those issues (eg. on private land or in warehouses or factories), with highways and regular roads continuing to see many human drivers with some level of automated assistance. However, none of these problems seem fatal and all of them seem amenable to gradual accommodation and pressure, so I am now more confident that in the long run we will see autonomous cars become the norm and human driving ever more niche (and possibly lower-class). On none of these am I sure how to formulate a precise prediction, though, since I expect lots of boundary-crossing and tertium quids. We’ll see.
0.1 Self-driving cars
The 2005 DARPA Grand Challenge, in which multiple vehicles completed the course, can be considered the first success inaugurating the modern era. The first legislation of any kind addressing autonomous cars was Nevada’s 2011 approval. Five states have passed legislation dealing with autonomous cars.
However, these laws are highly preliminary and all the analyses I can find agree that they punt on the real legal issues of liability; they permit relatively little.
0.1.1 Lobbying, Liability, and Insurance
(Warning: legal analysis quoted at length in some excerpts.)
“Toward Robotic Cars”, Thrun 2010 (pre-Google):
Junior’s behavior is governed by a finite state machine, which provides for the possibility that common traffic rules may leave a robot without a legal option as to how to proceed. When that happens, the robot will eventually invoke its general-purpose path planner to find a solution, regardless of traffic rules. [Raising serious issues of liability related to potentially making people worse-off]
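The control structure Thrun describes (a finite-state machine for rule-following, with the general-purpose planner invoked as a fallback when the rules leave no legal move) might be sketched roughly as below; the state names, helper functions, and patience threshold are all hypothetical illustrations, not Junior's actual code.

```python
# Rough sketch of the control structure Thrun describes: obey the traffic-rule
# state machine while a legal option exists, and fall back to an unconstrained
# planner when it does not. All names and logic here are hypothetical.

def plan_within_rules(world):
    """Hypothetical rule-constrained planner: returns a path or None."""
    if world.get("legal_path_exists"):
        return "follow_legal_path"
    return None  # e.g. boxed in by a stalled car on a street with a double yellow line

def plan_ignoring_rules(world):
    """Hypothetical general-purpose planner: always finds some path."""
    return "cross_double_yellow_to_clear_obstacle"

def decide(world, stuck_cycles, patience=10):
    legal = plan_within_rules(world)
    if legal is not None:
        return legal
    if stuck_cycles < patience:  # wait a while before breaking the rules
        return "wait"
    # No legal option after waiting: invoke the unconstrained planner, which is
    # exactly where the liability worry in the bracketed note arises.
    return plan_ignoring_rules(world)

print(decide({"legal_path_exists": False}, stuck_cycles=12))  # rule-breaking fallback
```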
“Google Cars Drive Themselves, in Traffic” (PDF), NYT 2010:
But the advent of autonomous vehicles poses thorny legal issues, the Google researchers acknowledged. Under current law, a human must be in control of a car at all times, but what does that mean if the human is not really paying attention as the car crosses through, say, a school zone, figuring that the robot is driving more safely than he would? And in the event of an accident, who would be liable—the person behind the wheel or the maker of the software?
“The technology is ahead of the law in many areas,” said Bernard Lu, senior staff counsel for the California Department of Motor Vehicles. “If you look at the vehicle code, there are dozens of laws pertaining to the driver of a vehicle, and they all presume to have a human being operating the vehicle.” The Google researchers said they had carefully examined California’s motor vehicle regulations and determined that because a human driver can override any error, the experimental cars are legal. Mr. Lu agreed.
“Calif. Greenlights Self-Driving Cars, But Legal Kinks Linger”:
For instance, if a self-driving car runs a red light and gets caught, who gets the ticket? “I don’t know—whoever owns the car, I would think. But we will work that out,” Gov. Brown said at the signing event for California’s bill to legalize and regulate the robotic cars. “That will be the easiest thing to work out.” Google co-founder Sergey Brin, who was also at the ceremony, jokingly said “self-driving cars don’t run red lights.” That may be true, but Bryant Walker Smith, who teaches a class at Stanford Law School this fall on the law supporting self-driving cars, says eventually one of these vehicles will get into an accident. When it does, he says, it’s not clear who will pay.
…Or is it the company that wrote the software? Or the automaker that built the car? When it came to assigning responsibility, California decided that a self-driving car would always have a human operator. Even if that operator wasn’t actually in the car, that person would be legally responsible. It sounds straightforward, but it’s not. Let’s say the operator of a self-driving car is inebriated; he or she is still legally the operator, but the car is driving itself. “That was a decision that department made—that the operator would be subject to the laws, including laws against driving while intoxicated, even if the operator wasn’t there,” Walker Smith says…Still, issues surrounding liability and who is ultimately responsible when robots take the wheel are likely to remain contentious. Already trial lawyers, insurers, automakers and software engineers are queuing up to lobby rule-makers in California’s capital.
“Google’s Driverless Car Draws Political Power: Internet Giant Hones Its Lobbying Skills in State Capitols; Giving Test Drives to Lawmakers”, WSJ, 12 October 2012:
Overall, Google spent nearly $9 million in the first half of 2012 lobbying in Washington for a wide variety of issues, including speaking to U.S. Department of Transportation officials and lawmakers about autonomous vehicle technology, according to federal records, nearing the $9.68 million it spent on lobbying in all of 2011. It is unclear how much Google has spent in total on lobbying state officials; the company doesn’t disclose such data.
…In most states, autonomous vehicles are neither prohibited nor permitted-a key reason why Google’s fleet of autonomous cars secretly drove more than 100,000 miles on the road before the company announced the initiative in fall 2010. Last month, Mr. Brin said he expects self-driving cars to be publicly available within five years.
In January 2011, Mr. Goldwater approached Ms. Dondero Loop and the Nevada assembly transportation committee about proposing a bill to direct the state’s department of motor vehicles to draft regulations around the self-driving vehicles. “We’re not saying, ‘Put this on the road,’” he said he told the lawmakers. “We’re saying, ‘This is legitimate technology,’ and we’re letting the DMV test it and certify it.” Following the Nevada bill’s passage, legislators from other states began showing interest in similar legislation. So Google repeated its original recipe and added an extra ingredient: giving lawmakers the chance to ride in one of its about a dozen self-driving cars…In California, an autonomous-vehicle bill became law last month despite opposition from the Alliance of Automobile Manufacturers, which includes 12 top auto makers such as GM, BMW and Toyota. The group had approved of the Florida bill. Dan Gage, a spokesman for the group, said the California legislation would allow companies and individuals to modify existing vehicles with self-driving technology that could be faulty, and that auto makers wouldn’t be legally protected from resulting lawsuits. “They’re not all Google, and they could convert our vehicles in a manner not intended,” Mr. Gage said. But Google helped push the bill through after spending about $140,000 over the past year to lobby legislators and California agencies, according to public records
As with California’s recently enacted law, Cheh’s [Washington D.C.] bill requires that a licensed driver be present in the driver’s seat of these vehicles. While seemingly inconsequential, this effectively outlaws one of the more promising functions of autonomous vehicle technology: allowing disabled people to enjoy the personal mobility that most people take for granted. Google highlighted this benefit when one of its driverless cars drove a legally blind man to a Taco Bell. Bizarrely, Cheh’s bill also requires that autonomous vehicles operate only on alternative fuels. While the Google Self-Driving Car may manifest itself as an eco-conscious Prius, self-driving vehicle technology has nothing to do with hybrids, plug-in electrics or vehicles fueled with natural gas. The technology does not depend on vehicle make or model, but Cheh is seeking to mandate as much. That could delay the technology’s widespread adoption for no good reason…Another flaw in Cheh’s bill is that it would impose a special tax on drivers of autonomous vehicles. Instead of paying fuel taxes, “Owners of autonomous vehicles shall pay a vehicle-miles travelled (VMT) fee of 1.875 cents per mile.” Administrative details aside, a VMT tax would require drivers to install a recording device to be periodically audited by the government. There may be good reasons to replace fuel taxes with VMT fees, but greatly restricting the use of a potentially revolutionary new technology by singling it out for a new tax system would be a mistake.
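For scale, the bill's 1.875 cents/mile rate works out as follows; the annual mileage, fuel economy, and per-gallon tax figures are assumptions made up for illustration, not taken from Cheh's bill.

```python
# Back-of-the-envelope comparison of Cheh's proposed VMT fee with a fuel tax.
# Only the 1.875 cents/mile rate comes from the bill; everything else is assumed.

vmt_rate = 0.01875           # dollars per mile (from the bill)
annual_miles = 12_000        # assumed typical annual mileage
mpg = 50                     # assumed fuel economy (a Prius-like hybrid)
fuel_tax_per_gallon = 0.235  # assumed combined per-gallon tax, purely illustrative

vmt_fee = annual_miles * vmt_rate
fuel_tax = (annual_miles / mpg) * fuel_tax_per_gallon

print(f"VMT fee:  ${vmt_fee:.2f}/year")   # $225.00
print(f"Fuel tax: ${fuel_tax:.2f}/year")  # $56.40, i.e. roughly a quarter of the VMT fee
```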
“Driverless cars are on the way. Here’s how not to regulate them.”
“How autonomous vehicle policy in California and Nevada addresses technological and non-technological liabilities”, Pinto 2012:
The State of Nevada has adopted one policy approach to dealing with these technical and policy issues. At the urging of Google, a new Nevada law directs the Nevada Department of Motor Vehicles (NDMV) to issue regulations for the testing and possible licensing of autonomous vehicles and for licensing the owners/drivers of these vehicles. There is also a similar law being proposed in California with details not covered by Nevada AB 511. This paper evaluates the strengths and weaknesses of the Nevada and California approaches
Another problem posed by the non-computer world is that human drivers frequently bend the rules by rolling through stop signs and driving above speed limits. How does a polite and law-abiding robot vehicle act in these situations? To solve this problem, the Google Car can be programmed for different driving personalities, mirroring the current conditions. On one end, it would be cautious, being more likely to yield to another car and strictly following the laws on the road. At the other end of the spectrum, the robocar would be aggressive, where it is more likely to go first at the stop sign. When going through a four-way intersection, for example, it yields to other vehicles based on road rules; but if other cars don’t reciprocate, it advances a bit to show to the other drivers its intention.
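The adjustable "personality" and the creep-forward behavior at a four-way stop described here amount to a single aggressiveness parameter feeding a simple policy; a minimal sketch, with invented thresholds and action names (nothing here is Google's actual logic):

```python
# Sketch of an aggressiveness-parameterized four-way-stop policy, as described
# in the excerpt: yield by the rules first, but if other drivers do not
# reciprocate, inch forward to signal intent. All thresholds are made up.

def four_way_stop_action(aggressiveness, have_right_of_way, others_yielding,
                         seconds_waiting):
    """aggressiveness in [0, 1]: 0 is maximally cautious, 1 maximally aggressive."""
    if have_right_of_way and others_yielding:
        return "proceed"
    if not have_right_of_way:
        return "yield"
    # We have right of way but nobody is letting us in: the more aggressive the
    # setting, the sooner we creep forward to show the other drivers our intention.
    patience = 8.0 * (1.0 - aggressiveness) + 1.0  # seconds to wait, hypothetical
    return "creep_forward" if seconds_waiting > patience else "wait"

print(four_way_stop_action(0.2, have_right_of_way=True, others_yielding=False,
                           seconds_waiting=5))  # cautious setting: "wait"
print(four_way_stop_action(0.9, have_right_of_way=True, others_yielding=False,
                           seconds_waiting=5))  # aggressive setting: "creep_forward"
```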
However, there is a time period between a problem being diagnosed and the car being fixed. In theory, one would disable the vehicle remotely and only start it back up when the problem is fixed. However in reality, this would be extremely disruptive to a person’s life as they would have to tow their vehicle to the nearest mechanic or autonomous vehicle equivalent to solve the issue. Google has not developed the technology to approach this problem, instead relying on the human driver to take control of the vehicle if there is ever a problem in their test vehicles.
[previous Lu quote about human-centric laws] …this can create particularly tricky situations such as deciding whether the police should have the right to pull over autonomous vehicles, a question yet to be answered. Even the chief counsel of the National Highway Traffic Safety Administration admits that the federal government does not have enough information to determine how to regulate driverless technologies. This can become a particularly thorny issue when the first accident occurs between an autonomous vehicle and a conventionally driven one and liability has to be assigned.
This question of liability arose during an [unpublished 11 Feb 2012] interview on the future of autonomous vehicles with Roger Noll. Although Professor Noll hasn’t read the current literature on this issue, he voiced concern over what the verdict will be in the first trial over an accident between an autonomous vehicle and a normal car. He believes that the jury will almost certainly side with the human driver regardless of the details of the case; as he eloquently put it, in his husky Utah accent and with subsequent laughter, “how are we going to defend the autonomous vehicle; can we ask it to testify for itself?” To answer Roger Noll’s question, Brad Templeton’s blog elaborates on why he believes that liability is a largely unimportant question, for two reasons. First, with new technology, there is no question that any lawsuit over any incident involving the cars will include the vendor as the defendant, so potential vendors must plan for liability. Second, Templeton makes an economic argument that the cost of accidents is borne by car buyers through higher insurance premiums. If the accidents are deemed the fault of the vehicle maker, this cost goes into the price of the car, and is paid for by the vehicle maker’s insurance or self-insurance. Instead, Templeton believes that the big question is whether the liability assigned in any lawsuit will be significantly greater than it is in ordinary collisions, because of punitive damages. In theory, robocars should drive costs down because of the reduction in collisions, and that means savings for the car buyer and for society and thus cheaper auto insurance. However, if the cost per collision is much higher even though the number of collisions drops, there is uncertainty over whether autonomous vehicles will save money for both parties.
California’s Proposition 103 dictates that any insurance policy’s price must be based on weighted factors, and the top 3 weighted factors must be: (1) driving record, (2) number of miles driven, and (3) number of years of experience. Other factors, like the type of car someone has (i.e. an autonomous vehicle), will be weighted lower. Subsequently, this law makes it very hard to get cheap insurance for a robocar.
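To see why the Proposition 103 ordering matters, here is a toy premium calculation in which the three mandated factors carry the largest weights and vehicle type is forced into a minor role; the base rate, weights, and risk scores are all hypothetical.

```python
# Illustrative premium model under Prop 103's ordering constraint: the three
# mandated factors carry the largest weights, so even a perfectly safe vehicle
# type can only earn a small discount. All numbers are invented.

BASE_PREMIUM = 1200.0  # dollars/year, hypothetical

WEIGHTS = {                      # must keep the three mandated factors on top
    "driving_record":   0.40,
    "annual_miles":     0.30,
    "years_experience": 0.20,
    "vehicle_type":     0.10,    # where an autonomous-vehicle discount would live
}

def premium(risk_scores):
    """risk_scores: factor -> score in [0, 1], where 0 is the best risk."""
    multiplier = sum(WEIGHTS[f] * risk_scores[f] for f in WEIGHTS)
    return BASE_PREMIUM * (0.5 + multiplier)  # hypothetical mapping to a price

# A flawless robocar can at most zero out the 10% vehicle-type weight:
human = premium({"driving_record": 0.3, "annual_miles": 0.5,
                 "years_experience": 0.4, "vehicle_type": 0.5})
robo = premium({"driving_record": 0.3, "annual_miles": 0.5,
                "years_experience": 0.4, "vehicle_type": 0.0})
print(f"human-driven: ${human:.0f}/year, autonomous: ${robo:.0f}/year")  # only ~$60 apart
```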
Nevada Policy: AB 511 Section 8. This short piece of legislation accomplishes the goal of setting good standards for the DMV to follow. By setting general standards (part a), insurance requirements (part b), and safety standards (part c), it sets a precedent for these areas without being too detailed, leaving the details to be decided by the DMV instead of the politicians. …part b only discusses insurance briefly, saying the state must, “Set forth requirements for the insurance that is required to test or operate an autonomous vehicle on a highway within this State.” The definitions set in the second part of Section 8 are not specific enough. Following the open-ended standards set in the earlier part of Section 8 is good for continuity, but does not technically address the problem. According to Ryan Calo, Director of Privacy and Robotics for Stanford Law School’s Center for Internet and Society (CIS), the bill’s definition of “autonomous vehicles” is unclear and circular. In the context of this legislation, autonomous driving is treated as binary, but in reality it falls along a spectrum.
Overall, AB 511 did not address the technological liabilities and barely mentioned the non-technological liabilities that must be overcome for the future success of autonomous vehicles. Since it was the first legislation ever to approach the issue of autonomous vehicles, it is understandable that the policymakers did not want to go into specifics and instead relied on future regulation to determine the details.
California Policy: SB 1298…would require the adoption of safety standards and performance requirements to ensure the safe operation and testing of “autonomous vehicles” on California public roads. The bill would allow autonomous vehicles to be operated or tested on the public roads on the condition they meet safety standards and performance requirements of the bill. SB 1298’s 66 lines of text is also considerably longer than AB 511’s 12 lines of relevant text (the entirety of AB 511 is much longer but consists of irrelevant information for the purposes of autonomous cars).
SB 1298 clearly intends to accommodate company-developed vehicles, saying in Section 2, Part B that “autonomous vehicles have been operated safely on public roads in the state in recent years by companies developing and testing this technology” and that these companies have set the benchmark for what safety standards will be necessary for future testing by others. This part of the legislation implicitly supports Google’s autonomous vehicle because it has the most extensively tested fleet of vehicles of all the companies, and nearly all of this testing has been done in California. This bill is an improvement over AB 511 in putting more control in the hands of Google to focus on developing the technology, which is a signal by the policymakers to create a climate favorable for Google’s innovation within the constraints of keeping society safe.
To avoid setting a dangerous precedent for liability in accidents, policymakers can consider protecting the car companies from frivolous and malicious lawsuits. Without such legislation, future plaintiffs will feel justified in suing Google and putting full liability on it. There are also potential free-rider effects from the economic moral hazard of putting the blame on the company that makes the technology rather than the company that manufactures the vehicle. Since we are assuming that autonomous vehicle technology will all come from a single source, Google, any accident that occurs will pin the blame primarily on Google, the common denominator, rather than on the car manufacturer…Policy that ensures the cost per accident remains close to today’s current cost will save money for both the insurer and the customer. This could potentially mean putting a cap on awards to recipients or punishments of the company to limit shocks to the industry. Overall, a policymaker can choose to create a gradual limit on the amount of liability placed on the vendor based on certain technology or scaling milestones that are met without accidents.
SB 1298 manages to cover some of the shortcomings of AB 511, such as how to improve upon the definition of an autonomous vehicle, as well as looking more towards the future by giving Google more responsibility and alleviating some of the non-technical liability by considering their product “under development”. However, both pieces of legislation fail to address the specific technical liabilities such as bugs in the code base or computer attacks, and non-technical liabilities such as insurance or accident liability.
“Can I See Your License, Registration and C.P.U.?”, Tyler Cowen; see also his “What do the laws against driverless cars look like?”:
The driverless car is illegal in all 50 states. Google, which has been at the forefront of this particular technology, is asking the Nevada legislature to relax restrictions on the cars so it can test some of them on roads there. Unfortunately, the very necessity for this lobbying is a sign of our ambivalence toward change. Ideally, politicians should be calling for accelerated safety trials and promising to pass liability caps if the cars meet acceptable standards, whether that be sooner or later. Yet no major public figure has taken up this cause.
Enabling the development of driverless cars will require squadrons of lawyers because a variety of state, local and federal laws presume that a human being is operating the automobiles on our roads. No state has anything close to a functioning system to inspect whether the computers in driverless cars are in good working order, much as we routinely test emissions and brake lights. Ordinary laws change only if legislators make those revisions a priority. Yet the mundane political issues of the day often appear quite pressing, not to mention politically safer than enabling a new product that is likely to engender controversy.
Politics, of course, is often geared toward preserving the status quo, which is highly visible, familiar in its risks, and lucrative for companies already making a profit from it. Some parts of government do foster innovation, such as Darpa, the Defense Advanced Research Projects Agency, which is part of the Defense Department. Darpa helped create the Internet and is supporting the development of the driverless car. It operates largely outside the public eye; the real problems come when its innovations start to enter everyday life and meet political resistance and disturbing press reports.
…In the meantime, transportation is one area where progress has been slow for decades. We’re still flying 747s, a plane designed in the 1960s. Many rail and bus networks have contracted. And traffic congestion is worse than ever. As I argued in a previous column, this is probably part of a broader slowdown of technological advances.
But it’s clear that in the early part of the 20th century, the original advent of the motor car was not impeded by anything like the current mélange of regulations, laws and lawsuits. Potentially major innovations need a path forward, through the current thicket of restrictions. That the debate on this issue is so quiet shows the urgency of doing something now.
Ryan Calo of the CIS argues essentially that no specific law bans autonomous cars and the threat of the human-centric laws & regulations is overblown. (See the later Russian incident.)
“SCU conference on legal issues of robocars”, Brad Templeton:
Liability: After a technology introduction where Sven Bieker of Stanford outlined the challenges he saw which put fully autonomous robocars 2 decades away, the first session was on civil liability. The short message was that based on a number of related cases from the past, it will be hard for manufacturers to avoid liability for any safety problems with their robocars, even when the systems were built to provide the highest statistical safety result, if that meant trading off one type of safety for another. In general when robocars come up as a subject of discussion in web threads, I frequently see “Who will be liable in a crash” as the first question. I think it’s a largely unimportant question for two reasons. First of all, when the technology is new, there is no question that any lawsuit over any incident involving the cars will include the vendor as the defendant, in many cases with justifiable reasons, but even if there is no easily seen reason why. So potential vendors can’t expect to not plan for liability. But most of all, the reality is that in the end, the cost of accidents is borne by car buyers. Normally, they do it by buying insurance. But if the accidents are deemed the fault of the vehicle maker, this cost goes into the price of the car, and is paid for by the vehicle maker’s insurance or self-insurance. It’s just a question of figuring out how the vehicle buyer will pay, and the market should be capable of that (though see below.) No, the big question in my mind is whether the liability assigned in any lawsuit will be significantly greater than it is in ordinary collisions where human error is at fault, because of punitive damages…Unfortunately, some liability history points to the latter scenario, though it is possible for statutes to modify this.
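Templeton's economic argument (the buyer pays the expected crash cost either as an insurance premium or folded into the sticker price, so the allocation matters less than whether punitive damages inflate the cost per crash) can be put in toy numbers; every figure below is an assumption for illustration only.

```python
# Templeton's point in toy numbers: shifting liability to the manufacturer moves
# the expected crash cost from the owner's premium into the sticker price,
# unless punitive damages inflate the cost per crash. All figures are assumed.

crashes_per_vehicle_year = 0.04      # assumed human-driver baseline
cost_per_crash = 15_000.0            # assumed average liability cost per crash
robocar_crash_reduction = 0.5        # assumed: robocars crash half as often
vehicle_lifetime_years = 12
punitive_multiplier = 5              # assumed inflation of awards against a deep pocket

# Today: the owner buys insurance against the expected cost.
owner_expected_cost = crashes_per_vehicle_year * cost_per_crash  # $600/year

# Robocar, same cost per crash: the vendor self-insures and folds it into the price.
vendor_cost_per_car = (crashes_per_vehicle_year * robocar_crash_reduction
                       * cost_per_crash * vehicle_lifetime_years)  # $3,600 per car lifetime

# Robocar where every crash draws punitive damages: the real worry in the excerpt.
vendor_cost_with_punitives = vendor_cost_per_car * punitive_multiplier  # $18,000 per car

print(owner_expected_cost, vendor_cost_per_car, vendor_cost_with_punitives)
```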
Insurance: …Because Prop 103 [specifying insurance by weighted factors, see previous] is a ballot proposition, it can’t easily be superseded by the legislature. It takes a 2⁄3 vote and a court agreeing the change matches the intent of the original ballot proposition. One would hope the courts would agree that cheaper insurance to encourage safer cars would match the voter intent, but this is a challenge.
Local and criminal laws: The session on criminal laws centered more on the traffic code (which isn’t really criminal law) and the fact it varies a lot from state to state. Indeed, any robocar that wants to operate in multiple states will have to deal with this, though fortunately there is a federal standard on traffic controls (signs and lights) to rely on. Some global standards are a concern—the Geneva convention on traffic laws requires every car has a driver who is in control of the vehicle. However, I think that governments will be able to quickly see—if they want to—that these are laws in need of updating. Some precedent in drunk driving can create problems—people have been convicted of DUI for being in their car, drunk, with the keys in their pocket, because they had clear intent to drive drunk. However, one would hope the possession of a robocar (of the sort that does not need human manual driving) would express an entirely different intent to the law.
“Definition of necessary vehicle and infrastructure systems for Automated Driving”, European Commission report 29 June 2011:
Yet another paramount aspect tightly related to automated driving at present and in the near future, and certainly related to autonomous driving in the long run, is the interpretation of the Vienna Convention. It will be shown in the report how this European legislation is commonly interpreted, how it creates the framework necessary to deploy on a large scale automated and cooperative driving systems, and what legal limitations are foreseen in making the new step toward autonomous driving. The report analyses in the same context other conventions and legislative acts, searches for gaps in the current legislation and makes an interesting link with the aviation industry, from which several lessons can be learnt.
It seems appropriate to end this summary with a few remarks not directly related to the subject of this report, but worthwhile when thinking about automated driving, cooperative driving, and autonomous driving. Progress in human history has systematically taken the path of least resistance and has often bypassed governmental rules, business models, and the obvious thinking. At the end of the 1990s nobody was anticipating the prominent role the smart phone would have in 10 years, but scientists were busy planning journeys to Mars within the same timeframe. The latter has not happened and will probably not happen soon… One lesson humanity has learnt during its existence is that historical changes that followed the path of minimum resistance later triggered fundamental changes in society. “A car is a car”, as David Strickland, administrator of the National Highway Traffic Safety Administration (NHTSA) in the U.S., said in his speech at the Telematics Update conference in Detroit, June 2011, but it may soon drive its progress along a historical path of minimum resistance.
An automated driving system needs to meet the Vienna Convention (see Section 3, aspect 2). The private sector, especially those who are in the end responsible for the performance of the vehicle, should be involved in the discussion.
The Vienna Convention on Road Traffic is an international treaty designed to facilitate international road traffic and to increase road safety by standardizing uniform traffic rules among the contracting parties. This convention was agreed upon at the United Nations Economic and Social Council’s Conference on Road Traffic (October 7, 1968 - November 8, 1968). It came into force on May 21, 1977. Not all EU countries have ratified the treaty (e.g. Ireland, Spain and the UK did not); see Figure 13. It should be noted that in 1968, animals were still used for traction of vehicles and the concept of autonomous driving was considered to be science fiction. This is important when interpreting the text of the treaty: whether to interpret it strictly, to the letter of the text, or according to what was meant at the time.
The common opinion of the expert panel is that the Vienna Convention will have only a limited effect on the successful deployment of automated driving systems due to several reasons:
OEMs already deal with the situation that some of the Advanced Driver Assistance Systems touch the Vienna Convention today. For example, they provide an on/off switch for ADAS or allow an overriding of the functions by the driver. They develop their ADAS in line with the RESPONSE Code of Practice (2009) [41] following the principle that the driver is in control and remains responsible. In addition, the OEMs have a careful marketing strategy and they do not exaggerate and do not claim that an ADAS is working in all driving situations or that there is a solution to “all” safety problems.
Automation is not black and white, automated or not automated, but much more complex, involving many design dimensions. A helpful model of automation is to consider different levels of assistance and automation that can, e.g., be organized on a 1D scale [42]. Several levels could be within the Vienna Convention, while extreme levels are outside of today’s version of the Vienna Convention. For example, one partitioning could be to have levels of automation Manual, Assisted, Semi-Automated, Highly Automated, and Fully Automated driving, see Figure 14. In highly automated driving, the automation has the technical capabilities to drive almost autonomously, but the driver is still in the loop and able to take over control when it is necessary. Fully automated driving like PRT, where the driver is not required to monitor the automation and does not have the ability to take over control, seems not to be covered by the Vienna Convention.
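The report's 1D scale (Figure 14) can be laid out schematically as below; the driver-in-the-loop and Vienna Convention flags merely restate the report's qualitative reading and are illustrative, not a legal determination.

```python
# Schematic of the report's 1D automation scale (Figure 14). The flags restate
# the report's qualitative reading (fully automated, PRT-style driving appears
# to fall outside today's Convention); they are illustrative, not legal advice.

from enum import Enum

class AutomationLevel(Enum):
    MANUAL           = (1, True,  True)   # driver does everything
    ASSISTED         = (2, True,  True)   # e.g. ACC or lane keeping, driver in control
    SEMI_AUTOMATED   = (3, True,  True)   # driver monitors and can override
    HIGHLY_AUTOMATED = (4, True,  True)   # near-autonomous, driver still in the loop
    FULLY_AUTOMATED  = (5, False, False)  # PRT-style: no monitoring, no takeover

    def __init__(self, rank, driver_in_loop, covered_by_vienna):
        self.rank = rank
        self.driver_in_loop = driver_in_loop
        self.covered_by_vienna = covered_by_vienna

for level in AutomationLevel:
    print(f"{level.name:<16} driver in loop: {level.driver_in_loop!s:<5} "
          f"within today's Vienna Convention (per report): {level.covered_by_vienna}")
```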
Criteria for deciding if the automation is still in line with the Vienna Convention could be:
the involvement of the driver in the driving task (vehicle control),
the involvement of the driver in monitoring the automation and the traffic environment,
the ability to take over control or to override the automation
The Vienna Convention already contains openings, or is variable, or can be changed.
It contains a certain variability regarding the autonomy in the means of transportation, e.g. “to control the vehicle or guide the animals”. It is obvious that some of the current technological developments were not foreseen by the authors of the Vienna Convention. Issues like platooning are not addressed. The Vienna Convention already contains in Annex 5 (chapter 4, exemptions) an opening to be investigated with appropriate legal expertise:
“For domestic purposes, Contracting Parties may grant exemptions from the provisions of this Annex in respect of: (c) Vehicles used for experiments whose purpose is to keep up with technical progress and improve road safety; (d) Vehicles of a special form or type, or which are used for particular purposes under special conditions”. - In addition, the Vienna Convention can be changed. The last change was made in 2006. A new paragraph (paragraph 6) was added to Article 8 stating that the driver should minimize any activity other than driving.
…different understandings of the term “to control” with no clear consensus [44]: 1. Control in a sense of influencing e.g. the driver controls the vehicle movements, the driver can override the automation and/or the driver can switch the automation off. 2. Control in a sense of monitoring e.g. the driver monitors the actions of the automation. Both interpretations allow the use of some form of automation in a vehicle as it can be seen in today’s cars where e.g. ACC or emergency brake assistance systems etc. are available.
The first interpretation allows automation that can be overridden by the driver or that reacts in emergency situations only when the driver cannot cope with the situation anymore. Forms of automation that cannot be overridden seem to be not in line with the first interpretation [45, p. 818]. The second interpretation is more flexible and would allow also forms of automation that cannot be overridden and are within the Vienna Convention as long as the driver monitors the automation [44]. …In the literature, some other assistance and automation functions were appraised by juridical experts. For example, [46] postulates that automatic emergency braking systems are in line with the Vienna Convention as long as they react only when a crash is unavoidable (collision mitigation). Otherwise a conflict between the driver’s intention (here, steering) and the reaction of the automation (here, braking) cannot be excluded. Albrecht [47] concludes that an Intelligent Speed Adaptation (ISA) which cannot be overridden by the driver is not in line with the Vienna Convention because it is not consistent with Article 8 and Article 13 of the Vienna Convention.
…As soon as data from the vehicle is used for V2X-communication or is stored in the vehicle itself, data protection and privacy issues become relevant. Directives and documents that need to be checked include:
Directive 95/46/EC on the protection of individuals with regard to the processing of personal data and on the free movement of such data;
Directive 2010/40/EU on the framework for the deployment of Intelligent Transport Systems in the field of road transport and for interfaces with other modes of transport;
WP 29 Working document on data protection and privacy implications in the eCall initiative and the European Data Protection Supervisor (EDPS) opinion on ITS Action Plan and Directive.
The bottleneck is that at the current stage of development the risk-related costs and benefits of viable deployment paths are unknown, combined with the fact that the deployment paths themselves are wide open because the possible deployment scenarios are not assessed and debated in a political environment. There is currently no consensus amongst stakeholders on which of the deployment scenarios proposed will eventually prevail…Changes in EU legislation might change the role of players and increase the risk for them. Any change in EU legislation will change the position of the players, and uncertainty about which direction this change (gap) would go adds to the risk. This prohibits players from having an outspoken opinion on the issue. If an update of existing legislation is considered, this should be European legislation, not national legislation. It would be better to go for world-wide harmonized legislation, if it is decided to take that path.
A useful case study for understanding the issues associated with automated driving can be found in SAFESPOT [4], which can be viewed as a parallel to automated driving functions (for more details, see Appendix I, related to aspect 3). SAFESPOT provided an in-depth analysis of the legal aspects of the service named ‘Speed Warning’, in two configurations, V2I and V2V. The analysis is performed against two fundamentally different law schemes, namely Dutch and English law, and concluded that the concept of co-operative systems raises questions and might complicate legal disputes. This is for several reasons:
There are more parties involved, all with their own responsibilities for the proper functioning of elements of a cooperative system.
Growing technical interdependencies between vehicles, and between vehicles and the infrastructure, may also lead to system failure, including scenarios that may be characterised as an unlucky combination of events (“a freak accident”) or as a failure for which the exact cause simply cannot be traced back (because of the technical complexity).
Risks that cannot be influenced by the people who suffer the consequences tend to be judged less acceptable by society and, likewise, from a legal point of view.
The in-depth analysis of SAFESPOT concluded that (potential) participants such as system producers and road managers may well be exposed to liability risks. Even if the driver of the probe vehicle could not successfully claim a defense (towards other road users), based on a failure of a system, system providers and road managers may still remain (partially) responsible through the mechanism of subrogation and right of recourse.
Current law states that the driver must be in control of his vehicle at all times. In general, EU drivers are prohibited from exhibiting dangerous behaviour while driving. The police have prosecuted drivers in the UK for drinking and/or eating, i.e. only having one hand on the steering wheel. The use of a mobile phone while driving is prohibited in many European countries; only the use of phones equipped for hands-free operation is permitted. Liability still rests firmly with the driver for the safe operation of vehicles.
New legislation may be required for automated driving. It is highly unlikely that any OEM or supplier will risk introducing an automatic driving vehicle (where responsibility for safe driving is removed from the driver) without there being a framework of new legislation which clearly sets out where their responsibility and liability begins and ends. In some ways it could be seen as similar to warranty liability: the OEM warrants certain quality and performance levels, backed by reciprocal agreements within the supply chain. Civil (and possibly criminal) liability in the case of accidents involving automated driving vehicles is a major issue that can truly delay the introduction of these technologies…Since there are no statistical records of the effects of automated driving systems, the entrepreneurship of insurers should compensate for the issue of unknown risks…The following factors are regarded as hindering an optimal role to be played by the insurance industry in promoting new safety systems through their insurance policies:
Premium-setting is based on statistical principles, resulting in a time-lag problem;
Competition/sensitive relationships with clients;
Investment costs (e.g. aftermarket installations);
Administrative costs;
Market regulation
No precedent-setting liability lawsuits involving automated systems have happened to date. The 2010 Toyota brake-by-wire malfunctions did not end in a lawsuit. A system like parking assist is technically not redundant. What would happen if the driver claimed he/she could not override the brakes? For (premium) insurance a critical mass is required, so initially all stakeholders, including governments, should potentially play a role.
The Google project has made important advances over its predecessor, consolidating down to one laser rangefinder from five and incorporating data from a broader range of sources to help the car make more informed decisions about how to respond to its external environment. “The threshold for error is minuscule,” says Thrun, who points out that regulators will likely set a much higher bar for safety with a self-driving car than for one driven by notoriously error-prone humans.
“The future of driving, Part III: hack my ride”, Lee 2008:
Of course, one reason that private investors might not want to invest in automotive technologies is the risk of excessive liability in the case of crashes. The tort system serves a valuable function by giving manufacturers a strong incentive to make safe, reliable products. But too much tort liability can have the perverse consequence of discouraging the introduction of even relatively safe products into the marketplace. Templeton tells Ars that the aviation industry once faced that problem. At one point, “all of the general aviation manufacturers stopped making planes because they couldn’t handle the liability. They were being found slightly liable in every plane crash, and it started to cost them more than the cost of manufacturing the plane.” Airplane manufacturers eventually convinced Congress to place limits on their liability. At the moment, crashes tend to lead to lawsuits against human drivers, who rarely have deep pockets. Unless there is evidence that a mechanical defect caused the crash, car manufacturers tend not to be the target of most accident-related lawsuits. That would change if cars were driven by software. And because car manufacturers have much deeper pockets than individual drivers do, plaintiffs are likely to seek much larger damages than they would against human drivers. That could lead to the perverse result that even safer self-driving cars would be more expensive to insure than human drivers. Since car manufacturers, rather than drivers, would be the first ones sued in the event of an accident, car companies are likely to protect themselves by buying their own insurance. And if insurance premiums get too high, they may take the route the aviation industry did and seek limits on liability. An added benefit for consumers is that most would never have to worry about auto insurance. Cars would come preinsured for the life of the vehicle (or at least the life of the warranty)…Self-driving vehicles will sit at the intersection of two industries that are currently subject to very different regulatory regimes. The automobile industry is heavily regulated, while the software industry is largely not regulated at all. The most fundamental decision regulators will need to make is whether one of these existing regulatory regimes will be suitable for self-driving technologies, or whether an entirely new regulatory framework will be needed to accommodate them.
http://www.917wy.com/topicpie/2008/11/future-of-driving-part-3/2
It’s inevitable that at some point, a self-driving vehicle will be involved in a fatal crash which generates worldwide publicity. Unfortunately, even if self-driving vehicles have amassed an overall safety record that’s superior to that of human drivers, the first crash is likely to prompt calls for drastic restrictions on the use of self-driving technologies. It will therefore be important for business leaders and elected officials to lay the groundwork by both educating the public about the benefits of self-driving technologies and managing expectations so that the public isn’t too surprised when crashes happen. Of course, if the first self-driving cars turn out to be significantly less safe than the average human driver, then they should be pulled off the streets and re-tooled. But this seems unlikely to happen. A company that introduced self-driving technology into the marketplace before it was ready would not only have trouble convincing regulators that its cars are safe, but it would be risking ruinous lawsuits, as well. The far greater danger is that the combination of liability fears and red tape will cause the United States to lose the initiative in self-driving technologies. Countries such as China, India, and Singapore that have more autocratic regimes or less-developed economies may seize the initiative and introduce self-driving cars while American policymakers are still debating how to regulate them. Eventually, the specter of other countries using technologies that aren’t available in the United States will spur American politicians into action, but only after several thousand Americans lose their lives unnecessarily at the hands of human drivers.
…One likely area of dispute is whether people will be allowed to modify the software on their own cars. The United States has a long tradition of people tinkering with both their cars and their computers. No doubt, there will be many people who are interested in modifying the software on their self-driving cars. But there is likely to be significant pressure for legislation criminalizing unauthorized tinkering with self-driving car software. Both car manufacturers and (as we’ll see shortly) the law enforcement community are likely to be in favor of criminalizing the modification of car software. And they’ll have a plausible safety argument: buggy car software would be dangerous not only to the car owner but to others on the road. The obvious analogy is to the DMCA, which criminalized unauthorized tinkering with copy protection schemes. But there are also important differences. One is that car manufacturers will be much more motivated to prevent tinkering than Apple or Microsoft are. If manufacturers are liable for the damage done by their vehicles, then tinkering not only endangers lives, but their bottom lines as well. It’s unlikely that Apple would ever sue people caught jailbreaking their iPhones. But car manufacturers probably will contractually prohibit tinkering and then sue those caught doing it for breach of contract.
http://www.917wy.com/topicpie/2008/11/future-of-driving-part-3/3
The more stalwart advocate of locked-down cars is likely to be the government, because self-driving car software promises to be a fantastic tool for social control. Consider, for example, how useful locked-down cars could be to law enforcement. Rather than physically driving to a suspect’s house, knocking on his door (or not), and forcibly restraining, handcuffing, and escorting a suspect to the station, police will be able to simply seize a suspect’s self-driving car remotely and order it to drive to the nearest police station. And that’s just the beginning. Locked-down car software could be used to enforce traffic laws, to track and log peoples’ movements for later review by law enforcement, to enforce curfews, to clear the way for emergency vehicles, and dozens of other purposes. Some of these functions are innocuous. Others will be very controversial. But all of them depend on restricting user control over their own vehicles. If users were free to swap in custom software, they might disable the government’s “back door” and re-program it to ignore government requirements. So the government is likely to push hard for laws mandating that only government-approved software run self-driving cars.
…It’s too early to say exactly what the car-related civil liberties fights will be about, or how they will be resolved. But one thing we can say for certain is that the technical decisions made by today’s computer scientists will be important for setting the stage for those battles. Advocates for online free speech and anonymity have been helped tremendously by the fact that the Internet was designed with an open, decentralized architecture. The self-driving cars of the future are likely to be built on top of software tools that are being developed in today’s academic labs. By thinking carefully about the ways these systems are designed, today’s computer scientists can give tomorrow’s civil liberties their best shot at preserving automotive freedom.
http://www.917wy.com/topicpie/2008/11/future-of-driving-part-3/4
In our interview with him, Congressman Adam Schiff described the public’s perception of autonomous driving technologies as a reflection of his own reaction to the idea: one that is a mixture of both fascination and skepticism. Schiff explained that the public’s fascination comes from amazement at how advanced this technology already has become, plus with Google’s sponsorship and endorsement it becomes even more alluring.
Skepticism of autonomous vehicle technologies comes from a missing element of trust. According to Clifford Nass, a professor of communications and sociology at Stanford University, this trust is an aspect of public opinion that must be earned through demonstration more so than through use. When people see a technology in action, they will begin to trust it. Professor Nass specializes in studying the way in which human beings relate to technology, and he has published several books on the topic including The Man Who Lied to His Laptop: What Machines Teach Us About Human Relationships. In our interview with him, Professor Nass explained that societal comfort with technology is gained through experience, and acceptance occurs when people have seen a technology work enough times collectively. He also pointed out that it took a long time for people to develop trust in air transportation, something that we almost take for granted now. It is certainly not the case that autonomous cars need to be equivalent in safety to plane flight before the public would adopt them. However, as Noel du Toit pointed out, we have a higher expectation for autonomous cars than we do for ourselves. Simply put, if we are willing to relinquish the “control” over our vehicles to an autonomous power, it will likely have to be under the condition that the technology drives more adeptly than we ever possibly could. Otherwise, there will simply be no trusting it. Interestingly, du Toit brought up a recent botched safety demonstration by Volvo in May of 2010. In the demonstration, Volvo showcased to the press how its emergency braking system works as part of an “adaptive cruise control” system. These systems allow a driver to set both a top speed and a following distance, which the vehicle then automatically maintains. As a consequence, if the preceding vehicle stops short, the system acts as the foundation for an emergency-braking maneuver. However, in Volvo’s demonstration the car smashed directly into a trailer. Even though the system worked fine in several cases during the day’s worth of demonstrations, video of that one mishap went viral and did little to help the public gain trust in the technology.
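The adaptive-cruise-control behavior described here (hold a set speed, keep a set following gap, and brake hard if the lead vehicle stops short) reduces to a simple decision rule; a minimal sketch with invented thresholds, not Volvo's actual implementation:

```python
# Sketch of the adaptive-cruise-control logic described in the excerpt: hold a
# set speed, keep a set following gap, and brake hard if the lead vehicle stops
# short. Thresholds are invented; this is not Volvo's implementation.

def acc_command(own_speed, set_speed, gap, set_gap, closing_speed):
    """Longitudinal command given the gap to the lead vehicle.
    Speeds in m/s, gaps in m; closing_speed > 0 means the gap is shrinking."""
    time_to_collision = gap / closing_speed if closing_speed > 0 else float("inf")
    if time_to_collision < 2.0:   # lead vehicle stopped short: emergency braking
        return "emergency_brake"
    if gap < set_gap:             # too close: ease off to reopen the gap
        return "decelerate"
    if own_speed < set_speed:     # gap fine and below the set speed: speed up
        return "accelerate"
    return "hold"

print(acc_command(own_speed=25, set_speed=30, gap=10, set_gap=40, closing_speed=12))  # emergency_brake
print(acc_command(own_speed=25, set_speed=30, gap=60, set_gap=40, closing_speed=0))   # accelerate
```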
Calo pointed out that future issues related to autonomous vehicles would be approached from a standpoint of “negative liabilities”, meaning that we can assume something is legal unless there exist explicit laws against it. This discussion also led to the concept of what a driverless car would look like to bystanders, and the kind of panic that might garner. A real-life example of this occurred in Moscow during the VisLab van trek to Shanghai. In this case, an autonomous electric van was stopped by Russian authorities due to its apparent lack of a driver behind the wheel. Thankfully, engineers present were able to convince the Russian officer who stopped the vehicle not to issue a ticket. The above [Nevadan] legislation fits in well with the information that we collected from Congressman Schiff about potential federal involvement in autonomous vehicle technology. Basically, Schiff relayed the idea that the strong governmental role expected for this technology would come in the form of regulating safety. Furthermore, he called attention to hefty governmental requirements for crash testing that every new vehicle must meet before it is allowed on the road.
In autonomous driving, liability concerns can be inferred through a couple of examples. In one example, Noel du Toit described DARPA’s use of hired stunt drivers to share the testing grounds with driverless vehicle entries in the 2007 Urban Challenge. This behavior clearly illustrates the level of precaution that the DARPA officials felt it necessary to take. In another example, Dmitri Dolgov expounded on how Google’s cars are never driving by themselves; whenever they are operated on public roads, there are at least two well-trained operators in the car. Dolgov went on to say that these operators “are in control at all times”, which helps illustrate Google’s position: they are not taking any chances when it comes to liabilities. Kent Kresa, former CEO of Northrop Grumman and interim chairman of GM in 2009, was also concerned about the liability issues presented by autonomous vehicles. Kresa felt that a future with driverless cars piloting the streets was somewhat unimaginable at present, especially when one considers the possibility of a pedestrian getting hit. In the case of such a collision it is still very unclear who would be at fault. Whether or not the company that made the vehicle would be responsible is at present unknown.
A conversation we had with Bruce Gillman, the public information officer for the Los Angeles Department of Transportation (DOT), revealed that the department is very busy putting out many other fires. Gillman noted that DOT is focused on getting people out of their cars and onto bikes or into buses. Thus, autonomous vehicles are not on their radar. Moreover, Gillman was adamant that DOT would wait until autonomous vehicles were being manufactured commercially before addressing any issues concerning them. His viewpoint certainly reinforces the idea that supportive infrastructure updates coming from the city government level would be unlikely. No matter what adoption pathway is used, federal government financial support could come in the form of incentives and subsidies like those seen during the initial rollout of hybrid vehicles. However, Brian Thomas explained that this would only be possible if the federal government was willing to do a cost-benefit valuation for the mainstream introduction of autonomous vehicles.
http://www.pickar.caltech.edu/e103/Final%20Exams/Autonomous%20Vehicles%20for%20Personal%20Transport.pdf [shades of Amara’s law: we always overestimate in the short run & underestimate in the long run]
Car manufacturers might be held liable for a larger share of the accidents-a responsibility they are certain to resist. (A legal analysis by Nidhi Kalra and her colleagues at the RAND Corporation suggests this problem is not insuperable.) –“Leave the Driving to It”, Brian Hayes American Scientist 2011
The RAND report: “Liability and Regulation of Autonomous Vehicle Technologies”, Kalra et al 2009:
In this work, we first evaluate how the existing liability regime would likely assign responsibility in crashes involving autonomous vehicle technologies. We identify the controlling legal principles for crashes involving these technologies and examine the implications for their further development and adoption. We anticipate that consumer education will play an important role in reducing consumer overreliance on nascent autonomous vehicle technologies and minimizing liability risk. We also discuss the possibility that the existing liability regime will slow the adoption of these socially desirable technologies because they are likely to increase liability for manufacturers while reducing liability for drivers. Finally, we discuss the possibility of federal preemption of state tort suits if the U.S. Department of Transportation (US DOT) promulgates regulations and some of the implications of eliminating state tort liability. Second, we review the existing literature on the regulatory environment for autonomous vehicle technologies. To date, there are no government regulations for these technologies, but work is being done to develop initial industry standards.
…Additionally, for some systems, the driver is expected to intervene when the system cannot control the vehicle completely. For example, if a very rapid stop is required, ACC may depend on the driver to provide braking beyond its own capabilities. ACC also does not respond to driving hazards, such as debris on the road or potholes-the driver is expected to intervene. Simultaneously, research suggests that drivers using these conveniences often become complacent and slow to intervene when necessary; this behavioral adaptation means drivers are less responsive and responsible than if they were fully in control (Rudin-Brown and Parker, 2004). Does such evidence suggest that manufacturers may be responsible for monitoring driver behavior as well as vehicle behavior? Some manufacturers have already taken a step toward ensuring that the driver assumes responsibility and is attentive, by requiring the driver to periodically depress a button or by monitoring the driver by sensing eye movements and grip on the steering wheel. As discussed later, litigation may occur around the issue of driver monitoring and the danger of the driver relying on the technology for something that it is not designed to accomplish.
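The driver-monitoring measures the report mentions (periodic button presses, eye tracking, steering-wheel grip) amount to an escalation policy on driver attention; a minimal sketch with hypothetical timeouts, not any manufacturer's actual design:

```python
# Sketch of the driver-attention monitoring the RAND report mentions (periodic
# button presses, eye tracking, steering-wheel grip). The timeouts and the
# escalation policy are hypothetical, not any manufacturer's actual design.

def attention_action(seconds_since_confirmation, eyes_on_road, hands_on_wheel):
    """Escalate if the driver has not recently confirmed they are supervising."""
    if eyes_on_road and hands_on_wheel:
        return "ok"
    if seconds_since_confirmation < 10:
        return "ok"
    if seconds_since_confirmation < 20:
        return "audible_warning"        # prompt the driver to re-engage
    return "disengage_assist_and_slow"  # treat the driver as over-relying on the system

print(attention_action(15, eyes_on_road=False, hands_on_wheel=True))   # audible_warning
print(attention_action(30, eyes_on_road=False, hands_on_wheel=False))  # disengage_assist_and_slow
```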
…Ayers (1994) surveyed a range of emerging autonomous vehicle technologies and automated highways, evaluated the likelihood of a shift in liability occurring, discussed the appropriateness of government intervention, and highlighted the most-promising interventions for different technologies. Ayers found that collision warning and collision-avoidance systems “are likely to generate a host of negligence suits against auto manufacturers” and that liability disclaimers and federal regulations may be the most effective methods of dealing with the liability concerns (p. 21). The report was written before many of these technologies appeared on the market, and Ayers further speculated that “the liability for almost all accidents in cars equipped with collision-avoidance systems would conceivably fall on the manufacturer” (p. 22), which could “delay or even prevent the deployment of collision warning systems that are cost-effective in terms of accident reduction” (p. 25). Syverud (1992) examines the legal cases stemming from the introduction of air bags, antilock brakes, cruise control, and cellular telephones to provide some general lessons for the liability concerns for autonomous vehicle technologies. In another report, Syverud (1993) examines the legal barriers to a wide range of IVHSs and finds that liability poses a significant barrier particularly to autonomous vehicle technologies that take control of the vehicle. In this work, Syverud’s interviews with manufacturers reveal that liability concerns had already adversely affected research and development in these technologies in several companies. One interviewee is quoted as saying that “IVHS will essentially remain ‘information technology and a few pie-in-the-sky pork barrel control technology demonstrations, at least in this country, until you lawyers do something about products liability law’” (1993, p. 25).
…While the victims in these circumstances could presumably sue the vehicle manufacturer, products liability lawsuits are more expensive to bring and take more time to resolve than run-of-the-mill automobile-crash litigation. This shift in responsibility from the driver to the manufacturer may make no-fault automobile-insurance regimes more attractive. They are designed to provide compensation to victims relatively quickly, and they do not depend upon the identification of an “at-fault” party.
…Suppose that autonomous vehicle technologies are remarkably effective at virtually eliminating minor crashes caused by human error. But it may be that the comparatively few crashes that do occur usually result in very serious injuries or fatalities (e.g., because autonomous vehicles are operating at much higher speeds or densities). This change in the distribution of crashes may affect the economics of insuring against them. Actuarially, it is much easier for an insurance company to calculate the expected costs of somewhat common small crashes than of rarer, much larger events. This may limit the downward trend in automobile-insurance costs that we would otherwise expect.
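[Aside: a quick Monte Carlo sketch of the actuarial point above, with entirely made-up numbers. Holding the expected loss per policy fixed, a book dominated by rare-but-severe crashes gives the insurer a much noisier estimate of its aggregate losses than one dominated by frequent-but-minor crashes, which is the pricing difficulty the excerpt gestures at.]

```python
# Toy simulation (invented numbers): same expected loss per policy,
# but rare/large losses are much harder to estimate than common/small ones.
import random
import statistics

random.seed(0)

def simulate_book(n_policies, crash_prob, crash_cost, n_years=300):
    """Average loss per policy for each simulated year."""
    yearly = []
    for _ in range(n_years):
        crashes = sum(random.random() < crash_prob for _ in range(n_policies))
        yearly.append(crashes * crash_cost / n_policies)
    return yearly

# Both regimes have the same expected loss of $500 per policy per year.
common_small = simulate_book(10_000, crash_prob=0.10, crash_cost=5_000)
rare_large = simulate_book(10_000, crash_prob=0.001, crash_cost=500_000)

for name, losses in [("common/small crashes", common_small),
                     ("rare/large crashes", rare_large)]:
    mean = statistics.mean(losses)
    rel_spread = statistics.stdev(losses) / mean
    print(f"{name}: mean ~ ${mean:,.0f}/policy, relative spread ~ {rel_spread:.0%}")
```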
…Suppose that most cars brake automatically when they sense a pedestrian in their path. As more cars with this feature come to be on the road, pedestrians may expect that cars will stop, in the same way that people stick their limbs in elevator doors confident that the door will automatically reopen. The general level of pedestrian care may decline as people become accustomed to this common safety feature. But if there were a few models of cars that did not stop in the same way, a new category of crashes could emerge. In this case, should pedestrians who wrongly assume that a car would automatically stop and are then injured be able to recover? To allow recovery in this instance would seem to undermine incentives for pedestrians to take efficient care. On the other hand, allowing the injured pedestrian to recover may encourage the universal adoption of this safety feature. Since negligence is defined by unreasonableness, the evolving set of shared assumptions about the operation of the roadways (what counts as “reasonable”) will determine liability. Fourth, we think that it is not likely that operators of partially or fully autonomous vehicles will be found strictly liable on the theory that driving such vehicles is an ultrahazardous activity. As explained earlier, these technologies will be introduced incrementally and will initially serve merely to aid the driver rather than take full control of the vehicle. This will give the public and courts time to become familiar with the capabilities and limits of the technology. As a result, it seems unlikely that courts will consider its gradual introduction and use to be ultrahazardous. On the other hand, this would not be true if a person attempted to operate a car fully autonomously before the technology adequately matured. Suppose, for example, that a home hobbyist put together his own autonomous vehicle and attempted to operate it on public roads. Victims of any crashes that resulted may well be successful in convincing a court to find the operator strictly liable on the grounds that such activity was ultrahazardous.
…Product-liability law can be divided into theories of liability and kinds of defect. Theories of liability include negligence, misrepresentation, warranty, and strict liability.22 Types of defect include manufacturing defects, design defects, and warning defects. A product-liability lawsuit will involve one or more theories of manufacturer liability attached to a specific allegation of a type of defect. In practice, the legal tests for the theories of liability often overlap and, depending on the jurisdiction, may be identical. … While it is difficult to generalize, automobile (and subsystem) manufacturers may fare well under a negligence standard that uses a cost-benefit analysis that includes crashes avoided from the use of autonomous vehicle technologies. Automakers can argue that the overall benefits from the use of a particular technology outweigh the risks. The number of crashes avoided by the use of these technologies is probably large. …Unfortunately, the socially optimal liability rule is unclear. Permitting the defendant to include the long-run benefits in the cost-benefit analysis may encourage the adoption of technology that can indeed save many lives. On the other hand, it may shield the manufacturer from liability for shorter-run decisions that were inefficiently dangerous. Suppose, for example, that a crash-prevention system operates successfully 70% of the time but that, with additional time and work, it could have been designed to operate successfully 90% of the time. Then suppose that a victim is injured in a crash that would have been prevented had the system worked 90% of the time. Assume that the adoption of the 70-percent technology is socially desirable but the adoption of the 90-percent technology would be even more socially desirable. How should the cost-benefit analysis be conducted? Is the manufacturer permitted to cite the 70% of crashes that were prevented in arguing for the benefits of the technology? Or should the cost-benefit analysis focus on the manufacturer’s failure to design the product to function at 90-percent effectiveness? If the latter, the manufacturer might not employ the technology, thereby leading to many preventable crashes. In calculating the marginal cost of the 90-percent technology, should the manufacturer be able to count the lives lost in the delay in implementation as compared to possible release of the 70-percent technology? …Tortious misrepresentation may play a role in litigation involving crashes that result from autonomous vehicle technologies. If advertising overpromises the benefits of these technologies, consumers may misuse them. Consider the following hypothetical scenario. Suppose that an automaker touts the “autopilot-like” features of its ACC and lane-keeping function. In fact, the technologies are intended to be used by an alert driver supervising their operation. After activating the ACC and lane-keeping function, a consumer assumes that the car is in control and falls asleep. Due to road resurfacing, the lane-keeping function fails, and the automobile leaves the roadway and crashes into a tree. The consumer then sues the automaker for tortious misrepresentation based on the advertising that suggested that the car was able to control itself.
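[Aside: a minimal numerical sketch, with invented figures, of how the two cost-benefit framings in the 70%/90% hypothetical above can diverge: judged against no system at all, the 70%-effective design looks clearly worthwhile, while judged against the feasible 90%-effective design, the very same choice can look negligent under a Hand-formula-style comparison of marginal costs and benefits.]

```python
# Invented numbers illustrating the two framings of the 70%/90% hypothetical.
CRASHES_WITHOUT_SYSTEM = 1_000   # hypothetical crashes per year across the fleet
COST_PER_CRASH = 100_000         # hypothetical average cost per crash, $
EXTRA_DESIGN_COST = 5_000_000    # hypothetical annual cost of improving 70% -> 90%

prevented_70 = 0.70 * CRASHES_WITHOUT_SYSTEM
prevented_90 = 0.90 * CRASHES_WITHOUT_SYSTEM

# Framing A: credit the manufacturer with everything the 70% system prevents.
benefit_vs_nothing = prevented_70 * COST_PER_CRASH
print(f"vs. no system: ${benefit_vs_nothing:,.0f}/year in avoided losses")

# Framing B: focus on the marginal crashes the 90% design would also have prevented.
marginal_benefit = (prevented_90 - prevented_70) * COST_PER_CRASH
verdict = "looks negligent" if EXTRA_DESIGN_COST < marginal_benefit else "looks non-negligent"
print(f"vs. 90% design: skipping it saves ${EXTRA_DESIGN_COST:,.0f} in design cost "
      f"but forgoes ${marginal_benefit:,.0f} in avoided crashes -> {verdict}")
```

With these particular made-up numbers, the same design decision passes one test and fails the other, which is exactly the ambiguity the excerpt worries about.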
…Finally, it is also possible that auto manufacturers will be sued for failing to incorporate autonomous vehicle technologies in their vehicles. While absence of available safety technology is a common basis for design-defect lawsuits (e.g., Camacho v. Honda Motor Co., 741 P.2d 1240, 1987, overturning summary dismissal of suit alleging that Honda could easily have added crash bars to its motorcycles, which would have prevented the plaintiff’s leg injuries), this theory has met with little success in the automotive field because manufacturers have successfully argued that state tort remedies were preempted by federal regulation (Geier v. American Honda Motor Co., 529 U.S. 861, 2000, finding that the plaintiff’s claim that the manufacturer was negligent for failing to include air bags was implicitly preempted by the National Traffic and Motor Vehicle Safety Act). We discuss preemption and the relationship between regulation and tort in Section 4.3.
…Preemption has arisen in the automotive context in litigation over a manufacturer’s failure to install air bags. In Geier v. American Honda Motor Co. (2000), the U.S. Supreme Court found that state tort litigation over a manufacturer’s failure to install air bags was preempted by the National Traffic and Motor Vehicle Safety Act (Pub. L. No. 89-563). More specifically, the Court found that the Federal Motor Vehicle Safety Standard (FMVSS) 208, promulgated by the US DOT, required manufacturers to equip some but not all of their 1987 vehicle-year vehicles with passive restraints. Because the plaintiffs’ theory that the defendants were negligent under state tort law for failing to include air bags was inconsistent with the objectives of this regulation (FMVSS 208), the Court held that the state lawsuits were preempted. Presently, there has been very little regulation promulgated by the US DOT with respect to autonomous vehicle technologies. Should the US DOT promulgate such regulation, it is likely that state tort law claims that were found to be inconsistent with the objective of the regulation would be held to be preempted under the analysis used in Geier. Substantial litigation might be expected as to whether particular state-law claims are, in fact, inconsistent with the objectives of the regulation. Resolution of those claims will depend on the specific state tort law claims, the specific regulation, and the court’s analysis of whether they are “inconsistent.” …Our analysis necessarily raises a more general question: Why should we be concerned about liability issues raised by a new technology? The answer is the same as for why we care about tort law at all: that a tort regime must balance economic incentives, victim compensation, and corrective justice. Any new technology has the potential to change the sets of risks, benefits, and expectations that tort law must reconcile. …Congress could consider creating a comprehensive regulatory regime to govern the use of these technologies. If it does so, it should also consider preempting inconsistent state-court tort remedies. This may minimize the number of inconsistent legal regimes that manufacturers face and simplify and speed the introduction of this technology. While federal preemption has important disadvantages, it might speed the development and utilization of this technology and should be considered, if accompanied by a comprehensive federal regulatory regime.
…This tension produced “a standoff between airbag proponents and the automakers that resulted in contentious debates, several court cases, and very few airbags” (Wetmore, 2004, p. 391). In 1984, the US DOT passed a ruling requiring vehicles manufactured after 1990 to be equipped with some type of passive restraint system (e.g., air bags or automatic seat belts) (Wetmore, 2004); in 1991, this regulation was amended to require air bags in particular in all automobiles by 1999 (Pub. L. No. 102-240). The mandatory performance standards in the FMVSS further required air bags to protect an unbelted adult male passenger in a head-on, 30 mph crash. Additionally, by 1990, the situation had changed dramatically, and air bags were being installed in millions of cars. Wetmore attributes this development to three factors: First, technology had advanced to enable air-bag deployment with high reliability; second, public attitude shifted, and safety features became important factors for consumers; and, third, air bags were no longer being promoted as replacements but as supplements to seat belts, which resulted in a sharing of responsibility between manufacturers and passengers and lessened manufacturers’ potential liability (Wetmore, 2004). While air bags have certainly saved many lives, they have not lived up to original expectations: In 1977, NHTSA estimated that air bags would save on the order of 9,000 lives per year and based its regulations on these expectations (Thompson, Segui-Gomez, and Graham, 2002). Today, by contrast, NHTSA calculates that air bags saved 8,369 lives in the 14 years between 1987 and 2001 (Glassbrenner, undated). Simultaneously, however, it has become evident that air bags pose a risk to many passengers, particularly smaller passengers, such as women of small stature, the elderly, and children. NHTSA (2008a) determined that 291 deaths were caused by air bags between 1990 and July 2008, primarily due to the extreme force that is necessary to meet the performance standard of protecting the unbelted adult male passenger. Houston and Richardson (2000) describe the strong reaction to these losses and a backlash against air bags, despite their benefits. The unintended consequences of air bags have led to technology developments and changes to standards and regulations. Between 1997 and 2000, NHTSA developed a number of interim solutions designed to reduce the risks of air bags, including on-off switches and deployment with less force (Ho, 2006). Simultaneously, safer air bags, called advanced air bags, were developed that deploy with a force tailored to the occupant by taking into account the seat position, belt usage, occupant weight, and other factors. In 2000, NHTSA mandated that the introduction of these advanced air bags begin in 2003 and that, by 2006, every new passenger vehicle would include these safety measures (NHTSA, 2000). What lessons does this experience offer for regulation of autonomous vehicle technologies? We suggest that modesty and flexibility are necessary. The early air-bag regulators envisioned air bags as being a substitute for seat belts because the rates of seat-belt usage were so low and appeared intractable. Few anticipated that seat-belt usage would rise as much over time as it has and that air bags would eventually be used primarily as a supplement rather than a substitute for seat belts. Similarly unexpected developments are likely to arise in the context of autonomous vehicle technologies. 
In 2006, for example, Honda introduced its Accord model in the UK with a combined lane-keeping and ACC system that allows the vehicle to drive itself under the driver’s watch; this combination of features has yet to be introduced in the United States (Miller, 2006). Ho (2006, p. 27) observes a general trend that “the U.S. market trails Europe, and the European market trails Japan by 2 to 3 years.” What is the extent of these differences? What aspects of the liability and regulatory rules in those countries have enabled accelerated deployment? What other factors are at play (e.g., differences in consumers’ sensitivity to price)?
“New Technology—Old Law: Autonomous Vehicles and California’s Insurance Framework”, Peterson 2012:
This Article will address this issue and propose ways in which auto insurance might change to accommodate the use of AVs. Part I briefly reviews the background of insurance regulation nationally and in California. Part II discusses general insurance and liability issues related to AVs. Part III discusses some challenges that insurers and regulators may face when setting rates for AVs, both generally and under California’s more idiosyncratic regulatory structure. Part IV discusses challenges faced by California insurers who may want to reduce rates in a timely way when technological improvements rapidly reduce risk.
…When working within the context of a file-and-use or use-and-file environment, AVs will present only modest challenges to an insurer that wants to write these policies. The main challenge will arise from the fact that the policy must be rated for a new technology that may have an inadequate base of experience for an actuary to estimate future losses.21 “Prior approval” states, like California, require that automobile rates be approved prior to their use in the marketplace.22 These states rely more on regulation than on competition to modulate insurance rates.23 In California, automobile insurance rates are approved in a two-step process. The first step is the creation of a “rate plan.”24 The rate plan considers the insurer’s entire book of business in the relative line of insurance and asks the question: How much total premium must the insurer collect in order to cover the projected risks, overhead and permitted profit for that line?25 The insurer then creates a “class plan.” The class plan asks the question: How should different policyholders’ premiums be adjusted up or down based on the risks presented by different groups or classes of policyholders?26 Among other factors, the Department of Insurance requires that the rating factors comply with California law and be justified by the loss experience for the group.27 Rating a new technology with an unproven track record may include a considerable amount of guesswork. …California is the largest insurance market in the United States, and it is the sixth largest among the countries of the world.28 Cars are culture in this most populous state. There are far more insured automobiles in California than any other state.29
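[Aside: a toy illustration, with entirely invented figures and class names, of the two-step rate-plan/class-plan structure described above: the rate plan fixes the total premium for the line, and the class plan then allocates it across policyholder classes via relative rating factors. The difficulty the excerpt describes is that, for a new technology like AVs, the loss experience needed to justify those class-plan factors does not yet exist.]

```python
# Invented figures illustrating the two-step rate-plan / class-plan process.
# Step 1 (rate plan): total premium needed for the whole line of insurance.
projected_losses = 80_000_000   # $
overhead = 15_000_000
permitted_profit = 5_000_000
total_premium = projected_losses + overhead + permitted_profit

# Step 2 (class plan): allocate that total across classes via relative rating factors.
classes = {
    "low-risk":  {"factor": 0.8, "policies": 600_000},
    "average":   {"factor": 1.0, "policies": 300_000},
    "high-risk": {"factor": 1.6, "policies": 100_000},
}
weighted_exposure = sum(c["factor"] * c["policies"] for c in classes.values())
base_rate = total_premium / weighted_exposure

for name, c in classes.items():
    print(f"{name}: ${base_rate * c['factor']:,.0f} per policy")
```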
…Although adopted by the barest majority, [California’s] Proposition 103 [see previous discussion of its 3-part requirement for rating insurance premiums] may be amended by the legislature only by a two-thirds vote, and then only if the legislation “further[s] [the] purposes” of Proposition 103.68 Thus, Proposition 103 and the regulations adopted by the Department of Insurance are the matrix in which most (but not all) insurance is sold and regulated in California.69 …The most sensible approach to this dilemma, at least with respect to AVs, would be to abolish or substantially re-order the three mandatory rating factors. However, this is more easily said than done. As noted above, amending Proposition 103 requires a two-thirds vote of the legislature.160 Moreover, section 8(b) of the Proposition provides: “The provisions of this act shall not be amended by the Legislature except to further its purposes.”161 Both of these requirements can be formidable hurdles. Persistency discounts serve as an example. Most are aware that their insurer discounts their rates if they have been with the insurer for a period of time.162 This is called the “persistency discount.” The discount is usually justified on the basis that persistency saves the insurer the producing expenses associated with finding a new insured. If one wants to change insurers, Proposition 103 does not permit the subsequent insurer to match the persistency discount offered by the insured’s current insurer.163 Thus, the second insurer could not compete by offering the same discount. Changing insurers, then, was somewhat like a taxable event. The “tax” is the loss of the persistency discount when purchasing the new policy. The California legislature concluded that this both undermined competition and drove up the cost of insurance by discouraging the ability to shop for lower rates. …Despite these legislative findings, the Court of Appeal held the amendment invalid because, in the Court’s view, it did not further the purposes of Proposition 103.165 The Court also held that Proposition 103 vests only the Insurance Commissioner with the power to set optional rating factors.166 Thus, the legislature, even by a super majority, may not be authorized to adopt rating factors for auto insurance. Following this defeat in the courts, promoters of “portable persistency” qualified a ballot initiative to amend this aspect of Proposition 103. With a vote of 51.9% to 48.1%, the initiative failed in the June 8, 2010 election.167
…The State of Nevada recently adopted regulations for licensing the testing of AVs in the state. The regulations would require insurance in the minimum amounts required for other cars “for the payment of tort liabilities arising from the maintenance or use of the motor vehicle.”73 The regulation, however, does not suggest how the tort liability may arise. If there is no fault on the part of the operator or owner, then liability may arise, if at all, only for the manufacturer or supplier. Manufacturers and suppliers are not “insureds” under the standard automobile policy, at least so far. Thus, for the reasons stated above, owners, manufacturers and suppliers may fall outside the coverage of the policy.
…One possible approach would be to invoke the various doctrines of products liability law. This would attach the major liability to sellers and manufacturers of the vehicle. However, it is doubtful that this is an acceptable approach for several reasons. For example, while some accidents are catastrophic, fortunately most accidents cause only modest damages. By contrast, products liability lawsuits tend to be complex and expensive. Indeed, they may require the translation of hundreds or thousands of engineering documents, perhaps written in Japanese, Chinese or Korean… See In re Puerto Rico Electric Power Authority, 687 F.2d 501, 505 (1st Cir. 1982) (stating each party to bear translation costs of documents requested by it but cost possibly taxable to prevailing party). Translation costs of the Japanese documents were in the range of $250,000, and translation costs of additional Spanish documents may exceed that amount.
…Commercial insurers of manufacturers and suppliers are not encumbered with Proposition 103’s unique automobile provisions;197 therefore, they need not offer a GDD, nor need they conform to the ranking of the mandatory rating factors. To the extent that the risks of AVs are transferred to them, the insurance burden passed to consumers in the price of the car can reflect the actual, and presumably lower, risk presented by AVs. As noted above, however, for practical reasons some rating factors, such as annual miles driven and territory, cannot properly be reflected in the automobile price. Moving from the awkward and arbitrary results mandated by Proposition 103’s rating factors to a commercial insurance setting that cannot properly reflect some other rating factors is also an awkward trade-off. At best, it may be a choice of the least worst. Another viable solution might be to amend the California Insurance Code section 660(a) to exclude from the definition of “policy” those policies covering liability for AVs (at least when operated in autonomous mode). Since Proposition 103 incorporates section 660(a), this would likely require a two-thirds vote of the legislature and the amendment would have to “further the purposes” of Proposition 103. Assuming a two-thirds vote could be mustered, the issue would then be whether the amendment furthers the purposes of the Proposition. To the extent that liability moves from fault-based driving to defect-based products liability, the purposes underlying the mandatory rating factors and the GDD simply cannot be accomplished. Manufacturers will pass these costs through to automobile buyers free of the Proposition’s restraints. Since the purposes of the Proposition, at least with respect to liability coverage,199 simply cannot be accomplished when dealing with self-driving cars, amending section 660(a) would not frustrate the purposes of Proposition 103.
…Filing a “complete rate application with the commissioner” is a substantial impediment to reducing rates. A complete rate application is an expensive, ponderous and time-consuming process. A typical filing may take three to five months before approval. Some applications have even been delayed for a year.205 In 2009, when insurers filed many new rate plans in order to comply with the new territorial rating regulations, delays among the top twenty private passenger auto insurers ranged from a low of 54 days (Viking) to a high of 558 days (USAA and USAA Casualty). Many took over 300 days (e.g., State Farm Mutual, Farmers Insurance Exchange, Progressive Choice).206 …In addition, once an application to lower rates is filed, the Commissioner, consumer groups, and others can intervene and ask that the rates be lowered even further.207 Thus, an application to lower a rate by 6% may invite pressure to lower it even further.208 If they “substantially contributed, as a whole” to the decision, a consumer group can also bill the insurance company for its legal, advocacy, and witness fees.209
…Unless ways can be found to conform Proposition 103 to this new reality, insurance for AVs is likely to migrate to a statutory and regulatory environment untrammeled by Proposition 103: commercial policies carried by manufacturers and suppliers. This migration presents its own set of problems. While the safety of AVs could be more fairly rated, other important rating factors, such as annual miles driven and territory, must be compromised. Whether this migration occurs will also depend on how liability rules do or do not adjust to a world in which people will nevertheless suffer injuries from AVs, but in which it is unlikely our present fault rules will adequately address compensation. If concepts of non-delegable duty, agency, or strict liability attach initial liability to owners of faulty cars with faultless drivers, the insurance burden will first be filtered through automobile insurance governed by Proposition 103. These insurers will then pass the losses up the distribution line to the insurers of suppliers and manufacturers that are not governed by Proposition 103. Manufacturers and suppliers will then pass the insurance cost back to AV owners in the cost of the vehicle. The insurance load reflected in the price of the car will pass through to automobile owners free of any of the restrictions imposed by Proposition 103. There will be no GDD, such as it is, no mandatory rating factors, and, depending on where the suppliers’ or manufacturers’ insurers are located, more flexible rating. One may ask: What is gained by this merry-go-round?
“‘Look Ma, No Hands!’: Wrinkles and Wrecks in the Age of Autonomous Vehicles”, Garza 2012:
The benefits of these systems cannot be overestimated given that one-third of drivers admit to having fallen asleep at the wheel within the previous thirty days.31 …If the driver fails to react in time, it applies 40% of the full braking power to reduce the severity of the collision.39 In the most advanced version, the CMBS performs all of the functions described above, and it will also stop the car automatically to avoid a collision when traveling under ten miles-per-hour.40 Car companies are hesitant to push the automatic braking threshold too far out of fear that “fully ‘automatic’ braking systems will shift the responsibility of avoiding an accident from the vehicle’s driver to the vehicle’s manufacturer.”41…See Larry Carley, Active Safety Technology: Adaptive Cruise Control, Lane Departure Warning & Collision Mitigation Braking, IMPORT CAR (June 16, 2009), http://www.import-car.com/Article/58867/active_safety_technology_adaptive_cruise_control_lane_departure_warning__collision_mitigation_braking.aspx
…Automobile products liability cases are typically divided into two categories: “(1) accidents caused by automotive defects, and (2) aggravated injuries caused by a vehicle’s failure to be sufficiently ‘crashworthy’ to protect its occupants in an accident.”79 …For example, a car suffers from a manufacturing defect when a malfunction in the steering wheel causes a crash. 81 Additionally, plaintiffs have alleged and prevailed on manufacturing-defect claims in cases where “unintended, sudden and uncontrollable acceleration” causes an accident.82 In such cases, plaintiffs have been able to recover under a “malfunction theory.”83 Under a malfunction theory, plaintiffs use a “res ipsa loquitur like inference to infer defectiveness in strict liability where there was no independent proof of a defect in the product.”84 Plaintiffs have also prevailed where design defects cause injury. 85 For example, there was a proliferation of litigation in the 1970s and 1980s as a result of vehicles that were designed with a high center of gravity, which increased their propensity to roll over.86 Additionally, many design-defect cases arose in response to faulty transmissions that could inadvertently slip into gear, causing crashes and occupants to be run over in some cases. 87 The two primary tests that courts use to assess the defectiveness of a product’s design are the consumer-expectations test and the risk-utility test.88 The consumer-expectations test focuses on whether “the danger posed by the design is greater than an ordinary consumer would expect when using the product in an intended or reasonably foreseeable manner.”89 …Thus, while an ordinary consumer can have expectations that a car will not explode at a stoplight or catch fire in a two-mile-per-hour collision, they may not be able to have expectations about how a truck should handle after striking a five- or six-inch rock at thirty-five miles-per-hour.92 Perhaps because the consumer-expectations test is difficult to apply to complex products, and we live in a world where technological growth increases complexity, the risk-utility test has become the dominant test in design-defect cases.93 …Litigation can also arise where a plaintiff alleges that a vehicle is not sufficiently “crashworthy.”104 Crashworthiness claims are a type of design-defect claim.105
…Since their advent and incorporation, seat belts have resulted in litigation, much of which has involved crashworthiness claims. 136 In Jackson v. General Motors Corp., for example, the plaintiff alleged that as a result of a defectively designed seat belt, his injuries were enhanced. 137 The defendant manufacturer argued that the complexity of seat belts foreclosed any consumer expectation,138 but the Tennessee Supreme Court noted that seat belts are “familiar products for which consumers’ expectations of safety have had an opportunity to develop,” and permitted the plaintiff to recover under the consumer-expectations test.139 Although manufacturers have been sued where seat belts render a car insufficiently crashworthy (as in cases where they fail to perform as intended or enhance injury), the incorporation of seat belts has reduced liability as well.140 This reduction comes in the form of the “seat belt defense.”141 The “seat belt defense” allows a defendant to present evidence about an occupant’s nonuse of a seat belt to mitigate damages or to defend against an enhanced-injury claim.142 Because seat belts are capable of reducing the number of lives lost and the overall severity of injuries sustained in crashes, it is argued that nonuse should protect a manufacturer from some claims.143 Although the majority rule is to prevent the admission of such evidence in enhanced-injury litigation, there is a growing trend toward admission.144
…Since their incorporation, consumers have sued manufacturers for defective cruise control systems that lead to injury. 171 Because of the complexity of cruise control technology, courts may not allow a plaintiff to use the consumer-expectations test.172 Despite the complexity of the technology, other courts allow plaintiffs to establish a defect using either the risk-utility test or the consumer-expectations test.173
…Under the consumer-expectations test, manufacturers will likely argue, as they historically have, that OAV technology is too complicated for the average consumer to have appropriate expectations about its capabilities.182 Commentators have stated that “consumers may have unrealistic expectations about the capabilities of these technologies . . . . Technologies that are engineered to assist the driver may be overly relied on to replace the need for independent vigilance on the part of the vehicle operator.”183 Plaintiffs will argue that, while the workings of the technology are concededly complex, the overall concept of autonomous driving is not.184 Like the car exploding at a stoplight or the car that catches fire in a two-mile-per-hour collision, the average consumer would expect autonomous vehicles to drive themselves without incident.185 This means that components that are meant to keep the car within a lane will do just that, and others will stop the vehicle at traffic lights. 186 Where incidents occur, OAVs will not have performed as the average consumer would expect.187 …plaintiffs who purchase OAVs at the cusp of availability, and attempt to prove defect under the consumer-expectations test, are likely to face an uphill battle.194 But the unavailability of the consumer-expectations test will not be a significant detriment as plaintiffs can fall back on the risk-utility test.195 And as OAVs are increasingly incorporated, and users become more familiar with their capabilities, the consumer-expectations test will become more accessible to plaintiffs.196 Given the modern trend, plaintiffs are likely to face the risk-utility test.197
…Additionally, the extent to which injuries are “enhanced” by OAVs will be debated.228 Because the majority of drivers fail to fully apply their brakes prior to a collision,229 where an OAV only partially applies brakes, or fails to apply brakes at all, manufacturers and plaintiffs will disagree about the extent of enhancement.230 Manufacturers will argue that, absent the OAV, the result would have been the same or worse; thus, the extent to which the injuries of the plaintiff are “enhanced” is minimal.231 Plaintiffs will argue that, just like the presentation of crash statistics in a risk-utility analysis, this is a false choice.232 Like no-fire air bag claims, plaintiffs will contend that but for the malfunction of the OAV, their injuries would have been greatly reduced or nonexistent. 233 As a result, any injuries sustained above that threshold should serve as a basis for recovery. 234
…In products liability cases the “use of expert witnesses has grown in both importance and expense.”301 Because of the extraordinary cost of experts in products liability litigation, many plaintiffs are turned away because, even if they were to recover, the prospective award would not cover the expense of litigating the claim. 302
…Although complex, OAVs function much like the cruise control that exists in modern cars. As we have seen with seat belts, air bags, and cruise control, manufacturers have always been hesitant to adopt safety technologies. Despite concerns, products liability law is capable of handling OAVs just as it has these past technologies. While the novelty and complexity of OAVs are likely to preclude plaintiffs from proving defect under the consumer-expectation test, as implementation increases this likelihood may decrease. Under a risk-utility analysis, manufacturers will stress the extraordinary safety benefits of OAVs, while consumers will allege that designs can be improved. In the end, OAV adoption will benefit manufacturers. Although liability will fall on manufacturers when vehicles fail, decreased incidences and severity of crashes will result in a net decrease in liability. Further, the combination of LDWS cameras and EDRs will drastically reduce the cost of litigation. By reducing reliance on experts for complex causation determinations, both manufacturers and plaintiffs will benefit. In the end, obstacles to OAV implementation are more likely to be psychological than legal, and the sooner that courts, manufacturers, and the motoring public prepare to confront these issues, the sooner lives can be saved.
“Self-driving cars can navigate the road, but can they navigate the law? Google’s lobbying hard for its self-driving technology, but some features may never be legal”, The Verge 14 December 2012
Google says that on a given day, they have a dozen autonomous cars on the road. This August, they passed 300,000 driver-miles. In Spain this summer, Volvo drove a convoy of three cars through 200 kilometers of desert highway with just one driver and a police escort.
…Bryant Walker Smith teaches a class on autonomous vehicles at Stanford Law School. At a workshop this summer, he put forward this thought experiment: the year is 2020, and a number of companies offer “advanced driver assistance systems” with their high-end model. Over 100,000 units have been sold. The owner’s manual states that the driver must remain alert at all times, but one night a driver—we’ll call him “Paul”—falls asleep while driving over a foggy bridge. The car tries to rouse him with alarms and vibrations but he’s a deep sleeper, so the car turns on the hazard lights and pulls over to the side of the road where another driver (let’s say Julie) rear-ends him. He’s injured, angry, and prone to litigation. So is Julie. That would be tricky enough by itself, but then Smith starts layering on complications. Another model of auto-driver would have driven to the end of the bridge before pulling over. If Paul had updated his software, it would have braced his seatbelt for the crash, mitigating his injuries, but he didn’t. The company could have pushed the update automatically, but management chose not to. Now, Smith asks the workshop, who gets sued? Or for a shorter list, who doesn’t?
…The financial stakes are high. According to the Insurance Research Council, auto liability claims paid out roughly $215 for each insured car, between bodily injury and property damage claims. With 250 million cars on the road, that’s $54 billion a year in liability. If even a tiny portion of those lawsuits are directed towards technologists, the business would become unprofitable fast.
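[The $54 billion figure is roughly the product of the two quoted numbers; a back-of-the-envelope check:]

```python
# Back-of-the-envelope check of the figures quoted above (as given in the article).
liability_per_insured_car = 215      # $/car/year, per the Insurance Research Council figure
cars_on_road = 250_000_000
print(f"${liability_per_insured_car * cars_on_road / 1e9:.1f} billion/year")  # ~$53.8 billion/year
```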
…Changing the laws in Europe would take a replay of the internationally ratified Vienna Convention (passed in 1968) as well as pushing through a hodgepodge of national and regional laws. As Google proved, it’s not impossible, but it leaves SARTRE facing an unusually tricky adoption problem. Lawmakers won’t care about the project unless they think consumers really want it, but it’s hard to get consumers excited about a product that doesn’t exist yet. Projects like this usually rely on a core of early adopters to demonstrate their usefulness—a hard enough task, as most startups can tell you—but in this case, SARTRE has to bring auto regulators along for the ride. Optimistically, Volvo told us they expect the technology to be ready “towards the end of this decade,” but that may depend entirely on how quickly the law moves. The less optimistic prediction is that it never arrives at all. Steve Shladover is the program manager of mobility at California’s PATH program, where they’ve been trying to make convoy technology happen for 25 years, lured by the prospect of fitting three times as many cars on the freeway. They were showing off a working version as early as 1997 (powered by a single Pentium processor), before falling into the same gap between prototype and final product. “It’s a solvable problem once people can see the benefits,” he told The Verge, “but I think a lot of the current activity is wildly optimistic in terms of what can be achieved.” When I asked him when we’d see a self-driving car, Shladover told me what he says at the many auto conferences he’s been to: “I don’t expect to see the fully-automated, autonomous vehicle out on the road in the lifetime of anyone in this room.”
…Many of Google’s planned features may simply never be legal. One difficult feature is the “come pick me up” button that Larry Page has pushed as a solution to parking congestion. Instead of wasting energy and space on urban parking lots, why not have cars drop us off and then drive themselves to park somewhere more remote, like an automated valet? It’s a genuinely good idea, and one Google seems passionate about, but it’s extremely difficult to square with most vehicle codes. The Geneva Convention on Road Traffic (1949) requires that drivers “shall at all times be able to control their vehicles,” and provisions against reckless driving usually require “the conscious and intentional operation of a motor vehicle.” Some of that is simple semantics, but other concerns are harder to dismiss. After a crash, drivers are legally obligated to stop and help the injured—a difficult task if there’s no one in the car. As a result, most experts predict drivers will be legally required to have a person in the car at all times, ready to take over if the automatic system fails. If they’re right, the self-parking car may never be legal.
“Automated Vehicles are Probably Legal in the United States”, Bryant Walker Smith 2012
The short answer is that the computer direction of a motor vehicle’s steering, braking, and accelerating without real-time human input is probably legal…. The paper’s largely descriptive analysis, which begins with the principle that everything is permitted unless prohibited, covers three key legal regimes: the 1949 Geneva Convention on Road Traffic, regulations enacted by the National Highway Traffic Safety Administration (NHTSA), and the vehicle codes of all fifty US states.
The Geneva Convention, to which the United States is a party, probably does not prohibit automated driving. The treaty promotes road safety by establishing uniform rules, one of which requires every vehicle or combination thereof to have a driver who is “at all times … able to control” it. However, this requirement is likely satisfied if a human is able to intervene in the automated vehicle’s operation.
NHTSA’s regulations, which include the Federal Motor Vehicle Safety Standards to which new vehicles must be certified, do not generally prohibit or uniquely burden automated vehicles, with the possible exception of one rule regarding emergency flashers. State vehicle codes probably do not prohibit, but may complicate, automated driving. These codes assume the presence of licensed human drivers who are able to exercise human judgment, and particular rules may functionally require that presence. New York somewhat uniquely directs a driver to keep one hand on the wheel at all times. In addition, far more common rules mandating reasonable, prudent, practicable, and safe driving have uncertain application to automated vehicles and their users. Following distance requirements may also restrict the lawful operation of tightly spaced vehicle platoons. Many of these issues arise even in the three states that expressly regulate automated vehicles.
…This paper does not consider how the rules of tort could or should apply to automated vehicles; that is, the extent to which tort liability might shift upstream to companies responsible for the design, manufacture, sale, operation, or provision of data or other services to an automated vehicle. 6
…Because of the broad way in which the term and others like it are defined, an automated vehicle probably has a human “driver.” 295 Obligations imposed on that person may limit the independence with which the vehicle may lawfully operate. 296 In addition, the automated vehicle itself must meet numerous requirements, some of which may also complicate its operation. 297 Although three states have expressly established the legality of automated vehicles under certain conditions, their respective laws do not resolve many of the questions raised in this section. 298
…A brief but important aside: To varying degrees, states impose criminal or quasicriminal liability on owners who permit others to drive their vehicles. 359 In Washington, “[b]oth a person operating a vehicle with the express or implied permission of the owner and the owner of the vehicle are responsible for any act or omission that is declared unlawful in this chapter. The primary responsibility is the owner’s.” 360 Some states permit an inference that the owner of a vehicle was its operator for certain offenses; 361 Wisconsin provides what is by far the most detailed statutory set of rebuttable presumptions. 362 Many others punish owners who knowingly permit their vehicles to be driven unlawfully. 363 Although these owners are not drivers, they are assumed to exercise some judgment or control with respect to those drivers-an instance of vicarious liability that suggests an owner of an automated vehicle might be liable for merely permitting its automated operation. 364
…On the human side, physical presence would likely continue to provide a proxy for or presumption of driving. 366 In other words, an individual who is physically positioned to provide real-time input to a motor vehicle may well be treated as its driver. This is particularly likely at levels of automation that involve human input for certain portions of a trip. In addition, an individual who starts or dispatches an automated vehicle, who initiates the automated operation of that vehicle, or who specifies certain parameters of operation probably qualifies as a driver under existing law. That individual may use some device (anything from a physical key to the click of a mouse to the sound of her voice) to activate the vehicle by herself. She may likewise deliberately request that the vehicle assume the active driving task. And she may set the vehicle’s maximum speed or level of assertiveness. This working definition is unclear in the same ways that existing law is likely to be unclear. Relevant acts might occur at any level of the primary driving task, from a decision to take a particular trip to a decision to exceed any speed limit by ten miles per hour. 367 A tactical decision like speeding is closely connected with the consequences (whether a moving violation or an injury) that may result. But treating an individual who dispatches her fully automated vehicle as the driver for the entirety of the trip could attenuate the relationship between legal responsibility and legal fault. 368 Nonetheless, strict liability of this sort is accepted within tort law 369 and present, however controversially, in US criminal law. 370
On the corporate side, a firm that designs or supplies a vehicle’s automated functionality or that provides data or other digital services might qualify as a driver under existing law. The key element, as provided in the working definition, may be the lack of a human intermediary: A human who provides some input may still seem a better fit for a human-centered vehicle code than a company with other relevant legal exposure. However, as noted above, public outrage is another element that may motivate new uses of existing laws. 377
…The mechanism by which someone other than a human would obtain a driving license is unclear. For example, some companies may possess great vision, but “a test of the applicant’s eyesight” may nonetheless be difficult. 395 And while General Motors may (or may not) 396 meet a state’s minimum age requirement, Google would not. [See Google, Google’s mission is to organize the world’s information and make it universally accessible and useful, www.google.com/intl/en/about/company/. In some states, Google might be allowed to drive itself to school. See, e.g., Nev. Rev. Stat. § 483.270; Nev. Admin. Code § 483.200.]
And people say lawyers have no sense of humor.
One thing that hasn’t been mentioned is what kind of security the car’s operating system has. Imagine what will happen after the first major autonomous car-virus, especially if the virus is malicious rather than merely incidentally introducing bugs. Keep in mind it’s not too hard for a virus to be very malicious, since autonomous cars need to know what pedestrians are in order to avoid hitting them.
Have there been major cell phone viruses? I haven’t heard of many, but maybe I’m just out of the loop.
About the red light, I’m not even sure if it’s necessary for anyone to pay. If you think about it, fines are there to discourage human drivers from breaking the rules. But in a robotic car, running a red light is due to faulty programming or bugs. Robotic cars will try not to run red lights even if there is no fine—they will not be allowed on the road unless they already obey the rules.
If some company happens to produce robotic cars that run red lights a lot, then of course it would be necessary to, for example, place a temporary ban on those car models until the software issues are resolved.
Of course, for cases where property damage is done or traffic is disrupted, then you could argue that the company that produced the car would have to be fined.
“Exclusive: In boost to self-driving cars, U.S. tells Google computers can qualify as drivers”:
Keep in mind that these developments will not be occurring in a vacuum, but in the context of other types of autonomous drones being developed.
I don’t really understand the legal problem.
Why can’t the law just be, if you’re behind the wheel of an autonomous car in possession of immediate over-ride, then you’re exactly as liable as a normal driver?
Now, in practice, if something goes wrong you’re going to be in a terrible position to stop it, because you’re going to not be paying attention. But the law can just be “Well, you have to be paying attention or you’re liable!”—even if that really just amounts to the fact that you’re taking on different risks driving this autonomous car.
The law exploits this illusion of control people have all the time. Why not here?
Of course, this means you can’t just read a book while your car drives itself—at first. As the market penetration increases and the software improves, and people get more comfortable, the laws will be relaxed. Especially because the more driverless cars on the road, the safer they will be.
This assumes that there is someone in the car for starters.
What’s so wrong with that? Ideally we want to move past it, but for regulation for right now it seems a fine compromise.
The main application of autonomous cars is replacing truck drivers and delivery services, and maybe taxis. Otherwise you are just making driving slightly more convenient.
That’s how Google runs its cars at the moment.
That’s the wrong question. You have to ask yourself what the existing laws do. If one of the Google autonomous cars that drive around crashes into another car, I would guess that a court would see Google as responsible in addition to the driver.
That’s not a good line of argument when you want to convince a politician to pass a law in your favor.
Economically it makes sense to have cars that work like taxis, where the human in the car doesn’t have a way of immediately overriding the system.
How do you get from the second sentence to the first sentence?
Isn’t it premature to make predictions about car use? Shouldn’t you start with predictions about further legal change? (of course there is positive feedback, so they aren’t completely independent)
Or maybe the legal barrier isn’t the first one to look at. When you predict that even niches that can ignore public road law will not be dominated by autonomous cars a decade from now, you seem to be making a prediction that does not depend on the law. But the relationship between your various claims is not clear to me.
I thought it was fairly clear: major legal obstacles remain, and essentially nothing has been done. Hence, application will be limited.
Maybe, if I had any idea how to phrase it. I would’ve hoped the excerpts conveyed a sense of how complex and legalistic things are. Do I phrase it as ‘passed legislation’? But what about states where what matters is how courts interpret some phrasing in existing law relating to horses? Or what about ones where what matters is whether the insurance giants will sue the begeezus out of any autonomous car? If it’s legislation, what exactly should it say? Does a prediction about legalization count if the legislation forces all liability onto the car manufacturer and so no one will sell an autonomous car into that state and you can’t actually buy one there?
This is just an observation that some niches are being taken over right now: I understand some warehouses are being automated as we speak, in particular, Amazon’s. But these things take time. Everything physical moves slower than software.
--”Elon Wants to Make Your Tesla Drive Itself. Is That Legal?”
Are you confident in your short-term pessimism, gwern? It seems like there are many ways for the technology to potentially quickly gain acceptance. Mainly, it’s extremely convenient that states are able to regulate driving, as that provides 50 avenues for starting to roll out cars (in addition to the 190 other international opportunities). Once one place adopts, my mental model of how things will probably happen says that many will follow within a year, and most (70%) will follow within seven years (and probably within three years if the early adopter has borderless access to non-adopters, like a US state or a Schengen area nation).
Maybe that’s too optimistic, but it just seems like the benefits are enormous and immediate (accident reduction, but also probably improved traffic flow/improved road capacity being the really big ones), while the costs are slightly delayed (mainly job disruptions for taxis would probably only happen after a few years, although probably much faster for truck drivers), which should help adoption. Once there are serious accident stats, and we see that accidents fall by hopefully somewhere over 90% for individuals choosing to adopt, it may prove politically difficult to justify letting tens of thousands die each year, and some areas would probably also prefer to encourage adoption of driverless cars to expanding their road network. Although I guess the process could be delayed if there are high-profile mess-ups.
No, I’m not strongly confident. In a sane world autonomous cars would go much faster than they are now, but we’ve seen stagnation in many areas and autonomous cars could just be the next one.
Can you name any major legal issue which has passed through legislatures that fast? Gay marriage, regulating smoking, anything?
One could’ve said something similar about tobacco.
Those are social issues with minimal short-term economic impact. I don’t think they’re really the right reference class to use (although if they turn out to be, I would agree that we’re in trouble). Unfortunately, I can’t think of things that are in quite the right reference class in modern times. Perhaps earlier examples would be early automobiles, passenger airplanes (and Concorde), the electric grid, that sort of thing. Most of these seem to have followed a route of “permitted until proven dangerous or undesirable”, but unfortunately they also mostly emerged too long ago, and the social climate has definitely now become more litigious and risk-averse, so I can see how you might find the comparison unpersuasive.
If you’re looking for general issues, it’s difficult to judge. I think you’d agree that individual states do pass some reforms rather quickly. For example, California institutes fuel-efficiency regulations without too much debate, I think. I do admit that states almost never sync up their regulations completely unless the federal government takes it up, but this may be a special case, as enforcement would prove problematic if some states chose to forbid driverless cars while others permitted them.
Not really—health issues are inherently difficult to isolate. We still seem pretty unsure about most dietary things, for example. On the other hand, the feedback from driverless cars should be a lot more clear and immediate.
They also were all very slow emergences, decades from first attempts to any market penetration you could call widespread, with considerable legislation slowing them down at points—one thinks of the story that cars in cities needed someone walking in front of them waving a flag (not sure if that was true or apocryphal).
I assume most of those are going to be follow-up regulations, additional tightenings of the screw.
“Who is at fault in this accident?” “Not me, officer!” One thinks of the Toyota acceleration issues, where it may just have been the elderly drivers panicking & blaming the car, but where the lawsuits are probably still going on.
I’m not sure if I’ve seen this suggested, but with all the sensors these things have for driving, wouldn’t it be trivial to have a “black box” installed that recorded exactly what happened in the event of an accident? There might be some privacy concerns, etc., but it seems like it’d make things a lot easier (specifically, even if companies are held to be liable, if there are few enough errors, litigation could still be decently cheap).
(Also, Toyota recently settled most of the suits for $1.1 billion, although a few smaller ones are outstanding. But that’s a good point.)
Anyway, I guess overall I’m just a bit more optimistic about the combination of potential immense benefits from the technology with politicians being pragmatic. We shall see? (If that seems a bit more pessimistic than the position I’ve been arguing, take that as me updating on your pessimism.)
My understanding was that the black boxes exist but all said simply that ‘the pedal was pushed so the car went faster’; the boxes can only record what they record, and if the signals or messages themselves were false, they’re not going to pinpoint the true cause. This was why Toyota was tearing through the electrical and computer systems to see how a false pedal signal could be created. Nothing was found: http://en.wikipedia.org/wiki/2009%E2%80%932011_Toyota_vehicle_recalls
On black boxes:
As far as autonomous car adoption rates go:
$1.1b is worth a lot of risk aversion.
Yeah. The nice thing about autonomous cars is that the consequences are pretty bounded, and so, unlike most/all existential risks, we can afford to just wait and see: all that a wrong national/international decision on autonomous cars costs is trillions of dollars and millions of lives.
I more had in mind the idea that with black boxes installed in self-driving cars, they could record the full situation as seen by all sensors, and thus tell if accidents occurred because of another driver, or while the driver of the car was overriding the self-driving mode, which should simplify things. I’d imagine the car should be able to tell whether the signals came from it or the driver, which should at least drastically reduce the number of “It wasn’t me, officer!” claims.
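To make the car-vs-driver attribution idea concrete, here is a minimal, purely hypothetical sketch in Python (nothing here reflects any real manufacturer’s logging format; all field names are invented for illustration) of what a per-event black-box record might contain:

```python
# Purely hypothetical sketch of a per-event "black box" record for an
# autonomous car: enough to attribute each control input to the autonomy
# stack or to a human override after an accident.
from dataclasses import dataclass, field
from enum import Enum
from typing import List

class ControlSource(Enum):
    AUTONOMY = "autonomy"                # command issued by the self-driving system
    DRIVER_OVERRIDE = "driver_override"  # human input on wheel/pedals took priority

@dataclass
class BlackBoxEvent:
    timestamp_ms: int          # monotonic clock, milliseconds since boot
    source: ControlSource      # who issued this control command
    steering_angle_deg: float
    throttle_pct: float
    brake_pct: float
    speed_mps: float
    nearby_objects: List[dict] = field(default_factory=list)  # sensor-fused tracks of other road users

# Example: a braking command issued by the autonomy stack, logged for later review.
event = BlackBoxEvent(timestamp_ms=123456, source=ControlSource.AUTONOMY,
                      steering_angle_deg=0.0, throttle_pct=0.0,
                      brake_pct=60.0, speed_mps=12.5)
```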
Well, taken literally, it’s really not. If, say, 20% of the 12 million cars sold annually were automated, just an extra profit of $458 a car would be enough to offset that in a year (obviously, you’d need some extra profit to justify development and such, but still). That said, the liabilities for any serious failure would naturally increase in proportion with sales, so it would really depend on the details of the situation. If there’s a risk that the car will seriously mess up on a software level (e.g. cause 1 accident per day per 10,000 cars with the problem going unnoticed for several months) or that it might get hacked, that might be too risky to go forward if the manufacturer is liable.
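The arithmetic behind that $458 figure can be checked in a few lines; a sketch, where the 20% automated share and 12 million annual sales are the commenter’s own assumptions rather than data:

```python
# Back-of-the-envelope check of the figures above (the 20% share and 12M annual
# sales are assumptions from the comment, not data).
liability = 1.1e9        # Toyota-scale settlement, in dollars
annual_sales = 12e6      # assumed total cars sold per year
automated_share = 0.20   # assumed fraction of sales that are automated

automated_cars = annual_sales * automated_share   # 2,400,000 cars
extra_profit_per_car = liability / automated_cars
print(f"Extra profit needed per automated car: ${extra_profit_per_car:.0f}")
# -> ~$458, matching the figure quoted in the comment
```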
Pretty much, yes. There may be some low-hanging fruit that can be obtained efficiently. For example, it would be helpful to have papers by already prominent academics showing the cost-benefit analysis, which should hopefully be picked up by the media and generate some positive public opinion priming.
The world will see autonomous passenger trains and autonomous commercial planes before it has to get used to autonomous cars.
And once autonomous cars routinely win races against human drivers, the laws will change quickly enough.
I’m under the impression that commercial planes are largely autonomous already.
Already has.
“Self-Driving Cars Will Make Us Want Fewer Cars”; can’t seem to figure out where the actual McKinsey paper is. There’s one on their site from 2013, but not 2015.
New paper: https://www.enotrans.org/wp-content/uploads/wpsc/downloadables/AV-paper.pdf
Discussion: http://www.washingtonpost.com/blogs/wonkblog/wp/2013/10/23/heres-what-it-would-take-for-self-driving-cars-to-catch-on/
“Of Frightened Horses and Autonomous Vehicles: Tort Law and its Assimilation of Innovations”, Graham 2012 http://digitalcommons.law.scu.edu/cgi/viewcontent.cgi?article=1170&context=facpubs
More links:
http://www.templetons.com/brad/robocars/accident.html
http://www.volokh.com/2013/05/05/self-driving-vehicles-how-soon-and-who-will-bear-the-liability-costs/ with a potential pointer to more detailed legal work:
http://www.thedailybeast.com/articles/2013/01/24/are-driverless-cars-really-in-our-near-future.html
http://www.economist.com/news/special-report/21576224-one-day-every-car-may-come-invisible-chauffeur-look-no-hands
http://www.economist.com/news/business/21564821-carmakers-are-starting-take-autonomous-vehicles-seriously-other-businesses-should-too
A legal parallel illustrating my concerns about the burden of insurance, from the decidedly non-robotic ride-sharing sector; The Economist, “All eyes on the sharing economy—Collaborative consumption: Technology makes it easier for people to rent items to each other. But as it grows, the “sharing economy” is hitting roadblocks”:
If I were Google, I would start by going to small nations like Singapore and get the necessary laws to operate the technology there. Singapore has no problem making laws to settle issues like that.
Afterwards, let your lobbyists go to other countries and propose that they give you the same laws.
It’ll be interesting to see how medical robot (protected by the FDA) lawsuits turn out: http://climateerinvest.blogspot.se/2013/01/first-they-came-for-robot-surgeons.html
Hello, I’m looking for the comment section and got lost, is this it?
The legal status quo is secondary to public perception, which—other than among some technophile aficionados—is quite reserved. There’s too much male identity attached to driving: not only are cars used to show off status, but so is the driving style you use them with. As is often the case, people confuse “autonomous cars are not for me” with “autonomous cars—what nonsense, should not be allowed!”, in part because they feel threatened that their identity-generating toy could be devalued or taken away.
Secondly, the reaction to a robot (car) causing accidents—killing people (gasp)—is vastly disproportionate in relation to human-caused killings that are accepted as part of the supposed fabric of nature/society. It is one of the main reasons why robotic surgery has such a hard time taking over—a surgeon “doing his best” yet the patient not surviving? A human tragedy; it may have been the patient’s time to go. A robot—with a much higher success rate—“killing” a patient? Outrage! Liability demands! A central figure that can be blamed (the manufacturer). It is comical how the makers of the da Vinci surgical system have to insist that every move is controlled by a physician, when certain aspects would work much better when automated.
And that is why Carthago should be destroyed. I’m sorry, what was this comment about? Ah well …
Obviously, this is quite cultural. Apparently in Stockholm (Sweden), only about one quarter (link in Finnish) of 18-year-olds acquires a driving license, though many get one later on when life circumstances change. In the Helsinki region (Finland) as well, there has been a bit of a reported decline in the popularity of driving licenses among the young, though it’s unclear to what extent the statistics actually support this. In cities that have a good public transport system, people can easily grow up with the notion that owning a car simply isn’t necessary.
(I don’t have a license myself, and although I know many who do, I’ve never gotten the impression that not having one would be considered particularly unusual. Of course, my normal social circles are rather unrepresentative of the population at large.)
https://www.youtube.com/watch?feature=player_detailpage&v=2bmqdnx5R1U#t=364s
Huh. Must be a countryside thing.
Yes, of course. (Unfortunately,) the benchmark culture inhibiting or furthering progress on such trends still seems to be the United States, where, due to the distances involved and the typically abysmal public transportation system, personal mobility by car is usually a central component of everyday life. It is in principle possible for such progress to emerge in, say, Japan, as it did in the mobile communications market. However, such progress has typically been insular. To truly make the jump to the “mainstream”, the proving ground has historically been the US.
But … but … how do you drive your Lamborghini?
Well, at least if you measure “mainstream” by what’s mainstream in the US. ;) It wasn’t just Japan that pulled ahead with mobile communications, for example—back in 2000 the US was lagging badly behind most of Europe when it came to cell phones. And there are areas like banking services where it’s doing even worse—before I visited the US back in 2010, I think I had seen one or two cheques in my whole life, but in the US they were still in active use.
Why do you assume that banking services are worse if people still use checks? That seems like cultural parochialism. It isn’t that credit card and debit card transactions aren’t universally available; check transactions are simply accepted as well. It seems to be an added convenience the retail sector provides for those who want it.
Btw, this inspired me to ask someone I know why she still uses checks to pay for groceries. She said that she just likes using the little ledger in the checkbook to keep track of her spending. And (she says) it doesn’t take any longer really. You just sign the check, and most stores have a machine that fills out everything else for you.
(Quote from a later post, because I wanted to respond higher up in the thread...)
One instance I can think of where this is not the case is rent payments, many (most?) of which are still done by check by default.
But in any case, the very fact that many poor Americans are underbanked (which you say leads them to prefer checks) demonstrates that the US banking system is inferior to the European, although this is probably more a result of bad overall regulation and market structure than being “behind” on technology. That said, I don’t think either system is especially innovative, except perhaps in the (sometimes unfortunate) sense of creating new financial products. If we want to find actual consumer banking innovation, it seems to be primarily occurring in the developing world, where we’re seeing things like microlenders, interesting savings products, phone-based money transfer (and the usage of airtime as an alternative currency), and so on.
Bitcoin.
I’m aware it exists, but its penetration is vastly lower than the things I mentioned, and its usefulness more dubious.
To me, Kickstarter is just as innovative as microlending.
PayPal is a recent invention and works much better than phone-based money transfer with airtime as an alternative currency.
First of all, I think it’s worth pointing out that this conversation is about innovation, not whether one retail banking system is better. It is possible for a system to be more innovative while also having worse outcomes. For example, the US healthcare system lags in health outcomes in many ways, but is dominant in medical research publications, medical Nobel prizes (a solid majority of which have gone to US researchers in the last 30 years), medical device manufacturing, pharmaceuticals, etc.
You are just assuming that these people are bankless due to some unusual quality of American banks. The main reason I have heard for poor people avoiding banks is overdraft charges—something almost all banks in all countries have. It’s quite possible that America just has a more feckless underclass than most European countries, and they are bad at estimating when they will overdraw. For people like that, using cash would make a lot of sense. Or maybe America’s underclass has more cashflow problems because the US welfare system is oriented more toward in-kind services than cash transfers. Or maybe it’s just a weird subcultural thing that America’s underclass likes having wads of cash on hand.
I agree with this.
This does not appear to be the case. Here’s a good overview from the Federal Reserve Bank of St. Louis. My personal guess is that the fractured nature of the US banking system (many small local banks) results in uneven service quality and banks caring less about reputation, as well as higher costs per client, making poorer clients less profitable. In addition, Deloitte writes: “In the United States, many traditional bank competitors view the unbanked and underbanked segments as unattractive as their costs to serve are high and income generated low relative to more affluent segments. This is partially a reflection of a high-cost base and expensive and restrictive regulatory actions exacerbated by recent consumer legislation.”
It is not a case of the poor being better off without banking services or anything like that (as mentioned in the Fed article; also, here’s the World Bank for some international stats); it’s really just a case of poor provision. On the bright side, it seems like the situation is somewhat improving, with prepaid cards being offered which do not require a complicated application process and which have no overdraft at all (if you think that’s important), the one downside being that some of them may have unreasonable fees.
Well, that went off-topic.
You are very dishonestly misrepresenting your sources, and privileging your hypothesis. None of your sources actually claim that US bank practices are different. Deloitte does say that regulations (ceteris paribus) increase the cost of serving low-income people, but does not actually demonstrate that such costs are higher than they are in Europe. Furthermore, most of these regulations are designed to make banking more appealing to low-income people, since they decrease fees, and force banks to offer overdraft protection. Yes those costs make banks less likely to advertise to poor people, but also make bank use more attractive for poor people. And none of this demonstrates that bank behavior is different in the US—since European countries probably also have consumer protection laws. Deloitte never claims that US banks are more regulated on balance.
The World Bank source actually argues my point in my earlier response to you: that America’s poor people don’t use banks because (1) they have cashflow problems and (2) they don’t trust banks (i.e. a cultural problem). Both of those are characteristics of US poor people, not of US banks.
The Fed St. Louis report also supports my claim:
You claim that I privilege my hypothesis, while your own hypotheses as originally stated were: 1) overdraft charges are scaring the poor away, 1a) maybe US poor are more “feckless”, 2) maybe US poor have less cash flow due to welfare being less cash-based, 3) maybe US poor prefer to use cash… and then you claim that Fed St. Louis supports you in that paragraph? Really? It mentions none of those things except insofar as one of the aspects of “negative prior experience” is overdraft charges (and by a very generous reading it could perhaps be argued that the “paycheck to paycheck” bit is somewhat similar to what you said). In fact, I went out of my way to search for “overdraft” specifically, as that was your primary hypothesis, and I found that fears of overdraft protection come up in surveys as minor factors for not having a bank account, and as fairly major factors among those who prefer prepaid cards to debit cards (which I mentioned).
In addition, I would appreciate it if in the future you would not assume that someone is being willfully dishonest unless you are absolutely certain of it. Having reread my previous post and re-skimmed the sources I cite, I do not see where I misrepresented them. Although I obviously did not write everything each of them said (and indeed I also read three or four other things I did not link at all), I believe my post is broadly in alignment with their intent.
Regardless, I find I have no interest in further discussing this topic with you, so this will be my last post.
I have never had any interest in discussing this with you. I did so merely as a courtesy for someone who seemed to have an interest.
Consumer protection laws in Europe don’t allow banks to operate with overdraft charges the same way banks can operate in the US.
This doesn’t seem to be true. I’ve confirmed that both the UK and France have overdraft fees. US banks also are required by law to offer overdraft protection (which is opt-in).
Because I’ve been told that the main reason for checks still being in use is the lack of convenient wire transfers, and having to bother with cashing out checks when e.g. getting your salary sounds a lot less convenient than just getting the money directly to your bank account. (Feel free to correct me in case my information’s mistaken.)
Most employers actually prefer to pay via direct transfer, and obviously the banks offer these services.
Again, it isn’t a case of the service not being offered, it is just that individuals are offered a choice, and some choose differently than you would. Why does that bother you?
It doesn’t; I was under the impression that it was a case of the service not being offered. Some folks from around here that I know have sold their products in webstores owned by American companies, and complained about those companies insisting on payment by check, which are a pain to get processed in our banks. But perhaps this just means that American companies aren’t good with international wire transfers, while being fine with domestic ones?
Cheques aren’t an added convenience to cards, cards and online payment are added conveniences that are slowly superseding cheques. In Britain, cheques are on the way out, although not quite as fast as was once planned. Most shops no longer accept them.
I’m sure the US will catch up.
Unless you’re making the purely semantic point that checks existed first (which is entirely obvious, irrelevant, and did not need to be pointed out), yes they are an added (i.e. additional) convenience offered over firms that do not accept them.
It is surprising that you would think accepting more payment methods is somehow worse than offering fewer.
Additional security vulnerabilities, additional costs to implement and support (possibly blocking new, better approaches), additional complexity...
If checks had not yet been invented, and someone came to Europeans saying, ‘I have this new ultra-cool system of payment which involves trivially forged signatures on paper where the bank upon receiving it takes a photograph rather than store it on paper and where fraud may not be detected for days; also, you have to manually keep track of the balance and if you don’t and you write a check that bounces you’ll be fined by the bank and maybe also the person you wrote the check to; and did I mention that the security is so weak that people who want to distribute their checks far & wide like Don Knuth can’t do it because their bank accounts will be raided? Pls pay me $$$ for my kicking invention kthnxbai’, do you think they would greet him with open arms?
Yes, checks have downsides, but they have upsides as well, which is why a substantial percentage of people prefer to receive them. You should read about this thing called “Revealed Preference.” For example, more than a fifth of black and Hispanic Americans don’t have checking accounts—usually because of bad experiences involving overdraft fees. So being able to receive checks is a nice convenience for them.
Also with checks you avoid interchange fees, so there are advantages to merchants as well.
I strongly suspect that in Europe the number of “bankless” households is much smaller. My point, though, is that having check capabilities isn’t worse than not having them—unless you already happen to be living in a place where nobody prefers checks.
Your argument reminds me of this guy I know who always gets really mad that some people buy iPhones instead of Android phones. But… they’re technically inferior! Stop using what I don’t use!
I take it that it did not occur to you that this was the point of my use of the reversal test on the European absence of checks.
I reiterate the point about revealed preferences, plus my list of costs to supporting checks.
In that case, you have misunderstood what my discussion with Kaj was about. He felt that the fact that checks were still in wide use in the United States implied that financial services in America lack modern conveniences.
I pointed out that the fact that an older medium is still accepted does not, in fact, imply that wireless transfer and debit card payments are not available. My argument was not that Europeans should adopt checks because they are superior, but that having check capabilities doesn’t actually mean modern conveniences are lacking, as he implied.
Revealed preferences as applied here can only give a stalemate: Americans preferring checks and Europeans never using checks doesn’t tell us which is superior. Adding in the reversal test, and noting that the European electronic systems came after rather than simultaneously with checks, tells us that in a direct head-to-head comparison checks lost in a large area of the world; and noting that they don’t offer them at all, apparently, tells us that there is a large burden or cost to supporting checks—in direct opposition to your rhetorical invocation of ‘how can additional options be bad?’. With the burden of supporting checks established, the situation now looks like one of path dependence or local-optimum traps: the Europeans were able to escape to a more efficient, secure, useful system of e-banking, while the Americans remain trapped in a local optimum because they cannot profitably shift to an e-banking system while also supporting the costs of the existing checking system.
If this claim was right, there would be lots of locations in which card readers are not available, but checks are accepted. In fact, you seem to be totally wrong about this bizarre claim. In reality, debit cards, credit cards, and charge cards are pretty much always available wherever checks are accepted. I’ve never been to a retailer that accepted checks but not cards.
The same argument could be used to claim that Los Angeles’ transportation infrastructure is more advanced than San Francisco’s. (LA used to have a light rail system, but later transitioned entirely to cars. SF kept light rail going, in addition to having cars).
It’s a good thing I never said that, because it is indeed bizarre. Of course individuals may move closer to the better European optimum, but the system as a whole remains stuck in its local optimum. You may be able to use your debit card in plenty of places, but where’s the rest of the European-style system? Can you trivially send money from account to account? Receive deposits in minutes or hours rather than multiple days? etc.
Would pass the reversal test. There are plenty of LAers who would welcome a light rail system, and googling I see there are active light rail projects.
The downside is, cards aren’t as useful for doing this.
Given the recent history of technological innovation, WTF are you talking about?
There are counterexamples (e.g. renewable energy), but from the internet itself to MS, Intel, Google, FB, Sun, Cisco, IBM, the market and the market innovators (if not the manufacturing) were centered on the United States. Look at where those autonomous cars are developed, and by whom.
What are you thinking of that makes you so incredulous?
In Germany, BMW and Audi develop autonomous cars. In Japan, Toyota develops them. The US carmakers don’t, but Google does.
The US is probably one of the countries where the price you have to pay when your car accidentally hits a pedestrian is highest. If you want to bring a new technology to market that’s likely to kill people by accident, the US might be the worst place.
I think this has more to do with the fact that US automakers have major financial problems and thus can’t afford to spend large amounts of money on speculative research.
From the context in the grandparent, it seemed like you were arguing against the notion that the US is the early adopter.
Sorry if I was unclear.
There are probably similar identity-creating elements in other countries, beyond cars.
Digressing a bit, I’m having a hard time comprehending the notion of assigning identity to anything beyond the way my own mind works. Doing that with a car seems completely insane, but as I said I’ve never even been close to that volume of mind-space, so I might be completely misunderstanding.
I’d like to understand what’s going on, though. Would anyone like to try explaining?
Identity is about your place and role in society, and car ownership feels like a role just as much as anything else, so I’m not sure what exactly about it is that you find confusing. (To me, the notion of only assigning identity to the way one’s mind works sounds a little weird.)
My place and role in society are… just circumstances. Caring about them as ends, rather than means, seems weird to me.
When you put it like that, though… At some point I read an article, and I think it was here on LessWrong, about the types of people who survived WW2 camps. The main takeaway was that the survivors were primarily people who defined themselves in terms of their own mind and immutable attributes, rather than in terms of career, friends, etc.
The article also pointed out that such people are fairly rare, which if the common-case identities also includes items such as cars (I rather think it would) would help explain why common attitudes towards cars are as they are.
I don’t suppose you remember the article in question?
I can’t find that article here, on OB, or in my Evernotes.
Googling, this Holocaust meta-analysis http://www.apa.org/pubs/journals/features/bul1365677.pdf only mentions defensive mechanisms post-Holocaust (focusing on denial); http://amcha.org/Upload/folgen.pdf & http://peterfelix.tripod.com/home/Psychopathology.pdf focused on treatment of survivors and any effects on their children & grandchildren.
Roberta Greene’s ‘resilience’ research sounds relevant, but from this summary http://www.templeton.org/pdfs/press_releases/Utopian%20Spring%202009.pdf it sounds like it found the opposite—dependence on friends & family:
Some interesting citations and facts:
Here.
Yes, that’s it, thanks!
Now I can go on to actually read the book.
Well, evolutionarily at least it would make a lot of sense to care about your role in society as an end. If the village can only support one smith, for instance, then you better focus your energies on being a damn good smith so that nobody else can replace you.
And nope, afraid not.
If the village can only support one smith, and you’re the smith, then if you’re not bloody incompetent you can probably do just fine for yourself as long as you make sure to take on an apprentice eventually (it needn’t be until later in life). Nobody else’s business is smithing, so you don’t have any competition, and as long as your goods perform reasonably well you’re pretty much set.
At best you take a son as an apprentice so that he can feed you when you are too old to work as a smith.
I believe you are recalling a remark by Mike Darwin.
I don’t see any hits for ‘Holocaust’ on chronopause.com, and http://www.ibiblio.org/weidai/lesswrong_user.php?u=mikedarwin turns up nothing useful for ‘Holocaust’ or ‘survivor’.
Anyways I recall the remark being an unjustified assertion.
It was a fairly long article, probably in PDF format. Does that sound right?
I remember seeing it in a LW comment or article, perhaps quoted from somewhere else.
Assuming you never want to leave the city.
Or mostly only visit similar cities.
No, I’m a student living in Berlin; I have no car but do have a driver’s license.
Airplanes and trains are decent ways to leave the city. Even carpooling provides a nice way to do so.
Or am willing to rent a car when I do.
You still need to have a driver’s license to do that.
I think you substantially overestimate how important this is. As urbanization continues and suburbs empty out, cars simply become impossible for many people to support. Further, the car mystique is being attacked at the root: young people. As minimum wages stagnate, teen unemployment continues to increase, insurance maintains its inexorable creep upwards, and additional obstacles are put in the way of getting drivers’ licenses, teens literally cannot afford cars unless their parents buy them. It’s hard for anything to become part of your identity when you cannot obtain it.
Certainly. This is one of the factors making me pessimistic in the short-run. Autonomous cars are simply too novel, and will be treated under a massive double-standard. But as the young people grow up and the statistics start to percolate through the old peoples’ heads, combined with the expected improvements in autonomous cars, the problem will abate. This may not have happened in your physician example, but then again, if taxi drivers had veto power over autonomous cars, it might not happen there either...
Related reading: http://www.theatlantic.com/magazine/archive/2012/09/the-cheapest-generation/309060/ http://www.theatlantic.com/business/archive/2012/08/why-are-young-people-ditching-cars-for-smartphones/260801/
(unrelated) - I’m confused. Is there a reason why random letters are bolded?
Kawoomba’s post spells out “weathehollowmen” (“we the hollow men”, it seems), and gwern’s spells out “lipsthatwouldkissformprayerstobrokenchips” (I suppose that means “lips that would kiss form prayers to broken chips”). I have no idea why, though… probably a quote from something.
Doesn’t everybody know how to use Google yet? :-)
Which leads to a new trilemma on the existence of ignorance:
if a LWer hath not access to Google (the Internet), then from whence is he posting his question?
if a LWer hath access to Google and a desire to know, then for why his question?
if a LWer hath access to Google and desireth to know an answer and obtains it, then how did he post his question and not an answer?
QED, God is not omnipotent. Or something.
It might be fair to point out that ygert did not in fact ask a question (perhaps ygert does not care for looking up references despite caring to ‘uncode’ hidden messages), and brilee might have thought that the bolding was a technical issue and didn’t think to look for a message which would be google-able.
By the way, shortly after posting my comment, I did in fact google it. I didn’t go back and comment again or edit my comment though, assuming that others who want to find out could google it themselves (and I was being lazy). Perhaps that was a mistake.
Further evidence: population-adjusted miles driven per year, normalized to 1971 levels
Peaked in 2005, currently back down to 1998 levels with little sign of the trend slackening.
Also, rising fuel prices. (This is more of an issue in Europe, especially Italy, than in the US, though.)
Overwhelming exception. Where I am, ISTM that most people in their early 20s drive a car, but few of them bought it themselves.
I was explicitly talking about teenagers.
I don’t see how that changes my point. In Italy you need to be 18 before applying for a driving licence, so the fact that younger people don’t drive doesn’t mean much. Many teenagers ride scooters that their parents buy them, and a small, second-hand car isn’t much more expensive, so I guess they’d drive their parents’ car if they were legally allowed to.
Italy is incomparable in many ways to the USA; discussion of trends in the USA do not easily generalize to Italy, so I don’t really care about Italian scooters. My points were about the USA, and I believe they remain valid about the USA; and given the predominance of the USA in technological matters, the USA’s regulation and trends will matter most to the development of autonomous cars.
AFAIK the US is wealthier than Italy, so, if anything, I’d expect American adults to be more willing to buy cars for their children than Italian adults are. Am I missing something? (Probably, given that I’ve never been to the US; but what, exactly?)
Maybe cars cost more here. Maybe insurance costs more. Maybe the culture frowns on scooters as replacements. Maybe a million things.
Wouldn’t that make it less likely for teenagers to buy their own cars, rather than more?
(Maybe I wasn’t clear. What I meant by “overwhelming exception” in the great^n-grandparent is that I’d guess that most of the teenagers who drive cars already are the ones who were bought cars by their parents. Were you implying that in the US until now there have been a large fraction of teenagers who buy their own cars?)
Yes. As far as I can tell, decades ago it was a lot more common for teenagers to buy cars, assisted by part-time jobs.
Those Atlantic articles seem like bogus trend pieces. The main evidence they cite is that the percentage of car sales to youth has declined—not surprising given the aging of the population in recent decades. As for the “suburbs empty[ing] out,” that isn’t actually happening. Suburban populations are still rising, just relatively slowly compared to the growth of cities.
And driverless cars are a boon especially to suburban areas, since they make commuting less annoying (and potentially, much faster).
Certainly. I think there’s multiple overlapping trends feeding the final results, but the cited stats could be purely cyclical or cherrypicked.
And the total population is still growing, which means a shift.
This can only happen after autonomous cars are accepted and widespread for ordinary driving, hence it doesn’t matter to my argument about acceptance.
Ubj vf gur GF Ryvbg ersrerapr eryrinag?
It seems an apt description of the people who are the subject of his first point.
Can you ROT13 that? It’s supposed to be a challenge. (Also, well done!)
I could say it was a harmless bit of levity, or that it’s nice to have some carrot (solving a puzzling problem) to go with the stick (pondering the legality of robotic cars, a niche subject). I could vaguely hint that such references are obliquely applicable in most any context, or that it was a stray thought, if that. I don’t really know. Maybe I have it all backwards.
EDIT: “Maybe I have it all backwards” and “Can you ROT13 that”, along with “I could vaguely hint” would have revealed “stage two complete”. Ah well. :-(
I did actually get it (bolded letters again, weren’t that hard to figure out), and considered a response, but never got motivated enough to post it. It would have been something like the following but I would hope to make it far less clumsy before posting it: “Amusement real enough. Though hidden Eliot reference eluded my own reading’s effectiveness—so took advantage google’s efficient search.”
Sounds like you’re making this up. At least cite some examples. I have never heard anyone express anything like this attitude.
Yes, there are some people who enjoy driving, and some of those people may even choose not to buy driverless cars. But even driving enthusiasts who enjoy taking their car out for a pleasure drive somewhere don’t necessarily enjoy driving the same boring commute every day. So even most driving enthusiasts might appreciate having an autopilot mode in addition to a manual mode for fun driving.
But even if there are some purists, that doesn’t imply that they will try to ban it for everyone else. I’ve never heard of hot-rodders or muscle-car enthusiasts ganging up to ban Priuses because they feel they somehow “threaten their masculinity”.
Basically this sounds like familiar nerd paranoia—the evil macho jocks are trying to ruin everything.
That particular claim I personally witnessed only once or twice—which counts for little. However, I’ve seen the more general pattern all too often: “I personally object to X, therefore X should be forbidden for everyone.” Gay marriage, abortion, THC, you name it. It’s rare to find a stance of (hyperbolically speaking) “I object to that activity, but I will fight to the death for your right to do it”, or even a willingness to legally tolerate it. As such, even a priori (but based on the posteriors on many other issues) I’d expect that pattern to apply to autonomous cars as well.
Instantiated to this instance: People who don’t want autonomous cars because they deem them unsafe, or because they prefer to drive their SUVs themselves, would not mind taking away the rights of others to use them. At least, that’s my claim.
(If someone feels strongly such a phenomenon does not exist and we find a good way to gather broader evidence, we could set up a bet, going to a charity of the winner’s choice.)
I’m pretty sure that phenomenon does exist, but it seems unlikely to me that that’s what’s going on in this case.
You realize that cars can kill/injure people other than their drivers?
You seem to be confusing “I don’t like X” with “I object to X”. The following two examples should help illustrate proper usage:
I don’t like chocolate.
I object to baby eating.
Just because you, personally, object to eating babies, doesn’t mean you have any right to say whether eating babies should be forbidden to others!
Very well, I have a preference too, I prefer that people who kill small children receive the death penalty. Put the electric chair next to the kitchen, you follow your preference, and I’ll follow mine.
(With apologies to Charles James Napier.)
I’m not confusing those, I claim those are all too easily confused in the general population.
Looking at all your comments in this thread, it seems to me that you are. At the very least you don’t seem to have exerted any effort thinking about how to tell whether something is like chocolate or like baby-eating.
Frankly, I’ve seen a lot more instances of people who prefer not to drive SUVs themselves attempting to take away the rights of those who do than the other way around.
Which would exactly be supporting my point of “I don’t like / object to X, so it should be forbidden for everyone!” I could have used your sentence just the same.
If you think of evidence that would contradict my claim, it would be people who oppose (or even just don’t like) X not wishing for X to be illegal.
As I think I already mentioned once, that’s a very common (maybe the majority) stance in Italy about abortion.
A good counterexample. It’s curious but heartening that a country as seemingly Catholic as Italy still accepts the right of others to choose differently. As opposed to, say, Ireland.
Vegetarianism is another one—I don’t think many vegetarians wish eating meat was illegal.
Yep. When people complain of the Church’s influence in Italian politics, I tell them about Ireland (I studied there for a year), including stuff like alcohol sales being banned on Good Friday. (OTOH, the Church does have ridiculous privileges fiscally in Italy, among other things.)
Who the hell is downvoting everything, anyway?
Given their stated reason for not eating meat, a reasonable argument could be made that this behavior is hypocritical.
What are you talking about? The vegetarians I’ve met vary widely in their stated reasons not to eat meat, with many of them being some variety of “I just don’t like it”.
At this point your argument appears to be “I and people similar to myself like to force our preferences about cars down other people’s throats; therefore, so does everybody else”. Sounds like a case of psychological projection to me.
If you’ve “seen a lot of instances of people who prefer not to drive SUVs themselves attempting to take away the rights of those who do”, you seem to have observed the same pattern I did. As for “force our preferences about cars”, I do not know what you mean, since it’s not a topic I’m particularly invested in. Personally, I use whatever gets me from A to B comfortably and safely, signalling be damned.
I also like the ability to choose what point B is, not to mention the ability to decide at B whether to go on to C or back to A without planning out the whole trip ahead of time. This is why I prefer cars to public transportation.
I don’t think it’s fair to say that most of the opposition to abortion comes from people not wanting to use it themselves. They rather think it’s inherently immoral. The same, to a lesser extent, goes for gay marriage.
Autonomous cars don’t seem to be in the same category.
Why haven’t these macho driving enthusiasts tried to ban chauffeurs or taxis? Why haven’t they organized against automatic transmissions or cruise control or parking assist, etc. Again, you just seem like a conspiracy theorist.
I’m reminded of an Irish guy who once said that using manual transmission is one of the three things all men should be able to do (the other two being using maps rather than asking for directions, and opening jars). (Not sure how serious he was.)
(Anyway, it’s cultural. In Italy, pretty much everybody uses manual transmission, regardless of their gender.)
I assume Europeans use manual because the automatics decrease fuel efficiency and fuel is usually twice as expensive in Europe.
Wow. I didn’t know that. (I would have assumed that deciding which gear is most efficient at a given speed was something machines would be better at than humans.) What’s it good for, then?
Automatics are easier to use and probably safer, since both hands can be kept on the wheel. The fuel efficiency reduction is (I think) mainly because of the energy used in the pumping of hydraulic fluid, not switching gears poorly.
Funny.
Again, as clearly as I can:
Claim 1: There are reasons a lot of people do not themselves want to utilize autonomous cars. One of those reasons is their attachment to personally driving cars, what that self-signals and other-signals. There are other reasons.
Claim 2: People are prone to confuse “I do not want to use this because I feel threatened by it” with “I object to autonomous cars” with “This should be forbidden for everyone”. It makes sense, in a way, since with no one being allowed to do X / having X (in this case autonomous cars), there is less actual change to fear and less pressure to defend your individual aversion.
Your conclusion: I’m saying that macho driving enthusiasts try to ban chauffeurs or taxis. This is, of course, nonsense, since a culture that uses chauffeurs or taxis, which are humans, does not threaten the paradigm of “humans themselves control the driving”. Automatic transmissions are an absolute rarity e.g. in Europe (which we aren’t talking about), but even in the US they do not trigger the “a machine is in control” angst. It does not follow from being afraid of losing control of the car that every small step along the way must be perceived as threatening. There is a qualitative difference between automatic transmissions, cruise control, or parking assist and self-driving, “enter your destination” type autonomous cars.
(What’s with the strawmanning, this isn’t Reddit.)
I didn’t misspell the word, so I’m not sure why you’re accusing me of that.
You obviously aren’t even reading my comments. You still have never explained why anyone would feel their own identity was threatened by someone else making a different decision about cars. Most new technologies are viewed with some concern at first. For some reason you are trying to make this about the evils of “male-identity,” instead of normal tech-wariness.
The obvious examples are all political, think “why anyone would feel their own identity was threatened by someone else making a different decision about X” with X being any of (more loaded—because they deal with identity-loaded issues) marriage, abortions, etc.
We seem to differ mostly about how important (not for functional, but signalling/identity—which I see as self-signalling—purposes) cars are to the average American. Is that correct? The market for fast luxury cars (Porsche et al) seems to indicate there is something not strictly functional going on. How many would buy an autonomous 911 versus one you drive yourself?
(Also, it would’ve been hard to quote “chaffeur” without having read the c, the h, the a, the f, another f, the e, the u and the r. I did miss a “u”, sorry for that. Still, 8 out of 9! Hopefully that’s settled then.)
No, that is not the crux of our disagreement. I do disagree somewhat, but that is a side-argument.*
The main disagreement we have is rooted in this:
and this:
Do you also think that liberals oppose guns, GMOs, and fission power plants out of a desire to protect their personal identities? Or do they actually have some reasons to be concerned?
You are really assuming the worst about the people you disagree with. I think the reason some people are wary of self-driving cars is that they are actually afraid of harm caused by self-driving cars. My position has the benefit of simplicity: people are saying what they mean. Your position is based on woo about gender identity and subconscious motivations.
Personally I disagree with the people who are afraid of self-driving cars.
I disagree with the dichotomy you’ve set up—not everything is about utilitarian function or signalling. Many car drivers enjoy their fast cars because they are fun—in the same way puzzles, action movies, drugs, roller coasters, and cooking are fun. Because of the kick of endorphins they get from going 150 mph, etc. But certainly signalling is also a major part of car decisions.
While I am claiming that A: “I personally do not like / object to X based on subconscious etc. reasons” leads to B: “X should be illegal” all too often, I am not claiming that B: “X should be illegal” necessarily implies A: “(...) because of subconscious etc. motivations”.
Does that explanation help?
In part, yes. Nothing better for building group identity than a common enemy to rally against. There are legitimate actual reasons, but looking at protestors chaining themselves to train tracks to stop trains with fissile material, I’d doubt they are driven mostly by rational reasoning.
So do I. I’d be happy to be an early adopter.
I didn’t aim to set up an absolute dichotomy; I’ll reread my previous comments for clarity. It was merely a reason among many (two of which I expounded upon).
Alright, mind addressing my comment now?
I added some.
http://www.youtube.com/watch?v=NUuBXCEWOhc