Over the past couple of years Techdirt has written almost as much as my own Daily Stirrer news site about the pernicious TTIP trade treaty that will effectively end democracy in the developed world. What concerns all critics of TTIP (and there are many, in the USA and Europe, on both sides of the political fence and among businessmen and union leaders) are the corporate sovereignty chapters in trade agreements that grant foreign companies far-reaching powers to sue a government simply for issuing regulations that impact their investments. Recently, there has been a textbook example of how the investor-state dispute settlement (ISDS) tribunals that adjudicate corporate sovereignty cases are literally a law unto themselves. A post on The Hill explains the background:
A company sought to develop a mining and marine terminal project in Canada, but it had to obtain approval from provincial and federal authorities. As part of that process, the company had to submit an environmental impact study (EIS) addressing the project’s potential impacts on the natural and human environment.
A panel of experts was appointed to review that study, and to issue a recommendation on whether the project should go ahead. The experts recommended against approval, partly on the basis that it would have been inconsistent with "core community values." As a result, the federal and provincial officials rejected the project. The company involved, Bilcon, appealed against that decision, but did so invoking NAFTA's corporate sovereignty provisions. The ISDS tribunal ruled that:
The advisory panel's consideration of "core community values" went beyond the panel’s duty to consider impacts on the "human environment" taking into account the local "economy, life style, social traditions, or quality of life." The arbitrators then proclaimed that the government's decision to reject Bilcon's proposed project based on the experts' recommendation was a violation of the NAFTA.
As The Hill article points out, that shouldn't have happened:
The parties to the NAFTA -- the United States, Canada and Mexico -- have all repeatedly clarified that ISDS is not meant to be a court of appeals sitting in judgment of domestic administrative or judicial decisions.
Nonetheless, the ISDS tribunal's lawyers ignored the clear intent of NAFTA's corporate sovereignty provisions, and issued their judgment dismissing local decisions following national laws. Because of the astonishing way that ISDS works, Canada can't even appeal. However, as the article in The Hill points out, the situation would have been even worse had the ISDS tribunal argued correctly:
It shows that ISDS stymies crucial evolution in domestic law. Under the tribunal's reasoning, a breach of international law arises when government officials interpret vague concepts such as the "human environment" or "socio-economic" impacts using principles or terms not expressly found in earlier decisions. Yet, particularly in common-law jurisdictions such as the US's, law develops in large part through new interpretations, adapting to changing circumstances and times. If this evolving process were indeed a breach of international law, the US should expect to face significant liability to foreign companies, especially as ISDS is included in new treaties with capital-exporting countries.
In fact, there is a first hint that the US government is well aware of these huge problems with corporate sovereignty provisions, and that it is already preparing for the day when it loses a major ISDS case. That hasn't happened so far in part because relatively few foreign companies covered by existing trade agreements with corporate sovereignty provisions have major investments in the US that would allow them to make claims. However, that will change dramatically if an ISDS chapter is included in the TTIP/TAFTA deal currently being negotiated. According to Public Citizen's calculations (pdf):
To have provoked massive protests in Europe, and to be an issue that unites the bitterly hostile right and left wing political factions in the USA, the Trans Atlantic Trade And Investment Partnership (TTIP) must be something quite special. The Daily Stirrer believes it is a corporate power grab, a bid to transfer the lawmaking systems of sovereign nations to bureaucrats and corporate lawyers. Learn more below ...
The Association of Global Automakers, a lobbying firm for 12 manufacturers, is asking the U.S. Copyright Office to prevent car owners from accessing “computer programs that control the functioning of a motorized land vehicle, including personal automobiles, commercial motor vehicles, and agricultural machinery, for purposes of lawful diagnosis and repair, or aftermarket personalization, modification, or other improvement.”
“In order to modify automotive software for the purpose of ‘diagnosis and repair, or aftermarket personalization, modification, or other improvement,’ the modifier must use a substantial amount of the copyrighted software – copying the software is at issue after all, not wholly replacing it,” the AGA claimed. “Because the ‘heart,’ if not the entirety, of the copyrighted work will remain in the modified copy, the amount and substantiality of the portion copied strongly indicates that the proposed uses are not fair.”
Auto Alliance, which also represents 12 automobile manufacturers, is also asking the agency to scrap exemptions to the Digital Millennium Copyright Act that allow car enthusiasts to modify and tune their rides.
“Allowing vehicle owners to add and remove [electronic control] programs at whim is highly likely to take vehicles out of compliance with [federal] requirements, rendering the operation or re-sale of the vehicle legally problematic,” Auto Alliance claimed in a statement. “The decision to employ access controls to hinder unauthorized ‘tinkering’ with these vital computer programs is necessary in order to protect the safety and security of drivers and passengers and to reduce the level of non-compliance with regulatory standards.”
But people have been working on their own cars since cars were invented.
“It’s not a new thing to be able to repair and modify cars,” a staff attorney with the Electronic Frontier Foundation, Kit Walsh, said. “It’s actually a new thing to keep people from doing it.”
Interestingly, this attack on the do-it-yourself auto hobby coincides with the current push towards self-driving cars, and who do you think will resist autonomous cars the most?
Auto hobbyists, such as hot rodders, drag racers and home tuners.
“The biggest threat to our hobby is those people in powerful situations who’s idea of a great day out in their car is to spend it riding in the back seat while someone else handles the driving ‘chore’ for them,” a hot rodder said on the subject. “These are the same people who will ban ‘old junk’ from the roads, enforce ’50 miles per gallon’ standards on new, and then older vehicles, and eventually force everyone to drive ‘standardized’ cars that will fit precisely in parking spaces, take up the minimum space on public roads, and follow all the ‘environmentally friendly’ buzz words while boring real car drivers like us to death.”
And the first step to keep people from behind the steering wheel is to keep them from opening the hood.
Last week, we noted that Senator Ron Wyden and Rep. Jared Polis had introduced an important bill to fix a part of the DMCA's broken anti-circumvention laws found in Section 1201 of the DMCA. For whatever reason, some people still have trouble understanding why the law is so broken. So here's a story that hopefully makes the point clearly. Thanks to DMCA 1201, John Deere claims it still owns the tractor you thought you bought from it. Instead, John Deere claims you're really just licensing that tractor:
In the absence of an express written license in conjunction with the purchase of the vehicle, the vehicle owner receives an implied license for the life of the vehicle to operate the vehicle, subject to any warranty limitations, disclaimers or other contractual limitation in the sales contract or documentation.
How nice of John Deere to say that your ability to operate the vehicle is really subject to the "implied license" it granted you. These comments (and many others) come in response to the ridiculous triennial review process in which the Librarian of Congress reviews requests to "exempt" certain cases from Section 1201's rules against circumvention. We discussed the ridiculous responses from some concerning video game archiving last week, and the John Deere statement is in response to requests to diagnose, repair or modify vehicle software. And, of course, lots of car companies are against this, including GM, which argues that all hell will break loose if people can diagnose problems in their own cars' computers. It, too, thinks that you don't really own your car and worries that people are mixed up in thinking they own the software that makes the car they bought run:
Proponents incorrectly conflate ownership of a vehicle with ownership of the underlying computer software in a vehicle.... Although we currently consider ownership of vehicle software instead of wireless handset software, the law’s ambiguity similarly renders it impossible for Proponents to establish that vehicle owners own the software in their vehicles (or even own a copy of the software rather than have a license), particularly where the law has not changed.
But the real conflation here is by GM, John Deere, and others, in thinking that because they hold a copyright to some software, that somehow gives them ownership over what you do with the copy you legally purchased with the car itself. Once that purchase is concluded, the companies should be seen as having given up any proprietary interest in the copy that came with the single vehicle you bought. But thanks to copyright and Section 1201, that's an issue that faces "uncertainty." And that's a problem.
Google’s Driverless Cars Causing Accidents, But Police Reports Remain Hidden
Many people have been unaware that Google’s driverless “concept” cars have been provisionally deemed ‘road legal’ by the Department of Motor Vehicles, and have been active on California’s roads over the last 12 months. That’s not all…
As it turns out, Google’s self-driving electric car is not as ‘idiot proof’ as they thought. It’s been causing accidents. Why police are remaining so tight-lipped about this trend is unknown, but there could be a classified ‘DARPA-like’ aspect to this new tech.
It’s all been a bit hush-hush on Google’s end; the company wants the public to believe that its driverless cars are no worse than human drivers behind the wheel.
Liability
This latest revelation also brings up the issue of liability. If there is a fatal accident caused by a glitch in the robotic software or hardware, then who is legally responsible? Google?
Google already has hundreds of millions invested in this technology and product line, and they want to bring their new project to market very soon – within the next 5 years, so expect general curiosity and scrutiny to increase between now and then…
Google’s Plan to Eliminate Human Driving in 5 Years
Google’s adorable self-driving car prototype hits the road this summer, the tech giant announced last week. Real roads, in the real world. The car has no steering wheel or pedals, so it’s up to the computer to do all the driving.
As cool as this sounds, it isn’t a huge technological step forward. The goofy little cars use the same software controlling the Lexus and Toyota vehicles that have logged hundreds of thousands of autonomous miles, and Google’s spent the past year testing its prototypes on test tracks. And, in keeping with California law, there will be a human aboard, ready to take over (with a removable steering wheel, accelerator pedal, and brake pedal) if something goes haywire.
What’s important here is Google’s commitment to its all-or-nothing approach, which contrasts with the steady-as-she-goes approach favored by automakers like Mercedes, Audi and Nissan.
Autonomous vehicles are coming. Make no mistake. But conventional automakers are rolling out features piecemeal, over the course of many years. Cars already have active safety features like automatic braking and lane departure warnings. In the next few years, expect cars to handle themselves on the highway, with more complicated urban driving to follow.
“We call it a revolution by evolution. We will take it step by step, and add more functionality, add more usefulness to the system,” says Thomas Ruchatz, Audi’s head of driver assistance systems and integrated safety. Full autonomy is “not going to happen just like that,” where from one day to the next “we can travel from our doorstep to our work and we don’t have a steering wheel in the car.”
Google thinks that’s exactly what’s going to happen. It isn’t messing around with anything less than a completely autonomous vehicle, one that reduces “driving” to little more than getting in, entering a destination, and enjoying the ride. This tech will just appear one day (though when that day will be remains to be seen), like Venus rolling in on a scallop shell, fully formed and beautiful.
Caught On Tape: Self-Driving Car Ploughs Into Journalists
Apple has, in case the fading wearables mania fizzles out, a Plan C: a self-driving car.
Or maybe not, because for a company built on the successful creation, execution and marketing of gadgets with a two year average lifespan, the worst thing that can happen is for the world to glimpse the unpleasant reality behind the glitzy, futuristic facade for sale every day (usually with a 4-6 week delivery delay) in Cupertino.
Such as this video, taken in the Dominican Republic, showing a self-parking Volvo XC60 reversing itself, waiting, and then slamming into journalists who were gawking at the "fascinating" if somewhat homicidal creation, at full speed.
As the Independent reports, the horrifying pictures went viral and were presumed to have resulted from a malfunction with the car.
Only it wasn't a malfunction.
Instead, in what is perhaps the most epic "option" in automotive history, Volvo decided to make the special feature known as “pedestrian detection functionality” cost extra money.
It gets better: the cars do have auto-braking features as standard, but only for avoiding other cars — if they are to avoid crashing into pedestrians, too, then owners must pay extra.
The “Internet of Things” Gets Hacked To Smithereens
Nothing is secure, not even drug infusion pumps in hospitals.
You see, the Internet of Things is the rapidly arriving era when all things are connected to each other and everything else via the Internet, from your Nest thermostat that measures and transmits everything that’s going on inside your house to your refrigerator that’s connected to Safeway and automatically transmits the shopping list, to be delivered by a driverless Internet-connected car with an Internet-connected robot that can let itself into your house and drop off the Internet-connected groceries while you’re at work.
Convenient? Convenient for hackers.
OK, someone hacking into your fridge and fiddling with the temperature setting to freeze your milk is one thing…. But we already had the first hacking and remote takeover of a car.
Researchers hacked into a Jeep Cherokee via its Internet-connected radio system and issued commands to its engine, steering, and brakes until it ran into the ditch. Thankfully this exploit wasn’t published until after Chrysler was able to work out a fix. It then recalled 1.4 million vehicles. The “recall” was done just like the hackers had done it: via the Internet. So if Chrysler can modify the software via the Internet, hackers can too.
That was a week ago. Today, the National Highway Traffic Safety Administration warned that Chrysler’s supplier sold these hackable radio systems to “a lot of other manufacturers.” NHTSA head Mark Rosekind told reporters: “A lot of our work now is trying to find out how broad the vulnerability could be.”
Maybe better not drive your Internet-connected car for a while.
And yesterday, researchers demonstrated (video) how hackers could exploit a security flaw in a mobile app for GM’s OnStar vehicle communications system.
To top off the week, the Food and Drug Administration warned today that hospitals and other healthcare facilities should stop using Hospira’s Symbiq Infusion System, a computerized pump that continuously delivers medication into the bloodstream because it’s vulnerable to hacking.
The FDA explained that the system communicates with a Hospital Information System (HIS) via a wired or wireless connection. The HIS is connected to the Internet. And thus, this pump is just one more thing on the Internet of Things.
“We strongly encourage” hospitals to “discontinue use of these pumps,” and do so “as soon as possible,” the FDA said.
The Department of Homeland Security’s Industrial Control Systems Cyber Emergency Response Team (in government alphabet soup: ICS-CERT) is also “aware” of these cybersecurity vulnerabilities.
Hospira and an independent researcher confirmed that Hospira’s Symbiq Infusion System could be accessed remotely through a hospital’s network. This could allow an unauthorized user to control the device and change the dosage the pump delivers, which could lead to over- or under-infusion of critical patient therapies.
So this could be deadly. Thank goodness, the “FDA and Hospira are currently not aware of any patient adverse events or unauthorized access of a Symbiq Infusion System….”
The first essential step “to reduce the risk of unauthorized system access”: “Disconnect the affected product from the network.”
In other words, there is no fix. Hence, unplug the thing from the Internet of Things, and then deal with the ensuing “operational impacts.”
“Cyber security” is a figment of marketing imagination. There is no such thing as a connected device that is secure. The best security measures only make a hacker’s job harder and more time-consuming, but not impossible.
We’ve already accepted, despite occasional outbursts, that we live in a seamless surveillance society. But the Internet of Things goes beyond surveillance, so this won’t be the only story of a cyber-vulnerability of a potentially life-threatening kind. But hey, greet the Internet of Things, and all the Silicon Valley hype and money that is sloshing around it, with open arms. We get it. This is going to be good for us.
Google car AI qualifies as a ‘driver,’ US regulator says
A US traffic regulator has said that the artificial intelligence controlling Alphabet Inc’s Google self-piloted car can be considered a driver just like a human.
In a recently revealed letter, the National Highway Traffic Safety Administration (NHTSA) stated that it “will interpret ‘driver’ in the context of Google’s described motor vehicle design as referring to the SDS [self-driving system] and not to any of the vehicle occupants.”
The acknowledgement is a big boost for getting the SDSs on the road. The director of Google’s self-driving car project said the agency’s decision “will have major impact” on its development, according to a November letter reviewed by Reuters on Wednesday.
This statement however is not an official announcement and it does not mean that Google cars are going to be driving around freely anytime in the near future. NHTSA warns that Google might face many other problems in relation to existing regulations. “NHTSA would need to commence a rulemaking to consider how FMVSS [Federal Motor Vehicle Safety Standards] No. 135 might be amended in response to ‘changed circumstances’ in order to ensure that automated vehicle designs like Google’s… have a way to comply with the standard,” the letter continues.
Another obstacle the company’s cars will have to face is technical: ensuring the safety of passengers on the road and preventing the AI system from being hacked. The company’s autonomous cars have been involved in 17 accidents over the 2 million miles they have driven, a University of Michigan study has established.
***************
The accident rate per million miles travelled for human-driven cars is slightly less than one. https://www.cga.ct.gov/2004/rpt/2004-R-0035.htm
So that makes for 1.8 (let's be generous to the technology geeks and say two) per two million miles travelled. Which kind of leaves Google's idiotmobile, at eight per million miles, looking like a road safety hazard. I think you owe Vlad an apology. And do some fact checking in future. A good place to start is with the assumption that if Google said something is true, it is not true.
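The comparison above is easy to check with a quick calculation. This is a minimal sketch using only the figures quoted: 17 accidents over 2 million autonomous miles from the Michigan study, against an assumed human baseline of roughly 0.9 accidents per million miles (the rounded "slightly less than one" figure cited above):

```python
# Accident-rate comparison from the figures quoted above.
# The human baseline of 0.9 per million miles is an assumption,
# a rounding of the "slightly less than one" claim cited.

def accidents_per_million(accidents: int, miles: float) -> float:
    """Return the accident rate per million miles travelled."""
    return accidents / (miles / 1_000_000)

google_rate = accidents_per_million(17, 2_000_000)  # 8.5 per million miles
human_rate = 0.9                                    # assumed human baseline

print(f"Google: {google_rate:.1f} accidents per million miles")
print(f"Human:  {human_rate:.1f} accidents per million miles")
print(f"Google's rate is about {google_rate / human_rate:.1f}x the human rate")
```

On these numbers the autonomous fleet comes out at 8.5 accidents per million miles, around nine times the assumed human rate — which is the arithmetic behind the "eight per million miles" claim above.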
Driverless cars will take to Britain's motorways for the first time next year
George Osborne announces trials for driverless cars on motorways as he hails them as the 'most fundamental change to transport since the invention of the internal combustion engine'
Driverless cars will take to Britain's motorways for the first time next year, George Osborne has announced ahead of his Budget next week.
The Chancellor said that trials will take place on local roads this year before being extended to A-roads and motorways in 2017.
He vowed to clear red tape so that the cars can be sold to the public and put into widespread use on Britain's roads by 2020.
However the Chancellor was yesterday warned against reviving the "war on the motorist" amid concerns he is planning to raise fuel duty and increase insurance premiums.
Mr Osborne is considering an increase in fuel duty of up to 2p a litre, which he is said to believe is justified in the wake of the collapse of world oil prices.
The move has prompted a furious response from Conservative MPs, with scores of them signing a survey opposing the move amid concerns it will alienate motorists.
The Chancellor is also considering a further hike in insurance premium tax, which could see motorists pay an additional £80 for their cover.
The AA, the motoring organisation, said it had been informed by senior Westminster sources that the plan is being considered and warned the Chancellor not to treat drivers like "wallets on wheels".
Mr Osborne is said to be considering a series of "stealth taxes" after he was forced to abandon plans to scrap higher rate tax relief on pensions following a furious backlash.
The Chancellor said that at a time of "global uncertainty" he wants Britain to be a "world leader" in new technologies such as driverless cars.
The motorway trials, which will be overseen by Highways England, will take place at quiet times on lanes which are closed off to other traffic.
Mr Osborne said: "Driverless cars could represent the most fundamental change to transport since the invention of the internal combustion engine. Naturally we need to ensure safety, and that’s what the trials we are introducing will test."
Under the plans groups of driverless lorries could also soon be seen along Britain’s motorways as the government pushes ahead with bringing about next-generation transport.
A stretch of the M6 near Carlisle has reportedly been earmarked as a potential test route.
Why The Hard-Sell For The "Self-Driving" Car?
Submitted by Tyler Durden on 04/29/2016 19:00 -0400
This week, Ford and Volvo announced they are forming a “coalition” – along with Google – to push not only for the development of self-driving cars, but for federal “action” (their term) to force-feed them to us.
Why?
The reasons are obvious: There’s money – and control – in it.
To understand what’s going on, to grok the tub-thumping for these things, it is first of all necessary to deconstruct the terminology. The cars are not “self-driving.” This implies independence.
And “self-driving” cars are all about dependence.
The “self-driving” car does what it has been programmed to do by the people who control it. Which isn’t you or me. Instead of you controlling how fast you go, when to brake – and so on – such things will be programmed in by … programmers. Who will – inevitably – program in parameters they deem appropriate. What do you suppose those parameters will be?
“Safety” will be the byword, of course.
But the point being, you will no longer have any meaningful control over (ahem!) “your” car. You’ll pay for the privilege of “owning” it, of course. But your “ownership” will not come with the right to control what you “own.”
It will be a tag-team of the government and the car companies who control (and thereby, effectively own) “your” car.
And thereby, you.
Not only will how you drive (well, ride) be under their control, they will also know where and when you go. It will be easy to keep track of you in real time, all the time. And if they decide they don’t want you to go anywhere at all, that’s easy, too. Just transmit the code and the car is auto-immobilized.
You only get to go when you have their permission to go. It will be a very effective way of reducing those dangerous “greenhouse gas” emissions, for instance.
If this all sounds paranoid, consider the times we live in. Reflect upon what we know for a fact they are already doing.
For instance, making the case – in court – that we (the putative “owners” of “our” vehicles) ought to be legally forbidden from making any modifications to them. The argument being that such modifications could potentially affect various “safety” systems and they do not want to be held liable for any resultant problems that may occur.
This argument easily scales when applied to the self-driving car, which we will be forced to trust with our lives at 70 MPH.
For at least 30 years now – since the appearance of anti-lock brakes back in the ‘80s – the focus of the car industry has been to take drivers and driving out of the equation. To idiot-proof cars. This is easier – and more profitable – than merely building cars that are fun to actually drive.
How much profit margin has been added to a new car via (6-8) air bags? We pay more for the car, more to repair the car (and so, more to insure the car).
This also scales.
The technology that will be necessary to achieve the “self-driving” car is very elaborate and very expensive.
Thus, very profitable.
Which by itself would be fine… provided we could choose. But we will be told. Like we’re told we must have 6-8 air bags and all the rest of it.
This is the “action” Ford and Volvo and Google are seeking.
I personally have no doubt that, in time, they will make it illegal to own a car that is not “self-driving.” Well, to actually drive the thing. Static museum displays may still be permitted.
Tesla – the state-subsidized electric car – already has the necessary “self-driving” technology and Elon Musk is pushing it, hard. He says it’s a gotta-have because people cannot be trusted to drive themselves. There’s a clue for you as to the mindset of our masters.
But the current price of the least expensive Tesla is just under $70,000.
This is not economically viable when the average family’s income is in the neighborhood of $50,000. And keep in mind, that means half the people to the left of average make less than $50,000.
They cannot afford to buy $70,000 cars; many cannot even afford $25,000 cars.
But maybe they can afford to rent them.
This appears to be where we are headed. The perpetual rental. It makes sense, too – from an economic point-of-view. Why buy that which you don’t really own because it’s not under your control? It would be absurd to buy the bus that you ride to work in. It is arguably just as absurd to buy the car you are driven to work in, too.
The object of this exercise appears to be perpetual debt-servitude as well as placing almost everyone fully and finally under the complete control of the powers that be. Who are no longer just the powers in government. The distinction between state power and corporate power is so blurry now as to be almost impossible to parse. The two are effectively the same thing, working hand in hand for their mutual benefit.
All within the state, nothing outside the state, nothing against the state.
Sadly, there is no push back. Or doesn’t seem to be. The cattle appear to like the idea of being herded. It is depressing.
The passivity and acceptance of it all.
Must be something in the water.
See also the comment thread at the Zero Hedge copy of this:
http://www.zerohedge.com/news/2016-04-29/why-hard-sell-self-driving-car
Ford: Self-driving cars are five years away from changing the world
The technical leader of Ford's autonomous car project speaks about what it's like to be driven by a driver-less car, and how big a deal self-driving vehicles will really be.
The self-driving car is one of the most hotly contested areas of tech development right now, with tech companies like Google (and soon maybe Apple), as well as established car makers like Ford and Volvo each trying to overtake the competition.
Earlier this year Ford said it will triple the size of its autonomous Ford Fusion Hybrid test vehicles, announcing plans to test 30 vehicles on roads in Arizona, California, and Michigan.
ZDNet recently spoke to Jim McBride, technical leader in Ford's autonomous vehicles team, about the future of driving.
Where is Ford's autonomous vehicles project now?
Jim McBride: We have roughly ten cars driving right now and we're going to triple that by the end of the year. We're on public roadways -- in fact, there's one out driving right now. It's foggy and rainy here in Dearborn [Michigan] so it's a chance to go out and check out some weather that's not typical sunny weather.
What are the big technical challenges you are facing?
When you do a program like this, which is specifically aimed at what people like to call 'level four' or fully autonomous, there are a large number of scenarios that you have to be able to test for. Part of the challenge is to understand what we don't know. Think through your entire lifetime of driving experiences and I'm sure there are a few bizarre things that have happened. They don't happen very frequently but they do. How do you build that kind of intelligence in?
It's a difficult question because you can't sit down and write a list of everything you might imagine, because you are going to forget something. You need to make the vehicle generically robust to all sorts of scenarios, but the scenarios that you do anticipate happening a lot, for example people violating red lights at traffic intersections, we can, under controlled conditions, test those very repeatedly. We have a facility near us called Mcity, and it's basically a mock-urban environment where we control the infrastructure. While you and I may only see someone run a red light a few times a year, we can go out there and do it dozens of times just in the morning.
So for that category of things we can do the testing in a controlled environment, pre-planned. We can also do simulation work on data and, aside from that, it's basically getting out on the roads and aggregating a lot of experiences.
How long do you think it will be before autonomous vehicles are commonplace?
That's a question that I have a hard time answering because if you have autonomous vehicle technology wrapped up, you can imagine applying that to a whole variety of business cases.
It could be as simple as 'downtown London is too congested and we're going to shut that off to everything apart from some mobility shuttles'. That's a different problem to saying 'I'm going to do a ride-sharing service', which is a different problem to saying 'I'm going to do parcel delivery and a fleet service' or 'I'm going to do personal ownership, where you own and operate the car'. Each of those different uses comes with a different business model and a different time to launch it.
So I think the more important question is, 'when are the underlying technologies going to be available?', and I would say the answer to that is probably four or five years. Then you go from there, deciding how you will employ that technology.
What's it like being driven in an autonomous car?
This is one of the misconceptions that people have. When the car's doing what it's supposed to do, it's very mundane. The car performs like a good human driver would perform. Our project is aimed at fully autonomous, meaning we're not going to ask the driver to have to take responsibility. We don't think it's a fair proposition to have the car drive for hundreds or thousands of miles, then suddenly encounter a situation that's difficult and throw its hands up and say, 'It's your turn'. Our design is predicated on the vehicle always being able to drive itself.
Do you think there will be a mix of self-driving and human-driven cars on the roads in the future?
My personal feeling is there will be a mix of cars. We don't want to deny anyone the opportunity to drive. As a matter of fact, our vehicles are dual-use: you drive them like a normal production car and then, when you want to turn on the autonomous system, it's not very different from turning on your cruise control today. Then, if you want to resume control, you just disengage the system by tapping a button or grabbing the steering wheel.

Do you use a particular model of car for testing?
Right now, our development platform is the Ford Fusion hybrid, but the way it's designed, the software is portable to pretty much any of the vehicles in the fleet. There are a couple of reasons we picked a hybrid. It has easy access to the electrical system, and that's important when you want to drive a car by wire and when you might want a little extra electrical load for your computers and sensors.
But additionally, the Fusion is a mid-level car that has every driver-assistance feature the company offers. The statement there is that, when we do produce a vehicle that's autonomous, we're not looking to sell to a few elite rich customers; we are trying to democratize it across the fleet as soon as possible, so everyone can enjoy the benefits.
How much computing power do you have to have in the car?
Right now I can tell you what's in the trunk of our car: it's about the equivalent of five decent laptops. At the moment we have a little bit of extra overhead, so that we can try out new code and things like that. I would imagine that as you go to production it would be an embedded hardware unit.
Do the cars always have to be connected to the network to drive?
You can't depend on it [the infrastructure] being there, so you have to have the vehicle be able to stand on its own. We are designing the vehicle to be able to do that, but if there are any connections to the infrastructure, we'll exploit them. So if traffic signals want to talk to us, we'll listen, but we can't be reliant on them all broadcasting. Another use for the infrastructure is to send data back up to the cloud, so that if you notice any deviations from the map you are driving through, you can report those and download updates and things like that.

How big a deal are autonomous cars going to be, really?
I would say it's a paradigm shift that's not terribly dissimilar from [the shift from] horses and carriages going to cars. We're going to have cars driving without you -- without the occupant -- having to do anything. That's a huge paradigm shift and it opens up a whole variety of new business models that weren't previously available.

Can other drivers tell you are driving an autonomous car?
At the moment ours is obvious because we chose not to cut holes in the car to hide the sensors, because we wanted to be flexible about the design. The next cars we are building will be less obvious, the sensors will be a lot more hidden in the body of the vehicle.
I just did 600 miles of driving in California two weeks ago on an interstate, and it's got to the point now where probably only 10 people in 600 miles even looked up from their other driving tasks to pay attention to the fact that we were in an autonomous car, so there's been quite a shift in that regard. In the old days everyone would gawk, slow down, take pictures and wonder what the car was. It's not such a big deal any more.
Every day, our cars are becoming smarter and more connected. This may someday save your life in a crash, or prevent one altogether — but it also makes it far harder to evade blame when you're the cause of a fender-bender.
One Tesla owner appears to be finding that out firsthand as he struggles to convince the luxury automaker his wife wasn't the one who crashed his Model X. Instead, he complains, the car suddenly accelerated all by itself, jumped the curb and rammed straight into the side of a shopping center.
Tesla is disputing the owner's account of the incident, citing detailed diagnostic logs that show the car's gas pedal suddenly being pressed to the floor in the moments before the collision.
"Consistent with the driver's actions, the vehicle applied torque and accelerated as instructed," Tesla said in a press statement.
At no time did the driver have Tesla's autopilot or cruise control engaged, according to Tesla, which means the car was under manual control; it couldn't have been anyone but the human who caused the crash. The car uses multiple sensors to double-check a driver's accelerator commands.
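Tesla's actual log schema is proprietary and not given in the article, but the kind of check it describes, scanning telemetry for full-throttle pedal input while no driver-assist feature was engaged, can be sketched roughly as follows. Every field name and value below is a hypothetical illustration, not Tesla's real format:

```python
# Hypothetical telemetry records. Field names and values are illustrative
# only; Tesla's real diagnostic log schema is proprietary.
log = [
    {"t": 0.0, "accel_pedal_pct": 2,   "autopilot": False},
    {"t": 0.5, "accel_pedal_pct": 3,   "autopilot": False},
    {"t": 1.0, "accel_pedal_pct": 98,  "autopilot": False},  # pedal near the floor
    {"t": 1.5, "accel_pedal_pct": 100, "autopilot": False},
]

def manual_full_throttle_events(records, threshold=95):
    """Timestamps where the accelerator was near the floor while no
    driver-assist feature was engaged, i.e. evidence of manual input."""
    return [r["t"] for r in records
            if r["accel_pedal_pct"] >= threshold and not r["autopilot"]]

print(manual_full_throttle_events(log))  # [1.0, 1.5]
```

A records query of this shape, timestamped pedal position cross-referenced against assist-feature state, is what lets an automaker assert "consistent with the driver's actions" rather than merely disputing an owner's account.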
The Model X owner appears to be standing by his story, but here's the broader takeaway. Cars have reached a level of sophistication in which they can tattle on their own owners, simply by handing over the secrets embedded in the data they already collect about your driving.
Your driving data is extremely powerful: It can tell your mechanic exactly what parts need work. It offers hints about your commute and your lifestyle. And it can help keep you safe, when combined with features such as automatic lane-keeping and crash avoidance systems.
But there is a potential dark side: the data can be abused. A rogue insurance company might look at it and try to raise your premiums. It might give automakers an incentive to claim that you, the owner, were at fault for a crash even if you think you weren't. To be clear, that isn't necessarily what's going on with Tesla's Model X owner. But the case offers a window into the kinds of issues drivers will increasingly face as their vehicles become smarter.
Tesla and Google are both driving toward autonomous vehicles. Which company is taking the better route?
Google and Tesla agree autonomous vehicles will make streets safer, and both are racing toward a driverless future. But when Google tested its self-driving car prototype on employees a few years ago, it noticed something that would take it down a different path from Tesla.
Once behind the wheel of the modified Lexus SUVs, the drivers quickly started rummaging through their bags, fiddling with their phones and taking their hands off the wheel — all while traveling on a freeway at 60 mph.
“Within about five minutes, everybody thought the car worked well, and after that, they just trusted it to work,” Chris Urmson, the head of Google’s self-driving car program, said on a panel this year. “It got to the point where people were doing ridiculous things in the car.”
After seeing how people misused its technology despite warnings to pay attention to the road, Google has opted to tinker with its algorithms until they are human-proof. The Mountain View, Calif., firm is focusing on fully autonomous vehicles — cars that drive on their own without any human intervention and, for now, operate only under the oversight of Google experts.
Tesla, on the other hand, released a self-driving feature called autopilot to customers in a software update last year. The electric carmaker, led by tech billionaire Elon Musk, says those who choose to participate in the “public beta phase” will help refine the technology and make streets safer sooner.
Tesla drivers already had logged some 130 million miles using the feature before a fatal crash in Florida in May made it the subject of a preliminary federal inquiry made public on Thursday.
The divergent approaches reflect companies with different goals and business strategies. Tesla’s rapid-fire approach is in line with its image as a small but significant auto industry disruptor, while Google — a tech company from which no one expects auto products — has the luxury of time.
With the National Highway Traffic Safety Administration yet to release guidelines for self-driving technology, existing regulation has little influence on corporate tactics.
That makes Google’s caution even more surprising, as it has long operated with the Silicon Valley ethos of launching products fast and experimenting even faster. But in developing self-driving cars, the company has splintered from its software roots. It is taking its time to perfect a revolutionary technology that will turn Google into a company that helps people get around the real world the way it helps them navigate the Internet.
“I’ve had people say, ‘Look, my Windows laptop crashes every day — what if that’s my car?’ ” Urmson said at a conference held by The Times on transportation issues. “How do you make sure you don’t have a ‘blue screen of death,’ so to speak?”
The stakes are simply higher with self-driving cars than with operating systems and apps, Urmson said. That’s why Google has yet to bring its self-driving technology to consumer vehicles even though it’s been in development for seven years and logged more than 1.5 million test miles.
Tesla insists its vehicles go through rigorous in-house testing and are proved safe before they reach consumers. And, according to the company, putting them on the roads only makes the software — which learns from experience — better.
“We are continuously and proactively enhancing our vehicles with the latest advanced safety technology,” a Tesla spokeswoman said via email.
And there’s truth to that, said Jeff Miller, an associate professor in the Computer Engineering Department at USC: there is no way to stamp out every problem with a technology before launching it. At some point, this kind of technology needs to be thrown into the real world.
“Every single program in the world has bugs in it,” he said. “You have imperfect human beings who have written the code, and imperfect human beings driving around the driverless cars. Accidents are going to happen.”
But this doesn’t mean these products shouldn’t launch.
“We have been testing the vehicles in labs for a good number of years now,” Miller said. “Like with airplanes, eventually you’re going to have that first flight with passengers on it.”
Getting the technology to work is only half the challenge, though. As Google learned when its employees took their hands off the wheel, the other half is ensuring that the technology is immune to human error.
It’s not enough for the technology in a vehicle to simply work as intended, said David Strickland, a former chief of the NHTSA who now leads the Self-Driving Coalition for Safer Streets, a group that includes Google, Volvo, Ford, Uber and Lyft. Part of the safety evaluation has to account for how the technology could be misused, and companies must build protections against that.
Tesla and other automakers have launched automated cruise control features with built-in sound alerts if a driver’s hands are not detected on the wheel. But these checks aren’t foolproof, either.
“Having developed software and hardware products … I can point to the incredible inventiveness of customers in doing things you just never, ever considered possible, even when you tried to take the ridiculous and stupid into account,” said Paul Reynolds, a former vice president of engineering at wireless charging technology developer Ubeam. “If customer education is the only thing stopping your product from being dangerous in normal use, then your real problem is a company without proper consideration for safety.”
Google and other automakers aim to solve the human problem by achieving the highest level of autonomy possible. The NHTSA ranks self-driving cars based on the level of control the driver cedes to the vehicle, with 1 being the lowest and 5 the highest.
Tesla’s autopilot feature is classified as level 2, which means it is capable of staying in the center of a lane, changing lanes and adjusting speed according to traffic. Google is aiming for levels 4 and 5 — the former requires a driver to input navigation instructions, but relinquishes all other control to the vehicle, while level 5 autonomy does not involve a driver at all.
Volvo plans to launch a pilot program for its level 4 autonomous car next year. BMW has signaled ambitions to develop levels 3, 4 and 5 autonomous vehicles.
The problem with level 2, critics say, is that it’s just autonomous enough to give drivers the false sense that the vehicle can drive itself, which can lead to careless behavior.
Tesla disputes this — its owner’s manual details the feature’s limitations — and it says drivers are actually clamoring for the product. Tesla executive Jon McNeill said in a February investor call that the autopilot feature is “one of the core stories of what’s going on here at Tesla.”
The sudden rollout of the tool in October is in line with a company that has made a name for itself as a boundary-pusher that appeals to those willing to take a risk on technology with world-changing potential.
Its regular software updates bring flashy, first-of-their-kind functions to cars already on the road — a way to build loyalty among current owners and court new ones. Indeed, 40-year-old Joshua Brown, who died when his Tesla Model S failed to detect a white big rig against the bright sky, posted two dozen videos showing the autopilot technology in action.
Analysts aren’t surprised that Tesla is moving faster than Alphabet Inc. — Google’s parent company and the second most-valuable publicly traded company on American markets. Cars, after all, are Tesla’s business.
Google makes money from its search and advertising business and has its hands in hardware, software, email and entertainment. Self-driving vehicles are one of its “moonshots” — ambitious projects with no expectation for short-term profitability. They are lumped into Google X, a secretive arm of the company that has experimented with ideas such as using balloons to connect the world to Wi-Fi and the head-mounted gadget Google Glass.
The company has no plans to manufacture and sell its own vehicles. Instead, it likely will partner with automakers, hoping its autonomous-driving software will come to dominate the market the same way its Android operating system dominates the smartphone industry.
“Google has the time, and they can develop things quietly,” said Michelle Krebs, a senior analyst with Auto Trader, “whereas Tesla is under some pressure to build this car company and start making a profit.”
As self-driving technology becomes commonplace, regulators, automakers and consumers will have to decide whether rolling out early products is worth the potential risk, said Shannon Vallor, a philosophy professor at Santa Clara University who studies the intersection of ethics and technology.
“It is far from obvious that the ends here do justify the beta testing of this technology on public roads without better safeguards,” Vallor said.

tracey.lien@latimes.com

Times staff writer Paresh Dave contributed to this report.
In wake of fatal Tesla crash, BMW is in slow lane to roll out self-driving vehicles
A day after the disclosure of the first death in a crash involving a self-driving vehicle, BMW on Friday announced plans to release a fleet of fully autonomous vehicles by 2021.
In a partnership with Intel and Mobileye, the German automaker said its planned iNEXT model won’t require a human in the driver’s seat.
That marks a different course toward self-driving vehicles than Tesla, which offers a self-driving “autopilot” feature to those participating in a “public beta phase” -- though drivers are supposed to stay engaged and keep their hands on the steering wheel.
That system was in use during a fatal crash in Florida in May in which a Tesla Model S failed to detect a big-rig in its path and apply the brakes.
BMW Chief Executive Harald Krüger addressed the Tesla crash during a news conference in Munich, Germany, on Friday, saying his company is not yet ready to roll out partially or fully autonomous vehicles.
“That’s why we announced we would take the step to autonomous driving in 2021,” he said. “We believe by today, the technologies are not ready for serious production.”
The National Highway Traffic Safety Administration, which is investigating the Tesla crash, ranks self-driving cars based on the level of control the driver cedes to the vehicle, with 1 being the lowest and 5 the highest.
Tesla’s autopilot feature is classified as level 2, as it is capable of staying in the center of a lane, changing lanes and adjusting speed according to traffic.
BMW is focusing on levels 3, 4 and 5. At level 3, the car can drive itself without human intervention under certain traffic or environmental conditions. At level 4, the driver will input destination and navigation instructions, but is not expected to drive at any point during the trip. Level 5 autonomy does not involve a driver at all.
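The level taxonomy described above can be made concrete with a small sketch. The enumeration names and the helper function below are illustrative only, not an official NHTSA API; they simply encode the descriptions given in the article:

```python
from enum import IntEnum

class AutonomyLevel(IntEnum):
    """Autonomy levels as described in the article (1 = lowest, 5 = highest)."""
    DRIVER_ASSIST = 1  # a single automated function assists the driver
    PARTIAL       = 2  # lane centering, lane changes, speed adjustment
    CONDITIONAL   = 3  # self-drives under certain traffic/environmental conditions
    HIGH          = 4  # driver inputs the destination but never drives
    FULL          = 5  # no driver involved at all

def requires_human_fallback(level: AutonomyLevel) -> bool:
    # Through level 3, a human must remain ready to take over.
    return level <= AutonomyLevel.CONDITIONAL

assert requires_human_fallback(AutonomyLevel.PARTIAL)   # e.g. Tesla's autopilot
assert not requires_human_fallback(AutonomyLevel.HIGH)  # e.g. BMW's 2021 target
```

Framed this way, the dispute in the article is about the middle of the scale: level 2 systems still need an attentive human, which is exactly the expectation critics say drivers fail to meet.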
BMW and its partners say their level 5 fully autonomous vehicles could be used by ride-hailing companies such as Uber and Lyft.
Intel Chief Executive Brian Krzanich said Friday he’s “fairly confident we can do this in the five-year time frame.” He added his company intends to dedicate several hundred people and several hundred million dollars to the self-driving project.
Amnon Shashua, Mobileye’s co-founder, chairman and chief technology officer, said his computer vision company will devote about 100 of its 700 employees to the undertaking.
Though Mobileye counts Tesla among its clients, Shashua suggested companies must do more to inform customers of potential dangers.
“I think it’s very important given this accident that we hear about in the news that companies be very transparent about limitations of the system,” Shashua said. “It’s not enough to tell the driver you need to be alert. Tell the driver why you need to be alert.” helen.zhao@latimes.com
Is Amazon Prepared for Elon Musk's Driverless Vehicle Armada?
Few people take the role of tech visionary as naturally as Elon Musk. His recently published master plan for Tesla may once again usher in a new era of tech history. Its forward thrust may also propel him into direct competition with previously unrelated corporations, including Amazon.
Elon's new "Master Plan Part Deux" leans heavily on artificial intelligence and commits Tesla to true self-driving vehicles that operate entirely on their own. As described by Fortune, it is generally acknowledged by now that this is the way the automobile industry is heading. Interestingly, this trend is also a catalyst for an entirely new product category of self-driving vehicles that may come to dominate our intra-city logistics and topple Amazon's strategic bet on drones. This article explores the idea of unmanned ground vehicles (UGVs), discusses their intra-city competition with drones, and touches on implications for industries like telecom, logistics, eCommerce and energy.

The motivation behind Elon's ventures into driverless vehicles is simple to understand. AI-driven cars weren't a part of Elon's first master plan ten years ago. During his years in the automobile industry, however, the industry has undergone a fundamental strategic shift, driven by advances in artificial intelligence and computer vision, and in deep learning in particular. As a technology opportunist, Elon recognizes the advantages of these innovations and adopts them to promote his broader objectives for Tesla and for our society in general.

An underappreciated trend that follows from driverless technology is the growth of unmanned ground vehicles (UGVs). Today, vehicles are normally designed for at least one human driver. With self-driving technology in place, a vehicle can be totally unmanned. This means that product design no longer needs to be adapted to human physiology, dramatically reducing the production cost per unit. In addition, the operating cost is much lower without a human driver.
"Before we know it, unmanned vehicles may be a part of our daily lives, cruising around in our cities, delivering us dinners and consumer goods."
In fact, start-ups like Dispatch and Starship Technologies are already prototyping unmanned vehicles that can navigate on their own through cities. The start-ups are fueled by advances in machine learning and computer vision, which in recent years have accelerated tremendously through advances in artificial neural networks (deep learning). Large investments have been spearheaded by tech giants like Google, and increasingly also by major players in the automobile industry, as outlined in this Wall Street Journal article.
Driving vehicles versus drones
Perhaps surprisingly, UGVs may beat drones (unmanned aerial vehicles) when it comes to intra-city logistics. Amazon is famously betting on drones to take over a share of the package-delivery market with its system called Amazon Prime Air. While drone technology is flourishing, legislation remains a serious bottleneck. In addition, many people dislike the idea of drones flying above their heads and properties.
There are several factors that favor UGVs over drones. Unlike their flying counterparts, driving vehicles benefit from our 13 decades of experience with traditional cars, including legislation (traffic laws, etc.) and infrastructure (roads, traffic lights, etc.). People are also much more accustomed to ground-based vehicles. Economically, the energy consumption for transporting goods along the ground is likely lower than for carrying them through the air, at least in urban environments.
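The energy argument can be made concrete with a rough back-of-envelope calculation. Every figure below is an illustrative assumption for the sake of the sketch, not data from the article or from any manufacturer:

```python
# Rough, illustrative back-of-envelope comparison. Every figure below is an
# assumption chosen for this sketch, not measured data.

TRIP_KM = 3.0  # a typical intra-city delivery run

# Small ground robot: rolling resistance only, no energy spent staying aloft.
UGV_KWH_PER_KM = 0.05    # assumed
# Delivery drone: dominated by generating lift for airframe, battery, payload.
DRONE_KWH_PER_KM = 0.10  # assumed

ugv_kwh = UGV_KWH_PER_KM * TRIP_KM
drone_kwh = DRONE_KWH_PER_KM * TRIP_KM

print(f"UGV:   {ugv_kwh:.2f} kWh")
print(f"Drone: {drone_kwh:.2f} kWh")
print(f"Drone uses {drone_kwh / ugv_kwh:.1f}x the energy on this trip")
```

Under these assumed per-kilometer figures the drone spends twice the energy; the real ratio depends heavily on payload, route and vehicle design, but the structural point stands: rolling a package along the ground avoids the ongoing cost of lift.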
These factors are relevant for Amazon. Before we know it, unmanned vehicles may be a part of our daily lives, cruising around our cities, delivering us dinners and consumer goods. The time horizon very likely depends on technology and legislation, as indicated by Elon Musk himself. Whether it will happen before drones are legalized remains to be seen. Either way, Amazon might find itself in direct competition with Tesla, Google, GM, FedEx, or whichever company successfully picks up UGVs.
Selected implications
Driving vehicles are relevant for many industries. It is a natural hypothesis that they will communicate with other vehicles and devices, making them relevant to the IoT paradigm and a growth opportunity for telecom operators. For logistics providers, UGVs may challenge existing distribution models, enabling greater efficiency and flexibility as UGVs of different sizes transport goods to the final recipients. UGVs will in turn be a driver for increased eCommerce and online grocery shopping as both prices and delivery times for customized transportation decrease. Electric power is a natural alternative for smaller vehicles, and UGVs may be an opportunity for sustainable energy and solar companies as their numbers increase. Last but not least, UGVs will be a new product category for automobile manufacturers.
Personally, I am following machine learning and computer vision very closely. These technologies are amazingly flexible and have multi-purpose properties that generate new opportunities across a large number of verticals. If you find these technologies interesting as well, I encourage you to connect with me here on LinkedIn.
Tesla Spontaneously Catches Fire, Burns Down During Test Drive In France
After Tesla's latest problem involving a Model S crash in Beijing while in autopilot mode (which has since prompted the carmaker to remove "autopilot" from its Chinese website), Elon Musk may have to return to a more familiar problem plaguing his vehicles: spontaneous combustion.
According to Electrek:

As part of its 'Electric Road Trip' tour for the summer, Tesla stopped in Biarritz, France to promote the Model S and Model X over the weekend. During a test drive in a Model S 90D, the vehicle suddenly sent a visual alert on the dashboard stating that there was a problem with "charging". The Tesla employee giving the test drive made the driver park the car on the side of the road and all three (the driver, the Tesla employee and another passenger) exited the vehicle.

The Tesla Model S caught on fire only a moment later (pictured above), according to witnesses. Firefighters arrived quickly on the scene to control the fire, but the vehicle was completely destroyed. The result was reportedly similar to the remains of the Model S that caught fire while Supercharging in Norway earlier this year.
The website adds that it is talking to members of the Tesla Motors Club in France and reaching out to Tesla.
The traditional Tesla defense applies here: while electric-vehicle fires are widely reported, there is no evidence that they are any more frequent than gas-powered car fires. What is particularly interesting, though it could change since the story is still developing, is that previous instances of Tesla vehicles catching fire happened after severe impacts, especially after debris on the road punctured the battery pack at high speed. Those incidents stopped after Tesla added a titanium shield to the bottom of the battery pack, but so far there has been no report of an impact in the case of the fire in France.
The cause of the fire is still unknown. A Tesla spokesperson sent Electrek the following statement:
"We are working with the authorities to establish the facts of the incident and offer our full cooperation. The passengers are all unharmed. They were able to safely exit the vehicle before the incident occurred."
The developing story has yet to hit the mainstream news.