DRIVERLESS CARS 2015

A global archive of independent reviews of everything happening from the beginning of the millennium




Driverless Cars 2013

Driverless Cars 2014 (January-June)

Driverless Cars 2014 (July-December)

Cambridge Autonomous Mobility



Augmented reality glasses from MINI that provide a form of head-up display. Will they be useful in autonomous vehicles to help 'passengers' see what is rendered by microchips from sensor fusion? Photos: BMW




DRIVERLESS CARS 2015 - PART 1


Reviewed by ANDRE BEAUMONT


At Worldreviews we see driverless cars as an eventual driver of social change, much as the aeroplane has been. The latter half of 2014 brought doubts as to how far driverless cars could deliver their early theoretical promise. How far automobile manufacturers and new entrants had progressed towards their goals appeared to have been overestimated, especially in onboard computing capability for scene recognition and its correct interpretation, and in passenger-vehicle interfaces.


The possibility was arising that one would not be able to dispense with the driver if the cars themselves could not make the full range of autonomous decisions necessary to safely cope with all the facets of driving likely to be encountered during a vehicle's life.

The saying goes that with new technologies we overestimate what is possible in the next four years but underestimate what is possible in the next ten. Two and a half years into serious driverless car development, doubts were creeping in.

The advent of 2015 is, however, bringing on the solutions.


7 January 2015

Google and Tesla are resourceful and pioneering new entrants in the driverless car space. Deep learning has come on apace in the past few months and Google is a player in the field. Tesla may soon become the world's largest manufacturer of the type of lithium battery used in the automotive sector, and future driverless cars are almost certainly going to have electric drivetrains.


Mercedes-Benz, though, has all the hallmarks of an incumbent which is determined to be a leader in the new field. Technologically it is progressing across a broad spectrum and with great attention to detail. For the application of driverless technologies it is unashamedly concentrating for now on the upper, luxury segment. This is probably wise given the high cost of early application of these technologies per vehicle put on the road. Even if the technology can later be applied at much lower cost to a wider range of vehicles it may well by then have sewn up much of the luxury end.

It is no surprise that the vehicle it unveiled at CES 2015 is called the F 015 Luxury in Motion. Of the two videos of it here, the first shows it arriving at the show and the second is one of the official videos showing some of its features.

A number of things are striking about this driverless car.

It is massive, like a super-sized S-Class, the last model used by Mercedes to demonstrate autonomous technology.


F 015 Luxury in Motion driverless car. Source: Daimler

Inside, it is like a living room [or like a pumpkin going to the ball - the fantasy quality comes across in this video three months later]. The four seats face inwards, so it is not intended to have a driver.

Large screens are found at the front, rear and on each door panel. Those on the door are touch screens, probably curved OLED panels. The one at the front can be commanded by gesture.

The residual steering wheel reminds one of the control column in a Boeing airliner. There are no buttons in the car (a possible dig at a rival car that had a single button). What this adds up to is that the passenger-vehicle interface is being addressed with some panache and that if you were to drive it, rather than being driven, you would do so using the single steering wheel, pedals, gestures and the touch screens. Or at least that is what the show car leads you to believe.

As for motive power it is said to have an electric motor with a plug-in range of 200 kilometres. It is also said to be able to have a fuel cell added within the envelope that would take the range up to 1000 kilometres. The fuel cell bit should be taken with a pinch of salt but it is clear that the internal combustion engine, with all the potential problems it has in a driverless car, is being ditched in favour of electric motive power. The silhouette of the vehicle alone tells one as much. Crash resistance is going to be pretty good, too, in a car of this shape and size.

It has two-way communication with the outside world, suggesting it will use GPS and externally sourced data rather than relying on everything being calculated onboard as protection from hacking.

Given the earlier demonstration of some autonomous technologies by the S-Class, it really does inspire confidence that a car manufacturer will integrate the salient elements required for a convincing driverless car given sufficient development time.


___________________________________________________________________________

30 January 2015


It has always been the case that driverless cars would depend on the right chips and dedicated computers for rapid computation. Stuffing a few laptops in the boot and connecting them up was never going to do, though the first prototype driverless cars ran off just these.


NVIDIA is one system-on-a-chip (SoC) manufacturer working on automotive applications. Of the large car manufacturing groups, it already has BMW, Fiat, Honda, Peugeot-Citroen and VW-Audi as advertised partners, presumably meaning that the leading driverless car players Google, Mercedes-Benz and Nissan are for now aligned with another chip manufacturer.

The announcement at the beginning of January of NVIDIA's Tegra X1 SoC and two NVIDIA computers aimed at the automotive market is a highly interesting development.

The tools to democratise the creation of driverless cars are on the way.

Sensors like lidar, where used, also need to reach the right price points for the creation of series vehicles, but it is not improbable that, two more generations of chips and dedicated computers down the line, even small vehicle manufacturers will be able to buy most of the computational power (and possibly the software) necessary for viable driverless vehicles off the shelf.

So, to speculate looking forward, a manufacturer of motorbikes and scooters would have the tools to scale up to become a manufacturer of driverless, probably low-speed, tricycles, taking on incumbent car manufacturers at a lower price point.

NVIDIA is known for its graphics processing units (GPUs), whether integrated on a chip or used as separate cards.

The Tegra X1 integrates NVIDIA's highest-performing Maxwell-architecture GPU with four ARM Cortex-A53 and four Cortex-A57 cores on an SoC designed for the top end of the mobile market - most likely to be used in the next generation of gaming tablets.

What is a surprise, though logical given this SoC's graphics processing and intensive computing capability, is that small chips like these rather than the larger chips may prove sufficiently capable for all autonomous driving applications.

The two computers announced by NVIDIA at the same time as the SoC, the DRIVE CX Cockpit Platform and the DRIVE PX Auto-Pilot Platform, use one and two Tegra X1 chips respectively.


Drive CX computer for automotive use. Source: NVIDIA

Both will be available to automotive manufacturers in Q2 2015 and for production applications in 2016.

(So, incidentally, our estimate that the first viable driverless cars for use on route-protected public roads will emerge in 2017 may prove correct. These SoCs and computers are likely to be used primarily for the development and production of driver assistance aids that are a halfway house to full autonomous driving, but some manufacturers will undoubtedly assign them for use in driverless cars.)

At this stage it is necessary to rely on information released by NVIDIA so a number of quotes (as italicized text) from its documentation will be found below.

The company says that carmakers are investing heavily in new visual computing system hardware and software development, believes that the next generation of cars may use up to 12 cameras and states that 'multi camera-based driverless cars need tremendous compute power to analyse multiple live video streams in real-time'.

It is interesting to draw a tenuous conclusion from this that at least one manufacturer is already envisioning going on from cars with driver assistance to driverless cars using multiple cameras but no lidar (but presumably with radar).

The DRIVE CX computer will be used primarily for display panels, surround view with advanced rendering, communications, audio and driver monitoring. The company believes cars may progress by 2020 to having multiple HD display panels with total display resolution exceeding 20 megapixels.

The DRIVE PX computer has 2 Tegra X1 chips and these can work together or one can act as a redundant processor for critical applications.

This Auto-Pilot platform can handle the 12 cameras at 2 megapixels each with a frame rate of 60fps. This would facilitate mirrorless surround vision with advanced rendering, self-parking that builds a 3D map of nearby objects in real time, rear collision warning, cross-traffic vision, deep learning training from these visual inputs and over-the-air update capability.
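For a rough sense of the data rates this implies, here is a back-of-envelope calculation. The camera count, resolution and frame rate are from the text above; the 12-bit sensor depth is an assumption.

```python
# Back-of-envelope estimate of the raw video data rate implied by twelve
# 2-megapixel cameras at 60 frames per second. Camera count, resolution and
# frame rate are from the text; the 12-bit sensor depth is an assumption.

cameras = 12
pixels_per_frame = 2_000_000       # 2 megapixels
frames_per_second = 60
bits_per_pixel = 12                # assumed raw sensor bit depth

pixels_per_second = cameras * pixels_per_frame * frames_per_second
bytes_per_second = pixels_per_second * bits_per_pixel / 8

print(f"Pixel throughput: {pixels_per_second / 1e9:.2f} gigapixels/s")
print(f"Raw data rate:    {bytes_per_second / 1e9:.1f} GB/s uncompressed")
# ~1.44 gigapixels/s and roughly 2.2 GB/s before compression, which is why a
# dedicated parallel processor rather than a general-purpose CPU is needed to
# analyse the streams in real time.
```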

Running on this platform are NVIDIA's advanced driver assistance systems, ADAS, which would permit traffic sign recognition, blind spot detection, lane departure warning, collision avoidance and night vision, all necessary building blocks of autonomous driving. ADAS can also add augmented reality, computational photography, human-machine interfaces and robotics.

Pedestrian detection, like collision avoidance, needs to take place in real time and requires compute-intensive video and image processing, which the platform is well suited to.

DRIVE PX has a self-parking module, which NVIDIA calls Auto-Valet, described as follows:

NVIDIA has developed a sophisticated Auto-Valet self-parking software module that runs on DRIVE PX and enables a car to autonomously drive through a parking lot, identify an empty parking spot, and safely park itself without any human intervention.

The Auto-Valet module also runs a path-planning algorithm to control the steering, brakes, and accelerator of the car while it is driving around the garage looking for a valid spot and while parking the car in the spot. After a valid spot has been identified, a parking algorithm detects obstacles and other cars around the parking spot and directs the car to either pull directly into the spot or execute the appropriate 3-point turn to park the car in the spot.

So this is a key module that a would-be driverless car producer gets off the shelf. This kind of car control does not depend on GPS or cellular triangulation, so it can be used in covered and underground car parks, and it is essentially true autonomous driving, if at low speed. It is only a matter of time before new generations of chips let producers have even more self-driving functions off the shelf.
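The decision flow NVIDIA describes can be sketched roughly as follows. This is purely illustrative, not NVIDIA's implementation, and every object and helper named here is a hypothetical placeholder.

```python
# Illustrative sketch of an Auto-Valet-style parking loop as described by
# NVIDIA: cruise the car park, find an empty spot, plan a path into it.
# All objects and helper functions here are hypothetical placeholders.

def auto_valet(vehicle, perception, planner):
    """Drive through a car park and park in the first valid empty spot."""
    while True:
        scene = perception.build_local_3d_map(vehicle.camera_frames())
        spot = perception.find_empty_spot(scene)

        if spot is None:
            # No spot visible yet: keep driving slowly along the aisle.
            path = planner.follow_aisle(scene)
        else:
            # Spot found: check surrounding obstacles and choose either a
            # direct pull-in or a three-point turn.
            obstacles = perception.obstacles_around(spot, scene)
            if planner.can_pull_in_directly(spot, obstacles):
                path = planner.plan_pull_in(spot, obstacles)
            else:
                path = planner.plan_three_point_turn(spot, obstacles)

        vehicle.execute(path)  # steering, brake and accelerator commands

        if spot is not None and vehicle.is_parked_in(spot):
            break
```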

An even more intriguing module, part of ADAS, is a deep learning module:

Current generation ADAS have several limitations and do not deliver satisfactory assistance to drivers.

First, due to limited compute power, these systems use videos from low resolution, low frame rate cameras to process and provide driver assistance. Lower resolution images often result in inaccurate identification and classification of objects in the scene, and slow responses to hazards on the road. Second, these systems are able to identify only a basic set of objects such as pedestrians, cars, and traffic signals that are clearly visible and appear as familiar structures or outlines to the ADAS.

They also often fail in cases where objects are partially occluded from view. For example, they may fail to identify and classify pedestrians that are only partially visible to the camera, or pedestrians who are pushing a bicycle along with them. Third, many of these assistance features frequently fail under adverse environmental or lighting conditions such as rain, night time, and bright reflections in the screen.

But in order to solve the problem of occluded objects, and to contextually classify objects, the ADAS needs to have the ability to recognize several millions of shapes and objects for correct classification in real-time, and would require Teraflops of compute power that just cannot be implemented locally on a small local system running traditional computer vision algorithms.

To address the complex problems detailed above, NVIDIA is bringing Deep Learning capabilities to the automotive industry through the DRIVE PX platform. Deep Learning in a nutshell is a technique that models the neural learning process of the human brain that continuously learns, and continuously gets smarter over time to deliver more accurate and faster results.

Neural networks and deep learning ideas have existed for decades, and only recently a group of scientists at Stanford University discovered that the training and learning process for a neural network model can be significantly accelerated if the model is written to be parallelized and run on GPUs that have thousands of parallel processing cores.

The three major breakthroughs that are currently revolutionizing the use of neural network-based computing in various industries are the use of GPUs as parallel processing supercomputers to train complex neural network models, the availability of Big Data methods, and the rapid advances in neural network algorithms.

The model has to correctly identify distracted pedestrians, occluded pedestrians and vehicles, traffic cones, construction vehicles, closed lanes, traffic signals, and other features in the scene. Training a neural network-based model to accurately identify these objects in a complex scene requires hundreds of TeraFLOPS and certainly cannot be done locally on the DRIVE PX system in an automobile. But once the model is sufficiently trained to identify and react to these complex road conditions, the trained, complex multi-layer neural network model can easily run in real-time on the highly parallel 256-core Tegra X1 processors in the DRIVE PX platform. The development and training of the neural network model required to deliver object identification and classification is done on high performance NVIDIA GPU-based supercomputers.

The supercomputer-based training takes place in the cloud, the Tegra X1-based identification and classification on the road, and the accuracy and scope of the neural model can be continually enhanced even after it is deployed on the road.
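That division of labour - heavy training offline, lightweight inference in the vehicle - can be sketched in miniature. This is an illustrative toy with a linear classifier standing in for a deep network; it is not the DRIVE PX software stack, just the shape of the workflow.

```python
# Toy illustration of the cloud-training / in-vehicle-inference split.
# A linear softmax classifier stands in for a deep network.
import numpy as np

def train_in_cloud(images, labels, n_classes, epochs=200, lr=0.1):
    """Heavy, offline step (in reality a deep network on GPU supercomputers)."""
    n, d = images.shape
    w = np.zeros((d, n_classes))
    onehot = np.eye(n_classes)[labels]
    for _ in range(epochs):
        logits = images @ w
        p = np.exp(logits - logits.max(axis=1, keepdims=True))
        p /= p.sum(axis=1, keepdims=True)
        w -= lr * images.T @ (p - onehot) / n   # gradient descent step
    return w                                     # the trained model shipped to the car

def classify_frame_on_vehicle(w, frame):
    """Cheap, real-time step that runs on the embedded processor."""
    return int(np.argmax(frame @ w))

# Toy usage: 100 random 64-dimensional 'frames' in 3 object classes.
rng = np.random.default_rng(0)
X, y = rng.normal(size=(100, 64)), rng.integers(0, 3, size=100)
model = train_in_cloud(X, y, n_classes=3)
print(classify_frame_on_vehicle(model, X[0]))
```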

All of this constitutes a considerable investment in automotive applications by a chip maker, and a set of building blocks that could save driverless car developers months or years of development time.


___________________________________________________________________________

5 February 2015

The importance of photonics to future driverless cars should not be underestimated.

Light may eventually be used in microchips in place of electrons passing along conductive materials to increase yet further the density of transistors.

For automotive purposes Mercedes-Benz is already a leader in LED lighting and BMW in laser lighting.

Cameras in driverless cars need to see in the dark, so enhanced forward vision is necessary. What better way to provide it than through intelligent headlamps?

The Mercedes Multibeam technology integrates multiple chip controllers for the LED headlights with cameras behind the windscreen. As objects are picked out by the cameras, the light from the LEDs is altered in intensity, direction and duration. Oncoming vehicles and hazards are identified and tracked, and the light is modulated to illuminate them more clearly or so as not to dazzle them, depending on their nature. Headlamp beams predictively pivot into and out of bends and through roundabouts using GPS and other data.

This would potentially assist autonomous navigation because cameras can sight features at greater distance and earlier.
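The control principle - cameras classify objects, then individual LED segments are dimmed or held at full intensity - can be illustrated with a simple loop. The segment count, object classes and dimming levels below are invented for the example; this is not Mercedes' implementation.

```python
# Illustrative control loop for camera-driven matrix LED headlights in the
# spirit of Multibeam. The segment count, object classes and dimming levels
# are invented for the example.

LED_SEGMENTS = 24                       # assumed number of dimmable segments

def update_headlights(detections, set_segment_intensity):
    """detections: list of (bearing_deg, kind) from the windscreen cameras.
    set_segment_intensity: callback taking (segment_index, intensity 0..1)."""
    intensities = [1.0] * LED_SEGMENTS  # full beam by default

    for bearing_deg, kind in detections:
        # Map a bearing of -30..+30 degrees onto a segment index.
        seg = int((bearing_deg + 30) / 60 * (LED_SEGMENTS - 1))
        seg = max(0, min(LED_SEGMENTS - 1, seg))
        if kind == "oncoming_vehicle":
            intensities[seg] = 0.1      # dim so as not to dazzle
        elif kind == "hazard":
            intensities[seg] = 1.0      # keep fully illuminated

    for i, level in enumerate(intensities):
        set_segment_intensity(i, level)

# Example: an oncoming car slightly to the left, a hazard to the right.
update_headlights([(-8.0, "oncoming_vehicle"), (12.0, "hazard")],
                  lambda i, level: None)
```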

The great potential of LEDs, though, is communication between driverless cars and with roadside units, as the light from LEDs can also be modulated, at frequencies the eye does not register, to carry data whilst providing illumination.

A trial in Asia has shown that WiFi beacons do not yet connect sufficiently fast for cars to hand themselves from one to the next seamlessly on the road, so LiFi using LEDs is an obvious alternative.
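The principle can be shown in a few lines. Manchester coding keeps the LED at a 50% duty cycle whatever the data, so the lamp appears steadily lit while carrying bits. This is a minimal sketch of the idea, not any real vehicle-to-vehicle protocol.

```python
# Minimal illustration of the LiFi principle: data carried by switching an LED
# far faster than the eye registers. Manchester coding keeps a 50% duty cycle
# whatever the data, so the lamp appears steadily lit.

def manchester_encode(data: bytes):
    """Each bit becomes two LED states: 1 -> (on, off), 0 -> (off, on)."""
    states = []
    for byte in data:
        for i in range(7, -1, -1):
            bit = (byte >> i) & 1
            states += [True, False] if bit else [False, True]
    return states

def manchester_decode(states):
    bits = [1 if a and not b else 0 for a, b in zip(states[::2], states[1::2])]
    return bytes(sum(bit << (7 - i) for i, bit in enumerate(bits[k:k + 8]))
                 for k in range(0, len(bits), 8))

message = b"V2V: braking ahead"
assert manchester_decode(manchester_encode(message)) == message
```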

BMW's laser light headlamps can reach about 650 yards on high beam, twice the distance of LEDs. This also allows onboard cameras to see that far in the dark where there is a clear view. Narrow passage detection shows that the laser light system can also be used for range finding and measurement and so could potentially displace lidar for these purposes in autonomous vehicles.


Narrow passage detection. Source: BMW

The CES 2015 BMW M4 show car's laser diodes can also project information onto the road, there is infrared recognition of people and animals, and GPS and cameras anticipate curves and roundabouts.



___________________________________________________________________________

11 February 2015

Today Britain announces its participation in driverless technologies across a broad front.


In Bristol a BAe-developed driverless military vehicle called Wildcat will be tested. This was previously developed by the Oxford Robotics Group (a participant in the Lutz project in Milton Keynes). The Wildcat uses a more expensive 3D lidar, like the one on Google cars, which is absent on pods and shuttles.




Also in Bristol the Venturer consortium is investigating public reaction to driverless vehicles and the legal aspects. The government is promising changes to the Highway Code to facilitate the operation of the vehicles. It has been confirmed that there is no legal impediment to operating them on public roads.


In Greenwich an improved version of the Navia shuttle, now named Meridian, is being given a demonstration run today.

The GATEway trials in Greenwich will test low-speed, driverless navigation and automated parking. One of the principal challenges to be faced is manoeuvring a broad vehicle like this on pedestrian concourses whenever there are semi-permanent obstructions like fallen bicycles on the route.

This has previously presented problems in contexts like pedestrian shopping streets where there has been insufficient space to get by. Navigation on roads must eventually be the objective for shuttles and taxi-like services.




Interestingly, the Transport Research Laboratory will install a driverless simulator that will explore how occupants react to driverless scenarios.

In Milton Keynes, the home of the Transport Systems Catapult, the LUTZ Pathfinder project has been expanded to become the UK Autodrive consortium. With participation from Jaguar Land Rover and Ford amongst others, it will test new, two-seater, four-wheel-steering pods with up to 22 sensors mounted onboard, although different combinations of sensors are likely to be trialled.

The four-wheel steering will optimise the pods for Milton Keynes' broad pavements and wide off-highway parking areas. As their speed will be limited to 6mph and they will have deformable exterior panels, the pods' routes will need less physical protection and modification.



The pods will also be trialled in Coventry, one of the homes of the U.K. automotive industry. Britain is a leader in low speed autonomous pod technology.

There is now, to some extent, a fork in the development of autonomous vehicles, with low-speed vehicles - which the public is intended to use for short trips - being one prong. Cooperative Intelligent Transport Systems technology will also be trialled near Coventry.


___________________________________________________________________________

18 March 2015

The following is attributed to Elon Musk at NVIDIA's GPU technology conference:

It gets tricky around the 30-40mph open driving environment. At 5-10mph it’s relatively easy because you can stop within the range of ultra-sonics. And then from 10-50mph in a complex environment, that’s where you get a lot of unexpected things happening. Over 50mph in freeway environment, it gets easier because possibilities get narrower. So, it’s the midrange that’s challenging. But we know exactly what to do and we’ll get there in a few years.

We don’t have to worry about autonomous cars. Doing self-driving is easier than people think. There used to be elevator operators, but now we’ve developed circuitry so they go where you want to go.


In the distant future, they may outlaw driven cars because they’re too dangerous. If you can count on not having an accident, you can get rid of a lot things, though we’re still some time away from that. Capacity of cars and trucks is 100 million a year and there are 2 billion on the roads now, so it could take 20 years for the whole base to be transformed. Similarly, it would take 20 years to replace the world’s fleet of cars with electric.

This confirms a few of the things said here and goes beyond them. The up-to-10mph segment may well be the preserve of Britain's pods, and Nissan has demonstrated over-50mph motorway driving, but who is in the lead in 10-50mph autonomous driving is an open question. If Mr Musk would like to trial or show an autonomous car in Britain he should get in touch, directly or when he visits CSER.
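The point about stopping within the range of ultrasonics at low speed is easy to check with rough numbers. The braking deceleration, system latency and the roughly 5-metre reach of ultrasonic parking sensors are assumptions; the speeds are those in the quote.

```python
# Rough check of the 'stop within the range of ultra-sonics' point. Assumed:
# ultrasonic parking sensors reach roughly 5 m, system latency ~0.2 s, braking
# deceleration ~0.7 g.
g = 9.81
decel = 0.7 * g        # m/s^2, assumed firm braking
latency = 0.2          # s, assumed sensing plus actuation delay

for mph in (5, 10, 30, 50):
    v = mph * 0.44704                          # mph -> m/s
    stop = v * latency + v ** 2 / (2 * decel)  # reaction distance + braking distance
    print(f"{mph:>2} mph: stopping distance ~{stop:.1f} m")
# ~0.8 m at 5 mph and ~2.4 m at 10 mph, comfortably inside a ~5 m ultrasonic
# envelope; ~16 m at 30 mph and ~41 m at 50 mph, well beyond it.
```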

[Today the Chancellor of the Exchequer announces that Britain will invest £100 million in driverless cars].


Jen-Hsun Huang, CEO of NVIDIA, interestingly added some information on the future price and capabilities of the DRIVE PX computer:

AlexNet on DRIVE PX has 630 million connections. (AlexNet is a deep convolutional neural network widely used as an image recognition benchmark, not a separate machine; it runs on the DRIVE PX itself.) DRIVE PX processes AlexNet at 184 frames a second.


DRIVE PX can fire off its neural capacity at a rate of 116 billion times a second. To augment today’s ADAS systems with deep learning has enormous potential. This can be done with DRIVE PX, which will be available in May 2015 as a developer kit, for $10,000, to qualified buyers.
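A back-of-envelope check shows these figures hang together, assuming roughly two arithmetic operations per connection per frame.

```python
# Back-of-envelope check on the AlexNet-on-DRIVE-PX figures, assuming roughly
# two arithmetic operations (multiply and add) per connection per frame.
connections = 630e6           # from the quote
frames_per_second = 184       # frame rate quoted for AlexNet on DRIVE PX
ops_per_connection = 2        # assumption

ops_per_second = connections * ops_per_connection * frames_per_second
print(f"~{ops_per_second / 1e12:.2f} teraflops sustained")
# ~0.23 teraflops, a plausible fraction of the Tegra X1's advertised
# ~1 teraflop of FP16 throughput.
```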


___________________________________________________________________________

20 April 2015

Since CES 2015 the Mercedes F 015 has been testing with a fuel cell onboard. This restricts the use of the vehicle, at least whilst using the fuel cell, to places where for practical purposes there is zero humidity.

It is a long and heavy rear-wheel-drive car, at 5520mm and 3200kg, so it is probably based on an S-Class chassis and is a serious technology test bed, not a highly constrained show car.

When the car stops for a pedestrian, lasers in the forward grille area can project a zebra crossing onto the road in front of the car. The car can also talk to the pedestrian.


The driver, who can let the vehicle drive autonomously and does not have to face forward, can nominate, from a screen, any of the passengers in the four-seat vehicle to give commands to the car.


Driver does not have to face forward

It can self-park and come and collect you.


Testing in the desert: two pedals and a residual steering wheel


___________________________________________________________________________

23 April 2015

The most intellectually rigorous part of the British output for autonomous vehicles is what has emerged from Oxford Robotics Group/Oxbotica.

This maps space in an increasingly sophisticated way using 2D lidar, colours it with data from a camera, updates it from experience (further runs on the same route, potentially by any vehicle) and can represent it as point clouds.

Some advance 3D data and basic GPS positioning are also necessary to build the maps.

These point clouds, from an architectural viewpoint, fix elevational detail up to a certain height (and ground detail) in a 3D representation of space that does not have significant depth beyond surfaces.

The point clouds can look like strikingly beautiful artistic objects as motion is simulated through them, though obviously this was not the purpose of their creation. A single NVIDIA 650 series GPU renders them.
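The general technique of colouring lidar points from a camera can be sketched generically: project each point into the image with a pinhole camera model and sample the pixel underneath it. The intrinsics and data below are placeholders; this is not Oxbotica's pipeline.

```python
# Generic sketch of colouring lidar points from a camera image: project each
# 3D point into the image with a pinhole model and sample the pixel under it.
import numpy as np

def colour_points(points_cam, image, fx, fy, cx, cy):
    """points_cam: (N, 3) points already in the camera frame (z forward, metres).
    image: (H, W, 3) array. Returns (M, 6) rows of x, y, z, r, g, b for the
    points that land inside the image."""
    h, w, _ = image.shape
    x, y, z = points_cam[:, 0], points_cam[:, 1], points_cam[:, 2]
    u = (fx * x / z + cx).astype(int)          # pinhole projection, column
    v = (fy * y / z + cy).astype(int)          # pinhole projection, row
    visible = (z > 0.1) & (u >= 0) & (u < w) & (v >= 0) & (v < h)
    return np.hstack([points_cam[visible], image[v[visible], u[visible]]])

# Toy usage with a synthetic image and a handful of points.
img = np.random.randint(0, 255, size=(480, 640, 3))
pts = np.array([[0.0, 0.0, 5.0], [1.0, -0.2, 8.0], [-0.5, 0.1, -2.0]])
print(colour_points(pts, img, fx=500, fy=500, cx=320, cy=240))
```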


What this mapping approach is not specialised in is operation of the control surfaces of vehicles, on-road navigation at speed, object recognition and interpretation, and parallel processing.



LUTZ pod with forward sensors to feed data into Oxford Robotics Group software visible

It is very interesting as a means of potentially lower-cost spatial data acquisition that could be parallel-processed in the cloud.

The advent of dedicated automotive computers, more capable GPUs and advances in artificial intelligence could make the data readily useable for driverless cars.

This line of enquiry accepts the world and the built environment 'as is'.


A 3D lidar and multiple sensors approach, like that used by Google, will require some modifications to the road environment to optimise its safety potential.

[11 May 2015 Associated Press has reported that Google cars have had three minor collisions since September 2014].

A model with aspects of central control, like satellite-connected commercial vehicle roadtrains, will require heavy modification to the roadspace, as centralised control is somewhat at odds with all other road vehicles having autonomy.


___________________________________________________________________________

12 May 2015

Last week Mercedes was licensed to operate two semi-autonomous Freightliner Inspiration Trucks in Nevada. These are Department of Transportation Level 3 (Limited Self-Driving Automation) vehicles 'where the driver is expected to be available for occasional control, but with sufficiently comfortable transition time.'



Photos: Daimler

The new truck was unveiled at the Hoover Dam.

It is claimed that the vehicle does not require changes to infrastructure although it does require the presence of good quality lane markings in autonomous mode.


Overtaking and lane changing have to be done by the driver, so not many cues are taken from the infrastructure for proximate action, but the software does integrate GPS data and traffic information to plan the journey up to 8 miles ahead.

Its Highway Pilot technology operates the vehicle controls: ensuring that the engine and transmission work efficiently, regulating speed within speed limits, steering, braking and delivering smooth acceleration and deceleration.

Stereoscopic cameras, sideview cameras, front-facing short and long range radar and lane keeping and collision avoidance sensors provide the input data for autonomous operation.
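The two control tasks named above - holding a set speed within the limit and steering to keep the lane - can be illustrated with a toy proportional controller. The gains and the cruise cap are invented for the example; this is not Daimler's Highway Pilot software.

```python
# Toy illustration of the two control tasks named above: hold a set speed
# within the limit and steer back to the lane centre. Proportional control
# with invented gains.

def highway_pilot_step(speed_kmh, speed_limit_kmh, lateral_offset_m, heading_error_rad):
    """Return (throttle_brake, steering) commands, each clipped to -1..1."""
    target_speed = min(speed_limit_kmh, 85.0)        # assumed cruise cap, km/h
    throttle_brake = max(-1.0, min(1.0, 0.05 * (target_speed - speed_kmh)))

    # Steer towards the lane centre and align with the lane direction.
    steering = max(-1.0, min(1.0, -0.4 * lateral_offset_m - 0.8 * heading_error_rad))
    return throttle_brake, steering

# Example: slightly under the limit and drifting 0.3 m right of centre.
print(highway_pilot_step(speed_kmh=78.0, speed_limit_kmh=80.0,
                         lateral_offset_m=0.3, heading_error_rad=0.01))
```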


Some of the truck's technology, like adaptive cruise control and active power steering, is already used with different software on the S-Class car - and has been in proven use on other Mercedes trucks since 2011.


It does not seek to dispense with the driver. Reducing driver fatigue to enhance alertness is an objective. It would appear that, for anything other than straightforward highway driving, this iteration will leave driving to the driver. Nonetheless, unlike a roadtrain, it can be an independent truck, not having to stick to predefined routes. This means that a commercial return from this type of vehicle may be closer to realisation.

10,000 miles of testing have been done with the vehicle - off public roads, as has been the case with most vehicles trialling autonomous technology.




___________________________________________________________________________

20 June 2015

Many of the technologies outlined above are to be found in the new BMW 7-Series launched this month, but in forms that will be reliable and at the cutting edge for users rather than at the cutting edge for researchers.

So the remote-control parking is impressive, but it does not look for a parking space for you.

Hand gestures and the Touch Command remote control tablet do much of what the rival F 015 Luxury in Motion does with fixed screens, but in a much more immediately usable package.


They also accelerate the potential handover, in top-end luxury cars, of in-car control from driver/chauffeur to passenger/autonomous systems. It is not hard to envisage that driving controls may one day follow, too.

The head-up display is more capable, hinting at NVIDIA technology, but it is not hard to imagine it being rendered in the future on the lenses of augmented reality glasses as trialled by MINI, optionally worn by the driver or a passenger.




2015 7-Series. Source: BMW

A Welcome Light Carpet can be projected by LEDs onto areas around the car.

The Laserlight headlamp system produces adaptive beams that can reach up to 600m, as opposed to 300m with LEDs, but as the CES 2015 BMW M4 show car has already demonstrated with narrow passage detection, it is not hard to imagine the headlamps undertaking some of the simpler aspects of rangefinding performed by expensive lidar in non-BMW experimental autonomous vehicles.

Features such as traffic assist, lane keeping, steering and lane control, active side collision protection - all able to operate at up to 210 km/h - and surround view and crossing traffic warning are all building blocks of autonomous navigation.

****

I have always been of the view that driverless cars should be light but immensely strong, so that if the early ones do have a few crashes their passengers have an excellent chance of emerging unscathed, and their relatively low mass and momentum (compared with other road users like vans, buses, trucks, 4x4s etc) will reduce the damage to others. This is one of the best ways to contain liability issues. A 40-tonne autonomous truck going haywire would be a major setback to autonomous vehicle development (which is why significant infrastructure changes will be required in advance of fully autonomous trucks), but a few accidents involving quarter-tonne vehicles would most probably not be.


The problem is that currently the smaller autonomous vehicles do not look particularly strong, albeit they will be travelling at low speed, and the larger, faster ones look too heavy. F1-style weight paring and strength are called for.

So it is very welcome to see the new 7-Series introduce carbon fibre as a structural material in the model line, helping to pare 120kg of weight. If BMW chooses to make its luxury saloons the most autonomously capable vehicles in its range, then it would incidentally be very sound for them to be immensely strong and ever lighter.



___________________________________________________________________________

Driverless Cars 2015 Part 2