WORLDREVIEWS

A global archive of independent reviews of everything happening from the beginning of the millennium




Cambridge Autonomous Mobility



Driverless Cars 2015 Part 1


Driverless Cars 2015 Part 2


Video briefly explaining sensor fusion and object recognition.




15 December 2016

For two years I've been gently annoyed by a question that is posed, all too often, as if it were a philosophical dilemma or ethical choice: what will a driverless car do if it has the choice of ploughing into a group of schoolchildren or driving into a wall, killing its innocent occupants?

The answer almost certainly is that the onboard computers will never have to make ethical choices of this kind.

If the car recognizes an obstacle that it must avoid - children rather than a balloon, say (and this is why good object recognition is essential, and most projects do not yet have it) - it will apply the brakes, hard if necessary, and probably much earlier than a human could, to slow down. If there is a gap to go through to avoid the children, it will go through it; if not, it will continue to deliver maximum or optimum deceleration, with the result that the children and, maybe, the occupants (they should have airbags) may suffer injuries. What the algorithms will not deliver is a 'car commits suicide' option where the vehicle deliberately drives into a wall or anything else.
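
By way of illustration only, here is a minimal sketch in Python of the kind of decision logic described above. Every name in it - the obstacle structure, the perception flags, the driving commands - is a hypothetical placeholder, not any manufacturer's actual control code; the point is simply that no branch drives into a wall.

from dataclasses import dataclass

@dataclass
class Obstacle:
    kind: str          # e.g. 'children', 'balloon'
    must_avoid: bool   # set upstream by the object-recognition stage

def plan_response(obstacle: Obstacle, clear_gap: bool) -> str:
    """Return a driving command; note there is no 'drive into a wall' branch."""
    if not obstacle.must_avoid:            # a balloon, or a bird: ignore it
        return 'continue'
    if clear_gap:                          # a safe gap exists: steer through it
        return 'steer_through_gap_and_brake'
    return 'maximum_braking'               # otherwise shed speed, nothing else

print(plan_response(Obstacle('children', True), clear_gap=False))
# -> maximum_braking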

On this website I once raised the question of what a driverless car would do if a large bird swooped down apparently intent on hitting it - would it brake, possibly occasioning another car to run into the back of it? What appeared to be the answer from Google came back indirectly: the car will ignore birds.

So a rarer 'ethical' question might be more pertinent than the common one: if a large piece of masonry were falling from a building site 20 metres ahead, perhaps too close to allow much deceleration, would the car drive at it or swerve and hit a wall to avoid it?

It must be said that Google has the resources to answer this one, but I fear a lot of autonomous vehicle projects do not, so I am sceptical of most safety claims for autonomous vehicles in a way I was not three years ago, when I was a pioneer researching how they could be introduced into the U.K.

For many projects the likely answer would be that the object would not be recognised by the computers and, if picked up by the vision systems, would be ignored like the birds.

*****

There is also a fair amount of nonsense talked about who will be legally to blame for accidents, but a dilemma of this nature can already be presented by today's technology.

If you have a car with infra-red sensors that can see an animal crossing the road in the dark before the driver can, and the car applies the brakes hard without the intervention of the driver, with the result that another vehicle runs into the back of the car, who is to blame?



8 September 2017

An AI-focussed, venture-capital-funded driverless car company, Five AI, has launched in Cambridge. It is probably not going to end up owning many driverless cars of its own - it is not a vehicle manufacturer by any means - but it is a good bet to develop AI that can be used by future driverless projects in an urban context, so this company looks like one that could eventually sell for a healthy multiple of its initial funding.




15 September 2017

The Kangaroo Konundrum





Purely algorithmic decision-tree object recognition cannot reliably identify a kangaroo crossing the road. Deep learning artificial intelligence should be able to do so, and cloud-based parallel processing should then be able to verify the fellow is a kangaroo in real time. When you have a non-sentient system that does this, come back to us and we will set you the next puzzle.
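
To make the deep learning half of that concrete, here is a hedged sketch of inference with a pretrained convolutional network in Python, using torchvision. The frame path is invented, and a real system would need a model fine-tuned on labelled kangaroo imagery rather than generic weights; this only shows the shape of the pipeline.

import torch
from PIL import Image
from torchvision import models, transforms

# Standard ImageNet-style preprocessing for a pretrained network.
preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

model = models.resnet50(weights=models.ResNet50_Weights.DEFAULT)
model.eval()

frame = Image.open('road_frame.jpg').convert('RGB')  # hypothetical camera frame
batch = preprocess(frame).unsqueeze(0)

with torch.no_grad():
    probs = torch.softmax(model(batch), dim=1)
print(probs.topk(5))   # top five candidate classes with confidences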



21 February 2018

The Kangaroo Konundrum is now easier to solve.

Last week Google announced it would make its proprietary tensor processing unit (TPU) chips, specifically designed for artificial intelligence tasks, available to third parties via its cloud computing service, focusing on computer vision technology, which can train computers to recognize objects.

All that is needed now to train the computers is lots of imagery of kangaroos bouncing across roads!
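
For a sense of what that looks like in practice, here is a hedged TensorFlow sketch of training a small classifier on a Cloud TPU. The dataset and the kangaroo/not-kangaroo labelling are hypothetical; only the TPU plumbing reflects the public API.

import tensorflow as tf

# Connect to an attached Cloud TPU and build a distribution strategy.
resolver = tf.distribute.cluster_resolver.TPUClusterResolver(tpu='')
tf.config.experimental_connect_to_cluster(resolver)
tf.tpu.experimental.initialize_tpu_system(resolver)
strategy = tf.distribute.TPUStrategy(resolver)

with strategy.scope():  # model variables are placed on the TPU cores
    model = tf.keras.Sequential([
        tf.keras.layers.Conv2D(32, 3, activation='relu', input_shape=(224, 224, 3)),
        tf.keras.layers.GlobalAveragePooling2D(),
        tf.keras.layers.Dense(1, activation='sigmoid'),  # kangaroo vs not
    ])
    model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy'])

# train_ds would be a tf.data.Dataset of labelled road imagery (hypothetical):
# model.fit(train_ds, epochs=10)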

In effect, would-be gatekeepers of object recognition technology are being sidelined.

In the context of future autonomous navigation on roads, that is important. There are so many object-related problems to solve.

If, for instance, I should want future driverless vehicles to be able to recognize scaffolding around buildings that is not on prior maps, I would need an object recognition neural network trained to do so.

A driverless vehicle on the move would need to recognize that a building that was on a map had been occluded, identify the occlusions as scaffolding, and use decision-tree algorithms to make decisions about its robotic functions, like driving.
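
A toy sketch of that chain of reasoning, in Python, with every input and action invented for illustration - the real pipeline would sit downstream of a trained neural network:

def handle_occlusion(mapped_building_visible: bool, detected_labels: set) -> str:
    """Decision-tree step run after the network has labelled the scene."""
    if mapped_building_visible:
        return 'proceed'                    # map and cameras agree
    if 'scaffolding' in detected_labels:    # the occlusion is explained
        return 'slow_and_give_clearance'
    return 'slow_and_replan'                # unexplained change to the map

print(handle_occlusion(False, {'scaffolding', 'plastic_sheeting'}))
# -> slow_and_give_clearance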

A problem is this: do sufficient labelled images - RGB, lidar point cloud, radar and infrared - exist to train a neural network? Once multiple lidar-equipped vehicles are travelling around to acquire them, that point could be reached.

Can any semi-autonomous vehicle at present recognize scaffold poles standing in the roadway with clear air around them?

Technically a driverless vehicle is classified as a robot but the object recognition functions fall squarely within the domain of artificial intelligence. Motorised autonomous vehicles on public roads unable to recognize objects are non-starters.

In reality you have to informally outsource the problem solving to the wider market as there are many more challenges and problems than any single company can manage.




23 February 2018


Making Google's TPUs available to third parties is one of the most significant events in digital history.

There will be an explosion in artificial intelligence work as a result.

(It is akin to Amazon's historic decision to rent out its cloud servers. Renting them out hones the system; in effect, some of the problem solving is outsourced to customers.)




23 February 2018

Although the early work on driverless cars was done by robotics enthusiasts, developments in robotics have taken place at a slower rate than those in artificial intelligence.

Google bought DeepMind and sold on Boston Dynamics.

Now the route to autonomous vehicles is via artificial intelligence and machine learning. The skills needed in this area are in very short supply given competing claims on their use.




12 March 2018

Emphasis on what is important differs depending on the areas in which a corporation's products are strong.

Here are two useful podcasts from Qualcomm from last year on artificial intelligence: [1] [2].





22 March 2018

The latest fantasy we have been spun is that driverless truck deliveries are on the way to Britain. The unions need not worry. I like the song Luxury Liner, but 40 tons of steel - or 44 tonnes of juggernaut, as you get in Britain - gone haywire, or with malware fed into the guidance system because discrete frequencies have not been used for the guidance communications, will be able to go straight through the base of a skyscraper, killing or injuring the pedestrians and office workers in the way and taking out support columns, too. Besides, there is not a guidance system in existence that can navigate a truck through winding city centre roads, and no medium-term prospect of one, let alone of an autonomous truck that can deliver to your local city centre supermarket.

No, if you want laden 40-ton trucks travelling autonomously, stick to closed mining roads in Australia or elsewhere, with no pedestrians or other civilian traffic on them, and do not expect the guidance systems to recognize kangaroos crossing.

As on a racetrack, object recognition will not be the insuperable problem because there will be next to no objects to recognize.

It is time to rein in techno-fantasies fast.

Today is the anniversary of when a hostile vehicle ploughed into pedestrians on Westminster Bridge, killing four people metres away from Parliament.

Ideally, all testing of autonomous and semi-autonomous trucks and vans should be banned in London for the foreseeable future. The guidance technology is absent.

Cars are lighter. The only successful trial in Britain of a semi-autonomous vehicle with some capabilities approaching those of an autonomous vehicle was a year ago, on 28 February 2017, by a Nissan vehicle with an Evening Standard reporter onboard near the ExCeL Centre in the early hours of the morning, where many of the roads are private and there was little other traffic.




12 April 2018

Video (source BMW).

As of today, this is as sophisticated as self-parking gets.






LANE MARKINGS


Reviewed by ANDRE BEAUMONT


6 February 2016

The news that Britain will be experimenting further, on a very limited scale, with removing white lane markings from the middle of roads leaves one in two minds. It may slow down motorised vehicles, so reducing accidents, but it does not help pedestrians.


In many of those places with no road markings, motorcycles, buses and cycles arriving in the road space pedestrians are using or crossing are a nightmare, because pedestrians have few visual cues as to where the vehicles will be or how they are arriving, and it will be worse for those with impaired hearing or vision.

For example, if a pedestrian crossing a road is caught out by a rapidly turning vehicle, he can stand on the white line and the driver will understand what he is about, but it is harder to see, hear and anticipate everything on an unmarked road.

Human beings have their own 'sensors' - eyes and ears - much as many cars now do, and they use them intensively, but they need to be given cues that they can read. Which brings us, tangentially, to future autonomous vehicles.


Source: Daimler

A Mercedes truck that partially uses autonomous guidance (this one, at the entry for 12 May 2015) is heavily reliant on lane markings to be 'autonomous'. In many ways it is not truly autonomous, partly because it will not drive in autonomous mode off the type of highway that has good markings.

Mercedes' research is near the top of the tree for autonomous vehicles, especially in dollars committed.

A lot of other 'driverless car' demos, though, are little more than a combination of lane-keeping technology, cruise control and some less-than-fail-safe processing of sensor data. The vehicles are also in no real sense driverless.

Here at Worldreviews we are advocates of driverless cars but ultimately we are not starry eyed.

Some driverless car projects will come onstream for restricted public use in 2017-2020 but they will be in geographically limited areas that will have been mapped in advance more intensively - by orders of magnitude - than urban roadscapes ever have been before.


Some projects will have good sensor fusion, some good object recognition - but most probably will have neither in the early years.

What they will rely on to operate, though, is lots of lane markings. These are the objects they need most of all to recognize.

A pedestrian having to second-guess a bus on a road with no lane markings is bad enough; a driverless white van on the same road is a much harder prospect. How would you ever guess its intention to stop in a particular place or, worse, mount the kerb? (Driverless delivery vans are probably the least feasible outcome in the near term.)


*****

March 2016


For pods and shuttles - vehicles currently designed to move through shared space at low speed, and which we should not confuse with driverless cars - there is the prospect of navigation that is not reliant on lane markings.

These include the recently announced NVIDIA-powered WEpod of Delft Technical University, which uses some of the GPU manufacturer's deep learning capabilities, and a promising project in Greenwich, UK, which looks to be run by a commercially competent consortium including the Transport Research Laboratory and Heathrow.


The latter project has 3D local mapping provided by Oxbotica and is partly funded by the EU.

The Dutch WEpod is an iteration of the EZ10 vehicle, supplied by EasyMile, and part of the EU-funded CityMobil2 project. Other EZ10 trials are being run or rolled out in a handful of European countries. The vehicle is electric, semi-autonomous and, at present, is low speed and concentrates on 'last mile' routes that are protected from other vehicles. Many of the routes have the potential to terminate at railway stations.


[Where these vehicles can run has proved to be circumscribed.]


*****


April 2016

It is reported that Sheikh Mohammed bin Rashid Al Maktoum would like to see a quarter of all road journeys in Dubai undertaken in driverless vehicles by 2030.

An EZ10 vehicle, which as its name implies carries ten passengers, is already being tested in the emirate. It does not require lane markings, but vehicles with a lot more artificial intelligence, like Google's cars, will. For these, the lane markings issue need not be a problem, initially, in Dubai. Some of the roads have twelve lanes, so dedicating a few to driverless vehicles should be straightforward. Snow obscuring the markings is unlikely to be a problem; sand, perhaps.

For those of us who have been studying driverless vehicles and their constituent technologies from the outset, one of the main problems is finding conceptual realism in those we talk to. In 2016, the conceptual frameworks for them require an understanding of many disciplines all at once, from politics to sensor fusion.


A single discipline, like computer coding, cannot deliver the majority of the answers and neither can a turnkey contractor. Conceptually, the rendering of what is being mapped may be one of the most advanced disciplines.

Consultancy reports from the financial world extrapolate safety projections and financial benefits from minimal practical experience, so these have to be taken with a pinch of salt.

A quarter of road journeys is about right, as is the target date of 2030. There is no way that driverless vehicles can supplant legacy vehicles except progressively, and never absolutely. Are we to have unmanned JCBs and fire engines on public roads?

Vision and realism are required. From decades of observing Sheikh Mohammed's equine operation in Newmarket, I would say he has both.

*****


July 2016

I had a brief conversation in May with the Director General of the EU responsible for autonomous and connected vehicles, and watched a presentation by him which contained nothing inconsistent with the direction of travel the U.K. should take. But where will Britain be in January 2019, possibly the date Brexit takes effect?


Coordinated with Europe on connected vehicles and road security standards and probably behind the curve on autonomous vehicles unless Volvo or another external player can bring a successful trial to Britain.

*****


March 2017

The U.K. autonomous vehicle effort took a wrong turning at the beginning of 2015 that led it down a cul-de-sac.

Capabilities to read lane markings, for instance, were not in evidence.

Fortunately, not all conceptual and technical expertise went down this road and a shift in effort is now taking place which will re-centre efforts around the U.K.'s known and innovative capabilities in vehicle testing.

It is apparent that for autonomous and semi-autonomous vehicles to read and follow lane markings with a high degree of accuracy they will have to rely on machine vision systems. These will be able to see in the dark (or the best will), but their effectiveness will be degraded when lane markings are wet. When the markings are obscured by snow, sand or proximate vehicles, distance-measuring sensors - lidar, radar and ultrasound - will have to fill the gap in locating the vehicles relative to their environment, requiring strong sensor fusion capabilities.
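
As a hedged illustration of the machine vision half, here is a minimal classical lane-marking detector in Python using OpenCV (Canny edges plus a probabilistic Hough transform). The file name and thresholds are invented, and production systems are far more elaborate, but the principle is the same.

import cv2
import numpy as np

frame = cv2.imread('road.jpg')                       # hypothetical camera frame
gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
blurred = cv2.GaussianBlur(gray, (5, 5), 0)
edges = cv2.Canny(blurred, 50, 150)                  # edge map

# Keep only the lower half of the image, where the road surface is.
mask = np.zeros_like(edges)
mask[edges.shape[0] // 2:, :] = 255
roi = cv2.bitwise_and(edges, mask)

# Fit straight line segments to the surviving edge pixels.
lines = cv2.HoughLinesP(roi, rho=1, theta=np.pi / 180, threshold=50,
                        minLineLength=40, maxLineGap=20)

if lines is not None:
    for x1, y1, x2, y2 in lines.reshape(-1, 4):
        cv2.line(frame, (x1, y1), (x2, y2), (0, 255, 0), 2)  # overlay detections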


Until this package is finely honed and road markings upgraded, limitations remain on autonomous commercial vehicle testing and on semi-autonomous commercial vehicle platoon formation testing.


*****

January 2018

Some of the systems found on the experimental Mercedes truck above can now be found in Mercedes cars, like the new E-Class, such as autonomous gearchanging in advance of junctions based on GPS positioning. BMW also has an impressive suite of driver aids on its new 5 Series. The cars, however, stay firmly within the realm of the semi-autonomous at most.

The E-Class, for instance, requires the driver to occasionally touch the wheel when not actively steering.

The goal of fully autonomous navigation in all driving conditions, primarily pursued by Silicon Valley companies, is a long way off being attained, though on protected, defined and heavily mapped sets of routes, cut down versions of autonomy might be demonstrated this year.

It will be a long slog to get all the requisite technologies developed and to mesh.

It is not like developing an app for remotely operating a home central heating system. In fact, it is a harder challenge than Nasa returning to the moon. You are not just travelling in air and a vacuum but in a dynamically unpredictable environment.

Minimum investment for serious players runs into hundreds of millions of dollars just to develop and partially prove concept. Thereafter autonomous technologies will require billions in infrastructure investment per major city if the cars are to be used on the usual roads with traffic lights, complex junctions and roundabouts.


*****

February 2018

In the early years we could be dreamy about driverless cars and believe the hype but now that is impossible.

Take the example in the entry above for 21 February 2018.

If there are not enough labelled images of scaffolding to provide training data for deep learning neural networks to begin to attempt to identify scaffolding, and any alternative using another form of machine learning, like reinforcement learning, is not at a sufficiently advanced stage of development, how can a claim that driverless cars are nearly ready to be rolled out for city use be supported?

If the scaffold poles are in the roadway supporting something above but unenclosed, which type of sensor will spot them?

If they are enclosed in plastic sheeting, RGB imaging might flag up a building enclosure without seeing the poles behind it.

Radar imaging might show the poles but not the enclosure.
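
As a toy illustration of how the two modalities complement each other - and nothing more than that; it is not any company's pipeline - a bare OR-fusion in Python would flag the hazard if either sensor reports one:

def fuse_detections(camera_sees_enclosure: bool, radar_sees_poles: bool) -> bool:
    """Flag a hazard if either modality reports one (a bare OR-fusion)."""
    return camera_sees_enclosure or radar_sees_poles

# Sheeted scaffold: the camera flags the enclosure, the radar the poles behind.
assert fuse_detections(True, False)    # RGB alone is enough here
assert fuse_detections(False, True)    # radar alone is enough here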

Which companies have vehicles that have good sensor fusion of RGB and radar images whilst on the move?

How can claims that driverless cars will be orders of magnitude safer than human-piloted vehicles be supported?

If the identification of scaffold poles is not a priority for the artificial intelligence community, how costly is it to protect the routes used by autonomous vehicles in the meantime?

Must all routes be protected from building enclosures?

Must a route be delineated by a continuous concrete barrier one side of which a scaffold pole may not be placed?

How exactly is an autonomous taxi service in the city meant to be realistic until countless researchers and companies have broken down the problems using cloud based artificial intelligence?


*****


March 2018

Is certifiable trustworthiness a reasonable expectation of current AI?

Probably not.

With an arbitrator's hat on, and a standardizer's too, being a bit of a polymath, I have long argued that legal disputes relating to autonomous vehicles cannot be resolved using traditional legal procedure. A hybrid of part-technical, part-human judgement will have to take its place.


At its simplest, data logging will have to determine whether anyone sitting in the driver's seat touched the controls prior to an accident, but it will go well beyond this.
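
A toy sketch of that simplest check, in Python; the log format and field names are invented for illustration, not drawn from any real black-box standard.

from datetime import datetime, timedelta

log = [  # hypothetical event log from the vehicle's data recorder
    {'t': datetime(2018, 3, 1, 14, 3, 52), 'event': 'steering_input'},
    {'t': datetime(2018, 3, 1, 14, 4, 10), 'event': 'collision'},
]

def driver_touched_controls(events, window_s=30):
    """Did any control input occur in the window before the collision?"""
    crash = next(e['t'] for e in events if e['event'] == 'collision')
    return any(e['event'] == 'steering_input'
               and crash - timedelta(seconds=window_s) <= e['t'] < crash
               for e in events)

print(driver_touched_controls(log))  # -> True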

The presumption, which often applies now, that if someone runs into the back of you they are at fault might have to be reversed. An autonomous vehicle ahead might not have proceeded as any reasonable following human driver would have expected.

Where deep learning is applied to object recognition it may not be possible to generate the transparency of decision making necessary to certify trustworthiness.

Some may game the system, for example by pulling in front of an autonomous vehicle to force it to brake.

Others may inject malware into the data stream so that adversarial examples confuse the AI.

This goes beyond academic disciplines to highly pragmatic ones.

Initially, where AI has made decisions that impinge on fairness there should be an appeal to a human being or a panel of three people, possibly after the event.

So, hypothetically, if AI has decided based on sensor data that someone should not pass their driving test, or should lose their driving licence, a party questioning the fairness might have the chance to resit the test with just a human examiner in the former case, or have a panel of three people review the sensor data and all other evidence in the latter. Currently, judges and magistrates may or may not be qualified to review sensor data.