Autonomous Vehicles Have Reached the Mainstream

What technology differentiates them from a conventional vehicle?

In short, autonomous here means driverless, or at least having little to no need for human supervision. But within this, AVs can be broken down into ascending levels of autonomy. The Society of Automotive Engineers (SAE) first categorized AVs in 2014, and then updated the classification in 2018, producing six tiers of automation.[1] While the scale includes Level 0, where the car is completely manual, the automated tiers span from Level 1, in which the vehicle merely assists a human driver, to Level 5, where the vehicle has complete control and passengers can forgo supervising entirely. Between Levels 2 and 3 lies the threshold between the human’s control and the vehicle’s; thus, Level 3 signals the beginning of vehicle autonomy. Despite some companies’ misleading advertising, like Tesla’s “full self-driving” tech package, no company involved with AV technology has yet reached Level 5.

The table below details the different levels; blue signifies human control, while green signifies vehicle control.

A second chart gives further examples of each level along with the year of initial or projected operation. Note: ADAS stands for Advanced Driver Assistance Systems.

A feature most people will recognize in their modern vehicles is AEB, or Automatic Emergency Braking: a form of brake assist designed to help the driver identify and avoid head-on collisions.[2] According to SAE’s interpretation, these systems lie between Levels 0 and 1, just breaking the cusp of AV technology. They operate through built-in sensors that watch for possible impending obstacles and relay that data to an onboard computer, which decides whether they are indeed obstacles to be avoided; if so, the computer gives the word to brake.
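To make that pipeline concrete, here is a minimal sketch of the kind of decision loop an AEB system might run, assuming a simple time-to-collision rule. The function names, thresholds, and sensor values are all illustrative assumptions, not any manufacturer’s implementation.

```python
# Illustrative AEB-style decision loop (hypothetical names, values, and
# thresholds; real systems are far more sophisticated and safety-certified).

def time_to_collision(distance_m: float, closing_speed_mps: float) -> float:
    """Seconds until impact if neither party changes speed."""
    if closing_speed_mps <= 0:  # the object is not getting closer
        return float("inf")
    return distance_m / closing_speed_mps

def aeb_decision(distance_m: float, own_speed_mps: float,
                 object_speed_mps: float, brake_threshold_s: float = 1.5) -> str:
    """Decide whether to brake, warn the driver, or do nothing."""
    ttc = time_to_collision(distance_m, own_speed_mps - object_speed_mps)
    if ttc < brake_threshold_s:
        return "BRAKE"          # the computer gives the word to brake
    if ttc < 2 * brake_threshold_s:
        return "WARN_DRIVER"
    return "NO_ACTION"

# Example: closing on a stopped obstacle 20 m ahead at 15 m/s (~54 km/h).
print(aeb_decision(distance_m=20.0, own_speed_mps=15.0, object_speed_mps=0.0))
# -> BRAKE (time to collision is about 1.33 s, under the 1.5 s threshold)
```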

Level 5 driverlessness is a tall order. While an AV is certainly more than the sum of its parts, disassembling one reveals the complex systems it must employ to function. Alongside technologies like GPS, automatic braking, steering, and locking, AVs have what Alan Amici, vice president of TE Connectivity, an automotive company at the forefront of sensory technology, identifies as the three most essential AV elements: sensors, connectivity, and software/control algorithms.

Sensors

Sensors are pivotal to the car’s ability to navigate; more than that, they do the data groundwork, feeding information to the other systems to build on. Sensors must observe their surroundings, detect objects on all sides, identify curves in the road, and so on. These sensors come in three basic technologies working simultaneously: 360-degree lidar and radar for object detection[3], cameras for cross-checking object recognition, and ultrasound for park assist. Together they provide the AV with a visual field comparable to the human eye, allowing it to “see” as a human would and orient itself in relation to the objects around it. While sensors may not yet outcompete a human eye, they are constantly vigilant, consistent in their estimations, and attentive to all sides of the vehicle at once.
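As a rough illustration of that cross-checking, the sketch below fuses hypothetical lidar and camera detections: an object is trusted only when both sensors report roughly the same location. The data format and distance tolerance are assumptions made for illustration.

```python
import math

# Hypothetical detections: lidar gives precise (x, y) positions in metres;
# the camera gives positions plus a classification label.
lidar_hits = [(12.0, 0.5), (30.2, -3.1)]
camera_hits = [{"pos": (12.3, 0.4), "label": "pedestrian"},
               {"pos": (55.0, 1.0), "label": "car"}]

def fuse(lidar, camera, tolerance_m=1.0):
    """Keep only objects seen by both sensors (a naive cross-check)."""
    confirmed = []
    for lx, ly in lidar:
        for det in camera:
            cx, cy = det["pos"]
            if math.hypot(lx - cx, ly - cy) <= tolerance_m:
                confirmed.append({"pos": (lx, ly), "label": det["label"]})
    return confirmed

print(fuse(lidar_hits, camera_hits))
# -> [{'pos': (12.0, 0.5), 'label': 'pedestrian'}]
# The second lidar hit has no camera match, so it is not confirmed.
```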

Algorithms

All this sensory data, however, is useless to a system that can’t make confident, human-esque decisions; the AV needs an onboard computer system to compile information and learn from the data accumulated until it has developed a situationally aware, and in some ways superhuman, system[4]. In other words, it needs AI. Essentially like a teenager with a learner’s permit, the system gains skill through experience. In doing so, the algorithm learns to discriminate between ambiguous objects: a plastic bag and a child, a shady patch and a pothole, or highway construction and a cyclist. Among the 50+ companies involved with autonomous vehicles, there are dozens of models available. I’ll discuss one model in more detail later.
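As a toy illustration of learning from experience, the sketch below labels a new object by comparing it to previously labelled examples (a nearest-neighbour approach). The features and labels are invented; a production perception stack would use deep neural networks over camera and lidar data, not two hand-picked numbers.

```python
import math

# Invented 'experience': (height_m, moves_on_its_own) -> label.
experience = [((1.1, 1.0), "child"),
              ((0.3, 0.0), "plastic_bag"),
              ((0.05, 0.0), "pothole"),
              ((1.7, 1.0), "cyclist")]

def classify(features, memory=experience):
    """Label a new object by its closest remembered example (1-nearest-neighbour)."""
    _, label = min((math.dist(features, f), lbl) for f, lbl in memory)
    return label

print(classify((1.0, 1.0)))   # small and moving on its own -> child
print(classify((0.25, 0.0)))  # small and static -> plastic_bag
```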

Not only do these models need to identify what they are looking at, they also need to reconcile the speed and direction of surrounding objects. An algorithm should be able to predict the movements of these objects to inform its own navigational decisions. For instance, if the vehicle wants to switch lanes, it must determine whether any entity currently occupies the space it seeks to fill, or whether an approaching vehicle is about to pass; if so, it must calculate that vehicle’s speed and adjust accordingly. Alternatively, if a deer runs across the road, while perhaps unpredictable, it would be useful to know whether that deer will already be out of the lane by the time the vehicle gets there. By calculating the deer’s speed and direction, or in other words its velocity, the AI may determine that it only needs to slow down instead of stopping, perhaps saving its passengers unnecessary stress.
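Here is a minimal sketch of that velocity reasoning using the deer example. All of the geometry and numbers are invented; the point is simply that estimating lateral velocity from two consecutive sensor frames lets the vehicle choose between slowing and stopping.

```python
# Sketch of predicting whether a crossing deer will have cleared the lane
# (hypothetical geometry: x is distance along the road, y is lateral offset;
# the lane spans y in [-2, 2] metres).

def will_clear_lane(p0, p1, dt, car_speed, car_distance, lane_half_width=2.0):
    """p0, p1: deer positions (x, y) in consecutive frames dt seconds apart."""
    vy = (p1[1] - p0[1]) / dt            # lateral velocity in m/s
    time_to_arrival = car_distance / car_speed
    y_at_arrival = p1[1] + vy * time_to_arrival
    return abs(y_at_arrival) > lane_half_width

# Deer moves from lateral offset -1.5 m to -0.5 m over 0.2 s (5 m/s sideways);
# the car is 40 m away travelling at 20 m/s, so it arrives in 2 s.
if will_clear_lane((30, -1.5), (30, -0.5), dt=0.2, car_speed=20, car_distance=40):
    print("slow down")   # deer is projected to be out of the lane in time
else:
    print("stop")
```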

Connectivity

Lastly, while AVs are fully operational with just sensors and algorithms, they also need input from surrounding vehicles and wireless hubs (i.e., connectivity) if they are to be more efficient and operate in close proximity. If a vehicle decides to stop because it thinks a plastic bag is a child, it needs to forewarn the vehicle behind it that it’s stopping, so that vehicle can simultaneously and safely stop or give it a wide berth. Additionally, fleets of AVs on the roadway may be dangerous if not interconnected. Take, for example, two AVs on the outside lanes of a three-lane highway keeping pace with each other: what happens when they both want to switch into the middle lane at the same time? Perhaps an improbable situation, but one that could cause an accident nonetheless, and one that can only be ruled out if the vehicles are able to communicate their intentions and synchronize their movements. Thus, AVs cannot operate on fully independent systems.
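One naive way to resolve that middle-lane conflict is an intent-broadcast protocol with a simple arbitration rule, sketched below. The message format and the yield-to-lower-ID rule are invented for illustration; they are not an actual V2V standard.

```python
# Naive V2V lane-change arbitration sketch. Each vehicle broadcasts an
# 'intent' message; on a conflict, the lower vehicle ID goes first.
# The message fields and tie-break rule are invented for illustration only.

def resolve_lane_changes(intents):
    """intents: list of dicts like {'vehicle_id': ..., 'target_lane': ...}."""
    granted, waiting = [], []
    claimed = set()
    for msg in sorted(intents, key=lambda m: m["vehicle_id"]):
        if msg["target_lane"] in claimed:
            waiting.append(msg["vehicle_id"])   # yield, retry next cycle
        else:
            claimed.add(msg["target_lane"])
            granted.append(msg["vehicle_id"])
    return granted, waiting

# Both outside-lane vehicles want the middle lane (lane 1) at once.
intents = [{"vehicle_id": 42, "target_lane": 1},
           {"vehicle_id": 17, "target_lane": 1}]
print(resolve_lane_changes(intents))  # -> ([17], [42]): 17 merges, 42 waits
```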

More than that, connectivity could boost efficiency in driving. Traffic could synchronize[5], eliminating the mile-long start/stop jerking; stoplights could become nonexistent, with vehicles in a large intersection automatically making space for each other. An overarching algorithmic system, essentially acting as an air traffic controller, could calculate each vehicle’s route, determine where each AV will be at any given moment, and ultimately minimize everyone’s time in traffic. Vehicles could also receive moment-by-moment reports from traffic and accident centers such as the Department of Transportation, local newscasters, or Google Maps. This could be a boon for a vehicle’s passengers: AVs could automatically switch routes due to road closures, traffic, or iciness, saving passengers time and reducing contact with safety hazards.
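As a sketch of that air-traffic-controller idea, the toy coordinator below grants each approaching vehicle the earliest non-overlapping time slot through an intersection. The slot length and arrival times are invented assumptions, not a real traffic-management scheme.

```python
# Toy intersection-reservation sketch: a central coordinator grants each
# vehicle the earliest slot at or after its requested arrival time, so no
# two vehicles occupy the intersection at once. All numbers are invented.

SLOT_S = 2.0  # assumed seconds a vehicle occupies the intersection

def schedule(requests):
    """requests: list of (vehicle_id, earliest_arrival_s); returns grants."""
    grants, next_free = {}, 0.0
    for vid, arrival in sorted(requests, key=lambda r: r[1]):
        start = max(arrival, next_free)   # wait if the slot is taken
        grants[vid] = start
        next_free = start + SLOT_S
    return grants

print(schedule([("A", 0.0), ("B", 1.0), ("C", 5.0)]))
# -> {'A': 0.0, 'B': 2.0, 'C': 5.0}  (B is delayed 1 s; C flows through)
```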

While I’m not going to focus too much on AV connectivity technology, it’s interesting to note where the connectivity occurs. The picture below summarizes the big five in AV wireless tech; it just doesn’t identify individual objects such as trees, buildings, rocks, and the like, which may have sensors attached to them to inform AVs of their location. This could be helpful in navigating unique roadway situations where, for instance, a tree is situated very close to the road.


[1] Before the update there were only five tiers, spanning Levels 0-4. As technology progressed, Level 4 developed parts (a) and (b) and was eventually bisected into two separate levels. Images detailing AV tech, history, and projections refer to full autonomy as Level 4 or Level 5 depending on when they were created.

[2] It’s worth noting that since SAE created the autonomy levels, they have taken on a life of their own; it’s unclear whether SAE sanctioned any of the changes or whether they were simplified through the grapevine. Many other sources would characterize AEBs as Level 1 only, but SAE leaves room for them in Level 0.

[3] This article discusses how lidar and radar work, their differences, pros and cons, and which of the two sensors individual AV companies prefer.

[4] I won’t go into the different kinds of AI models employed by AVs; however, this article details many of the commonly used algorithms.

[5] Check out this video on how phantom traffic forms and this video for a more detailed look. While I prefer the first video because it provides more information in less time, the animation in the second video is hard to beat.