Autonomous cars often use a combination of standard two-dimensional cameras and depth-sensing 'LiDAR' units to recognize the world around them.
In LiDAR (light detection and ranging) scanning, which is used by Waymo, one or more lasers emit short pulses of light that bounce back when they hit an obstacle.
These sensors constantly scan the surrounding areas for information, acting as the "eyes" of the car.
While LiDAR units provide depth information, their low resolution makes it difficult to detect small, distant objects without the help of a conventional camera linked to them in real time.
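The pulse-and-echo principle described above amounts to a time-of-flight calculation: the sensor measures how long a pulse takes to return, and distance follows from the speed of light. A minimal sketch (illustrative only, not Waymo's actual processing):

```python
# Illustrative time-of-flight sketch, not any vendor's real code.
C = 299_792_458.0  # speed of light in m/s


def lidar_distance(round_trip_time_s: float) -> float:
    """Distance to an obstacle from a pulse's round-trip time.

    The pulse travels to the obstacle and back, so the one-way
    distance is half the total path length.
    """
    return C * round_trip_time_s / 2.0


# A pulse that echoes back after ~200 nanoseconds hit something
# roughly 30 metres away.
print(round(lidar_distance(200e-9), 1))  # → 30.0
```

The nanosecond timescale here is why LiDAR units need very fast timing electronics: at highway distances, the entire round trip takes well under a microsecond.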
In November of last year, Apple revealed details of its driverless car system that uses lasers to detect pedestrians and cyclists at a distance.
Apple researchers said they were able to obtain "highly encouraging results" by detecting pedestrians and cyclists with only LiDAR data.
They also wrote that their approach could outperform other methods for detecting three-dimensional objects that use only LiDAR.
Other autonomous cars are generally based on a combination of cameras, sensors and lasers.
An example is Volvo's self-driving cars, which rely on a combination of around 28 cameras, sensors and lasers.
An onboard computer network processes this information and, together with GPS data, generates a real-time map of stationary and moving objects in the environment.
Twelve ultrasonic sensors around the car are used to identify objects near the vehicle and to support autonomous operation at low speeds.
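The ultrasonic sensors work on the same echo principle as LiDAR, but with sound rather than light, so the relevant speed is that of sound in air (roughly 343 m/s at 20 °C). A hedged sketch, not Volvo's actual firmware:

```python
# Illustrative ultrasonic ranging sketch; constants are typical
# textbook values, not taken from any specific vehicle.
SPEED_OF_SOUND = 343.0  # m/s in air at ~20 °C, approximate


def ultrasonic_distance(echo_time_s: float) -> float:
    """One-way distance from an ultrasonic echo's round-trip time."""
    return SPEED_OF_SOUND * echo_time_s / 2.0


# An echo returning after 10 milliseconds indicates an object
# roughly 1.7 metres away - the short range that makes these
# sensors suited to low-speed manoeuvring rather than highway use.
print(ultrasonic_distance(0.010))
```

Because sound travels about a million times slower than light, ultrasonic sensors are cheap and need no exotic timing hardware, but their useful range is only a few metres.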
A wave radar and a camera mounted on the windshield read traffic signs and the curvature of the road, and can detect objects on the road such as other road users.
Four radars behind the front and rear bumpers also locate objects.
Two long-range radars in the bumper detect fast-moving vehicles approaching from a distance, which is useful on motorways.
Four cameras (two on the side mirrors, one on the grille and one on the rear bumper) monitor objects close to the vehicle and the lane markings.