In addition to the robustness required for continuous operation and the need to constantly monitor the vehicle's own perception horizon, the high bandwidth of the incoming sensor data poses a challenge for the data processing system.
[Figure: Schematic functional representation of the sensor data processing system]
For this reason, an Nvidia Drive AGX platform, whose particular strength is the extraction of information from camera images, is used to process the sensor data for environment perception. It is supplemented by a separate CarPC for localization and self-state estimation, which fuses the signals of the GNSS system with those of the IMU and the 360° lidar [see also the chapter on localization].
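The details of that fusion are covered in the localization chapter; as a rough illustration only, the following sketch shows a minimal loosely coupled GNSS/IMU Kalman filter in 2D. The simplified state layout and all noise values are assumptions chosen for the example, not the project's actual filter, which also incorporates the lidar.

```python
import numpy as np

class GnssImuFusion:
    """Minimal 2D loosely coupled filter: IMU acceleration drives the
    prediction, GNSS position fixes correct the drift.
    State vector: [x, y, vx, vy]."""

    def __init__(self):
        self.x = np.zeros(4)                       # state estimate
        self.P = np.eye(4)                         # state covariance
        self.Q = np.diag([0.01, 0.01, 0.1, 0.1])   # process noise (tuned per IMU)
        self.R = np.diag([1.0, 1.0])               # GNSS measurement noise [m^2]

    def predict(self, acc_xy, dt):
        """Propagate the state with the measured acceleration over dt seconds."""
        F = np.eye(4)
        F[0, 2] = F[1, 3] = dt
        B = np.array([[0.5 * dt**2, 0.0],
                      [0.0, 0.5 * dt**2],
                      [dt, 0.0],
                      [0.0, dt]])
        self.x = F @ self.x + B @ np.asarray(acc_xy)
        self.P = F @ self.P @ F.T + self.Q

    def update_gnss(self, pos_xy):
        """Correct the state with an absolute GNSS position fix."""
        H = np.zeros((2, 4))
        H[0, 0] = H[1, 1] = 1.0
        y = np.asarray(pos_xy) - H @ self.x        # innovation
        S = H @ self.P @ H.T + self.R
        K = self.P @ H.T @ np.linalg.inv(S)        # Kalman gain
        self.x = self.x + K @ y
        self.P = (np.eye(4) - K @ H) @ self.P
```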
Because the software is implemented as a distributed system using the Robot Operating System (ROS), all information is available on both computers.
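As an illustration of this location transparency, the sketch below shows a minimal rospy node as it could run on one of the computers while subscribing to data published by the others. The topic names are assumptions for the example; only the standard sensor_msgs message types are given.

```python
#!/usr/bin/env python
# Minimal ROS node receiving data published by other machines on the network.
# ROS makes the publishers' physical location transparent as long as all
# computers point ROS_MASTER_URI at the same master.
import rospy
from sensor_msgs.msg import NavSatFix, PointCloud2

def on_gnss(msg):
    rospy.loginfo("GNSS fix: lat=%.6f lon=%.6f", msg.latitude, msg.longitude)

def on_lidar(msg):
    rospy.loginfo("Lidar cloud with %d x %d points", msg.height, msg.width)

if __name__ == "__main__":
    rospy.init_node("environment_consumer")
    rospy.Subscriber("/gnss/fix", NavSatFix, on_gnss)         # e.g. from the CarPC
    rospy.Subscriber("/lidar/points", PointCloud2, on_lidar)  # e.g. from the Drive AGX
    rospy.spin()  # process callbacks until shutdown
```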
An environment model aggregates the outputs of the computer vision system, the self-state estimation, and the C2X interface into a consistent image of the vehicle's surroundings. Based on this information, a third computer system (the HAD-PC in the figure) can make driving decisions and plan a path to follow.
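A minimal sketch of such an aggregation might look as follows; all type and field names are illustrative, and a real environment model would additionally handle track association, timing, and uncertainty.

```python
from dataclasses import dataclass, field
from typing import Dict, List, Tuple

@dataclass
class TrackedObject:
    obj_id: int
    obj_class: str                  # e.g. "car", "pedestrian", "two-wheeler"
    position: Tuple[float, float]   # world coordinates [m]
    velocity: Tuple[float, float]   # [m/s]
    source: str                     # "vision", "c2x", ...

@dataclass
class EnvironmentModel:
    ego_pose: Tuple[float, float, float] = (0.0, 0.0, 0.0)  # x, y, yaw
    objects: Dict[int, TrackedObject] = field(default_factory=dict)

    def update_ego(self, pose):
        """Take over the latest self-state estimate from the CarPC."""
        self.ego_pose = pose

    def merge_object(self, obj: TrackedObject):
        """Insert or overwrite a track; a real system would associate and
        fuse tracks from the different sources instead of overwriting."""
        self.objects[obj.obj_id] = obj

    def snapshot(self) -> Tuple[Tuple[float, float, float], List[TrackedObject]]:
        """Consistent view handed to the HAD-PC for planning."""
        return self.ego_pose, list(self.objects.values())
```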
To make use of the data from the camera system, pre-trained ML models provided by Nvidia are used and extended to recognize lane markings, objects, and the drivable free space around the vehicle. This allows various moving objects, such as cars and pedestrians, but also traffic lights and signs, to be detected and classified. Fusing these detections with the point clouds of the lidars makes it possible to transform the information from the camera image into world coordinates.
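The underlying idea of this camera-lidar fusion can be sketched as follows: the depth of a detection pixel is taken from the lidar point that projects closest to it, after which the pixel can be back-projected and transformed into world coordinates. The function name and the calibration inputs are assumptions for the example, not the project's actual interface.

```python
import numpy as np

def pixel_to_world(u, v, points_cam, K, T_world_cam):
    """u, v: detection pixel; points_cam: Nx3 lidar points already in the
    camera frame; K: 3x3 camera intrinsics; T_world_cam: 4x4 camera-to-world
    pose from calibration and the localization system."""
    pts = points_cam[points_cam[:, 2] > 0]   # keep points in front of the camera
    proj = (K @ pts.T).T                     # project into the image plane
    uv = proj[:, :2] / proj[:, 2:3]          # perspective division
    # Use the depth of the lidar point whose projection is nearest the pixel.
    idx = np.argmin(np.linalg.norm(uv - np.array([u, v]), axis=1))
    depth = pts[idx, 2]
    # Back-project the pixel with that depth and transform into world coordinates.
    p_cam = np.linalg.inv(K) @ np.array([u, v, 1.0]) * depth
    return (T_world_cam @ np.append(p_cam, 1.0))[:3]
```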
Video Link: [Coming Soon]
The colors of the detected lane markings indicate their position relative to the vehicle, while the letter at the far end of each marking indicates its classification (B – border, S – solid line, D – dashed line). Object classes are identified by the color of their bounding box: blue represents cars, yellow pedestrians, and pink two-wheelers. Signs are framed in white, and traffic lights take on the color of their detected light signal. For the free-space detection, the color of the boundary line indicates the type of boundary: green stands for the end of the road, blue for a potentially moving object.
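For reference, this color scheme can be written out as a small visualization configuration; the RGB values here are illustrative placeholders, not the exact shades used in the video.

```python
# Color scheme of the perception visualization (RGB values are placeholders).
OBJECT_COLORS = {
    "car":         (0, 0, 255),     # blue bounding boxes
    "pedestrian":  (255, 255, 0),   # yellow
    "two-wheeler": (255, 0, 255),   # pink
    "sign":        (255, 255, 255), # white frame; traffic lights instead take
                                    # the color of their detected light signal
}

LANE_CLASSES = {"B": "border", "S": "solid line", "D": "dashed line"}

FREESPACE_BOUNDARY_COLORS = {
    "road_end":      (0, 255, 0),   # green
    "moving_object": (0, 0, 255),   # blue
}
```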
Link to the Video
Video description of the local dynamic map (LDM):
The video shows an excerpt from a measurement drive on the test corridor in October 2021. It shows the objects detected by the front sensors (Ibeo lidar, radar, camera), the point cloud of the front Ouster lidar, and the image of the front camera. The different colors of the objects indicate their classification.