Our Vision is to Enable Full Autonomy

Environment Perception

  • Roads
  • Pedestrians
  • Vehicles

Roads

We achieve high recognition accuracy for multiple lanes, signs, signals, and drivable areas in darkness, glare, and poor weather, and on roads that lack clear lane markers or boundaries.
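
The section above does not describe the actual pipeline, so the following is only a minimal sketch of one common pattern behind lane recognition: a segmentation model labels lane pixels, and a low-order polynomial is fit to each lane so the curve can be followed even where markings are faint. The mask here is synthetic stand-in data.

```python
import numpy as np

def fit_lane(mask: np.ndarray, degree: int = 2) -> np.ndarray:
    """Fit a polynomial x = f(y) to the lane pixels of a binary mask."""
    ys, xs = np.nonzero(mask)
    return np.polyfit(ys, xs, degree)  # coefficients, highest degree first

# Synthetic mask standing in for a segmentation network's output.
h, w = 120, 160
mask = np.zeros((h, w), dtype=bool)
ys = np.arange(h)
xs = np.clip((0.002 * ys**2 + 0.1 * ys + 40).astype(int), 0, w - 1)
mask[ys, xs] = True

print("lane curve coefficients:", fit_lane(mask))
```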

Pedestrians

Our advanced perception technology detects pedestrians and identifies their body keypoints to read posture and likely intent. We accurately measure each pedestrian's distance from the car.
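
The paragraph does not say how distance is measured; a standard monocular approximation combines detected keypoints with the pinhole camera model and an assumed pedestrian height. The focal length, height prior, and keypoint coordinates below are illustrative, not ours.

```python
FOCAL_PX = 1000.0        # camera focal length in pixels (assumed calibration)
PERSON_HEIGHT_M = 1.70   # assumed average pedestrian height

def estimate_distance(head_y: float, ankle_y: float) -> float:
    """Pinhole-model distance: Z = f * H / h_pixels."""
    pixel_height = abs(ankle_y - head_y)
    return FOCAL_PX * PERSON_HEIGHT_M / pixel_height

# Head and ankle keypoints for one detected pedestrian (hypothetical).
print(f"{estimate_distance(head_y=210.0, ankle_y=380.0):.1f} m")  # 10.0 m
```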

Vehicles

We track every car with a 3D bounding box localized in real time on high-definition (HD) maps. We robustly detect each car's orientation and precisely estimate its distance and direction.
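
As an illustration of the representation rather than of the detection stack itself: a 3D bounding box expressed in the ego-vehicle frame is enough to recover a car's distance, bearing, and orientation. All values here are hypothetical.

```python
import math
from dataclasses import dataclass

@dataclass
class Box3D:
    x: float       # box center in the ego frame, meters (forward)
    y: float       # meters (left)
    z: float       # meters (up)
    length: float
    width: float
    height: float
    yaw: float     # heading about the up axis, radians

def range_and_bearing(box: Box3D) -> tuple[float, float]:
    """Distance and direction of the box center from the ego vehicle."""
    return math.hypot(box.x, box.y), math.atan2(box.y, box.x)

car = Box3D(x=12.0, y=-3.0, z=0.0, length=4.5, width=1.8, height=1.5, yaw=0.1)
dist, bearing = range_and_bearing(car)
print(f"distance {dist:.1f} m, bearing {math.degrees(bearing):.1f} deg")
```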

Semantic HD Maps

We reconstruct the 3D positions of roads, traffic signs, signals, and other surroundings by extracting semantic points from 2D images taken by multiple cars. Fusing these points with GPS and IMU data yields higher-precision maps. Our semantic HD mapping solution is far more scalable and production-ready: it costs only 1/10 to 1/100 as much as LiDAR-equipped data collection methods.
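
The paragraph gives the idea but not the math. The core geometric step is triangulating a semantic point (say, a sign corner) from its pixel locations in images taken by two cars whose camera poses are already known from GPS/IMU. The intrinsics and poses below are made up, and the method shown is standard linear (DLT) triangulation, not necessarily the one used in production.

```python
import numpy as np

def triangulate(P1, P2, uv1, uv2):
    """Linear (DLT) triangulation of one point from two views."""
    A = np.vstack([
        uv1[0] * P1[2] - P1[0],
        uv1[1] * P1[2] - P1[1],
        uv2[0] * P2[2] - P2[0],
        uv2[1] * P2[2] - P2[1],
    ])
    X = np.linalg.svd(A)[2][-1]   # null vector of A, homogeneous coords
    return X[:3] / X[3]

K = np.array([[800.0, 0, 320], [0, 800.0, 240], [0, 0, 1]])    # assumed intrinsics
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])              # first car's camera
P2 = K @ np.hstack([np.eye(3), np.array([[-1.0], [0], [0]])])  # second car, 1 m over

X_true = np.array([2.0, 0.5, 10.0, 1.0])    # ground-truth sign corner
uv1 = (P1 @ X_true)[:2] / (P1 @ X_true)[2]  # its pixel location in each view
uv2 = (P2 @ X_true)[:2] / (P2 @ X_true)[2]
print(triangulate(P1, P2, uv1, uv2))        # ~[2.0, 0.5, 10.0]
```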

Data-Driven Path Planning

Our data-driven approach is to build a driver with billions of miles of driving experience. Crowdsourcing lets us collect billions of driving trajectories localized in semantic HD maps. By learning a mapping from environment perception data to driving trajectories in those maps, we plan the vehicle's path. This gives us a unique and elegant framework for solving corner cases by adding the corresponding data rather than adding rules.
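
The page does not disclose the planner's model, so the following shows the data-driven idea only in its simplest possible form: treat planning as supervised learning from perception/map features to future waypoints, so that covering a corner case means adding training examples rather than writing rules. A linear least-squares fit stands in for whatever learned model is actually used, and all data is synthetic.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical training set: each row is a perception/map feature vector,
# each target a flattened sequence of future (x, y) waypoints.
n, feat_dim, horizon = 1000, 16, 5
features = rng.normal(size=(n, feat_dim))
true_w = rng.normal(size=(feat_dim, horizon * 2))
trajectories = features @ true_w + 0.01 * rng.normal(size=(n, horizon * 2))

# "Behavioral cloning" reduced to linear least squares.
w = np.linalg.lstsq(features, trajectories, rcond=None)[0]

# Plan for a new scene: predict its next `horizon` waypoints.
plan = (features[:1] @ w).reshape(horizon, 2)
print(plan)
```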

Software Ready for Mass-Market Adoption

  • Embedded-device friendly
  • 10-100x speedup and 100x model-size compression with retained accuracy (a minimal quantization sketch follows this list)
  • Sparse semantic points from map and video data enable crowdsourcing and testing at scale
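
The 100x compression figure above presumably combines several techniques (pruning, distillation, quantization); none are specified, so here is a minimal sketch of just one of them. Symmetric 8-bit post-training quantization on its own shrinks float32 weights 4x while keeping the reconstruction error small.

```python
import numpy as np

def quantize_int8(w: np.ndarray):
    """Symmetric per-tensor 8-bit quantization: w ≈ scale * q."""
    scale = np.abs(w).max() / 127.0
    q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
    return q, scale

rng = np.random.default_rng(0)
w = rng.normal(scale=0.1, size=(256, 256)).astype(np.float32)
q, scale = quantize_int8(w)
err = np.abs(w - scale * q.astype(np.float32)).max()
print(f"int8 is 4x smaller than float32; max abs error {err:.5f}")
```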