A Platform for HD Maps

Automatic generation of HD maps from video


Our Process

  1. Point cloud data is extracted from video via deep learning; this forms the first layer of our maps.
  2. Annotations such as lanes, signs, and signals are extracted, forming the second layer.
  3. Batches of point cloud and annotation data are fused into a single cohesive output.
  4. The fused map is georeferenced, aligning point cloud and annotation data with real-world coordinates.
  5. Vector maps are produced in the OpenDRIVE format, enabling autonomous-driving simulation and research.
  6. Driver behavioral data is extracted and annotated using machine learning, forming the third layer of our maps.
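The steps above can be sketched in code. The following is a minimal, illustrative Python sketch, not the platform's actual API: the `HDMap` layers, the `fuse` and `georeference` functions, and the equirectangular approximation are all assumptions made for this example (real fusion would align and deduplicate batches, and production georeferencing would use a proper projection library).

```python
import math
from dataclasses import dataclass, field

@dataclass
class HDMap:
    point_cloud: list                              # layer 1: points in a local frame
    annotations: list                              # layer 2: lanes, signs, signals
    behavior: list = field(default_factory=list)   # layer 3: driver behavior

def fuse(batches):
    """Fuse batches of points into one output (here: simple concatenation)."""
    return [p for batch in batches for p in batch]

def georeference(points, origin_lat, origin_lon):
    """Convert local east/north offsets in metres to lat/lon using a
    small-area equirectangular approximation around the origin."""
    R = 6378137.0  # WGS-84 equatorial radius, metres
    out = []
    for east, north in points:
        lat = origin_lat + math.degrees(north / R)
        lon = origin_lon + math.degrees(
            east / (R * math.cos(math.radians(origin_lat))))
        out.append((lat, lon))
    return out

# Two batches: a point at the local origin and one 100 m to the east.
pts = fuse([[(0.0, 0.0)], [(100.0, 0.0)]])
geo = georeference(pts, origin_lat=48.0, origin_lon=11.0)
hd_map = HDMap(point_cloud=geo, annotations=["lane: solid-white"])
print(hd_map.point_cloud)
```

The 100 m eastward point resolves to a longitude slightly east of the origin, while its latitude is unchanged, which is the behavior step 4 describes: local map geometry anchored to real-world coordinates.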