A GPU-accelerated PyTorch re-implementation of M-Detector [1], an unsupervised algorithm for detecting moving points in LiDAR point cloud sequences. Developed as a personal research project exploring dynamic object detection for autonomous driving, evaluated on the nuScenes dataset.
A nuScenes scene with points labeled as dynamic (red), ground (green), or static background (grey). The detector requires an initialization period to build up history, visible as the blue points.
The core idea of M-Detector is to identify moving points without any supervision or learned priors, purely by detecting geometric inconsistencies over time: a point is considered dynamic if it consistently occludes parts of an established static map of the environment.
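Concretely, the occlusion test can be phrased as a range comparison: project each incoming point into a spherical range image and flag it as a dynamic candidate when it lands noticeably in front of the depth already recorded there by earlier frames. A minimal sketch of this idea (NumPy here for brevity; the repository runs the equivalent on the GPU with PyTorch, and all function names, the assumed ±30° vertical FOV, and the margin parameter are illustrative, not the repo's API):

```python
import numpy as np

def to_range_image(points, h=32, w=360):
    """Project Nx3 points into a spherical range image (closest return per cell).

    Assumes a +/-30 degree vertical field of view for the row mapping.
    """
    r = np.linalg.norm(points, axis=1)
    az = np.arctan2(points[:, 1], points[:, 0])           # azimuth in [-pi, pi]
    el = np.arcsin(points[:, 2] / np.maximum(r, 1e-9))    # elevation angle
    col = ((az + np.pi) / (2 * np.pi) * w).astype(int) % w
    row = np.clip(((el + np.pi / 6) / (np.pi / 3) * h).astype(int), 0, h - 1)
    img = np.full((h, w), np.inf)
    np.minimum.at(img, (row, col), r)                     # keep the closest range per cell
    return img, row, col, r

def occlusion_candidates(points, history_img, margin=0.5):
    """Flag points as dynamic candidates when they occlude the accumulated map,
    i.e. their range is shorter than the stored range by more than `margin`."""
    _, row, col, r = to_range_image(points)
    return r < history_img[row, col] - margin
```

For example, a return at 5 m along a ray where the history records a wall at 10 m is flagged, while a return at the wall itself is not.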
The pipeline consists of five stages:
- Ground removal — GPU-accelerated RANSAC to pre-label ground points
- Occlusion pass — fast coarse check against recent history to generate initial dynamic candidates
- Map Consistency Check (MCC) — filters candidates against a longer-term static map to reduce false positives
- Event-based tests — handles edge cases such as objects moving toward or away from the sensor
- Frame refinement — HDBSCAN clustering with convex hull expansion for cleaner object-level detections
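For the ground-removal stage, a GPU-friendly RANSAC evaluates many candidate planes in parallel as one batched tensor operation rather than looping over hypotheses. A rough sketch with PyTorch (the function name, hypothesis count, and inlier threshold are illustrative assumptions, not the repository's actual API):

```python
import torch

def ransac_ground(points, n_hyp=256, thresh=0.2, seed=0):
    """Fit a ground plane with batched RANSAC.

    points: (N, 3) tensor. Returns (plane, inlier_mask) where plane is
    (a, b, c, d) with a*x + b*y + c*z + d = 0 and unit normal (a, b, c).
    """
    g = torch.Generator(device=points.device).manual_seed(seed)
    n = points.shape[0]
    # Sample n_hyp triplets of points; each triplet defines one candidate plane.
    idx = torch.randint(n, (n_hyp, 3), generator=g, device=points.device)
    tri = points[idx]                                   # (n_hyp, 3, 3)
    normals = torch.linalg.cross(tri[:, 1] - tri[:, 0], tri[:, 2] - tri[:, 0], dim=1)
    normals = normals / normals.norm(dim=1, keepdim=True).clamp_min(1e-9)
    d = -(normals * tri[:, 0]).sum(dim=1)               # (n_hyp,) plane offsets
    # Point-to-plane distances for every hypothesis at once: (n_hyp, N).
    dist = (points @ normals.T + d).abs().T
    inliers = dist < thresh
    # Keep the hypothesis with the most inliers.
    best = inliers.sum(dim=1).argmax()
    plane = torch.cat([normals[best], d[best:best + 1]])
    return plane, inliers[best]
```

Because every hypothesis is scored in a single `(n_hyp, N)` distance matrix, the search parallelizes trivially on the GPU; degenerate (collinear) triplets produce near-zero normals and simply lose the inlier vote.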
Evaluated on nuScenes, the geometric pipeline achieves dynamic-point recall of roughly 50-60%, depending on the scene. The frame refinement stage showed no significant improvement over the geometric baseline — bidirectional processing is a natural next step that was outside the scope of this project. Overall performance does not match the original method, likely because the original authors include the ego vehicle in their detections, inflating the reported score.
The primary contribution is the re-implementation in PyTorch and the surrounding evaluation infrastructure, making this more suitable for pseudo-label generation for neural network training.
```
.
├── config/
│   └── m_detector_config.yaml   # Main configuration file
├── src/
│   ├── core/                    # Core algorithm (occlusion, MCC, event tests)
│   ├── data_utils/              # Ground truth generation and validation
│   ├── tuning/                  # Ray + Optuna experiment infrastructure
│   └── utils/
└── scripts/                     # Entry points for all pipeline stages
```
For setup instructions, dataset preparation, and full usage documentation, see DEVELOPER.md.
[1] Wu, H., Li, Y., Xu, W. et al. Moving event detection from LiDAR point streams. Nature Communications 15, 345 (2024). https://doi.org/10.1038/s41467-023-44554-8
