🤖 AI Summary
A new open-source ROS2 package implements a signboard detection pipeline that uses only 3D LiDAR point clouds and intensity — no cameras or semantic models. The node subscribes to pcd_segment_obs, filters points by distance (0.5–4.0 m), angular sector (10°–170°), and intensity (≥ 130), clusters the remaining points with DBSCAN (eps = 0.7 m, min_samples = 3), and then attempts to match each cluster to a predefined template via initial alignment and ICP. Matches that exceed a fitness threshold of 0.8 are logged as detections, and the filtered point cloud is published on the filtered_pointcloud2 topic. The repo includes ROS2 build/run instructions and an example ros2 bag for testing.
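A minimal, standalone sketch of the pipeline described above may help make the steps concrete. This is not the package's actual code: it assumes `points` is an (N, 4) array of x, y, z, intensity already extracted from the pcd_segment_obs PointCloud2, `template` is an (M, 3) array of the reference signboard geometry, and the azimuth convention, initial alignment, and ICP correspondence distance are assumptions rather than values from the repository.

```python
import numpy as np
import open3d as o3d
from sklearn.cluster import DBSCAN


def detect_signboards(points, template,
                      min_range=0.5, max_range=4.0,
                      min_angle_deg=10.0, max_angle_deg=170.0,
                      min_intensity=130.0, fitness_threshold=0.8):
    xyz, intensity = points[:, :3], points[:, 3]

    # 1. Range, angular-sector, and intensity gating.
    rng = np.linalg.norm(xyz[:, :2], axis=1)             # planar range (assumption)
    ang = np.degrees(np.arctan2(xyz[:, 1], xyz[:, 0]))   # azimuth in degrees (assumption)
    mask = ((rng >= min_range) & (rng <= max_range) &
            (ang >= min_angle_deg) & (ang <= max_angle_deg) &
            (intensity >= min_intensity))
    filtered = xyz[mask]
    if len(filtered) == 0:
        return []

    # 2. Euclidean clustering with DBSCAN (eps = 0.7 m, min_samples = 3).
    labels = DBSCAN(eps=0.7, min_samples=3).fit(filtered).labels_

    # 3. Match each cluster against the template: coarse alignment, then ICP.
    template_pcd = o3d.geometry.PointCloud(o3d.utility.Vector3dVector(template))
    detections = []
    for label in set(labels) - {-1}:                      # -1 marks DBSCAN noise
        cluster = filtered[labels == label]
        cluster_pcd = o3d.geometry.PointCloud(o3d.utility.Vector3dVector(cluster))

        # Coarse initial alignment: translate the template onto the cluster
        # centroid (a simplification of whatever alignment the package uses).
        init = np.eye(4)
        init[:3, 3] = cluster.mean(axis=0) - template.mean(axis=0)

        result = o3d.pipelines.registration.registration_icp(
            template_pcd, cluster_pcd,
            max_correspondence_distance=0.2,              # assumed value
            init=init,
            estimation_method=o3d.pipelines.registration.TransformationEstimationPointToPoint())

        # 4. Accept matches whose ICP fitness clears the 0.8 threshold.
        if result.fitness >= fitness_threshold:
            detections.append((label, result.fitness, result.transformation))
    return detections
```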
This approach is significant for robotics and autonomous-vehicle teams that need a lightweight, geometry-first detector that works when cameras are unavailable or unreliable (night, dust, glare). Key technical trade-offs are explicit: fixed distance/angle/intensity cutoffs and hand-tuned DBSCAN/ICP parameters make it fast and interpretable but potentially brittle across sensor setups or diverse sign geometries. The ICP fitness threshold (0.8) and template dependency mean performance hinges on template quality and sensor calibration. The package is a useful baseline for LiDAR-only sign detection and a practical starting point for extensions such as multiple templates, adaptive thresholds, learned descriptors for robust matching, or integration with semantic classifiers.
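One low-effort way to address the "adaptive thresholds" extension mentioned above is to expose the hand-tuned cutoffs as ROS2 node parameters so they can be retuned per sensor setup at launch time without editing code. The sketch below is illustrative only; the node and parameter names are assumptions, not taken from the package, though the default values mirror the ones described in the summary.

```python
import rclpy
from rclpy.node import Node


class SignboardDetectorParams(Node):
    """Skeleton node that declares the pipeline's cutoffs as ROS2 parameters."""

    def __init__(self):
        super().__init__('signboard_detector')
        # Defaults mirror the values described in the summary.
        self.declare_parameter('min_range_m', 0.5)
        self.declare_parameter('max_range_m', 4.0)
        self.declare_parameter('min_angle_deg', 10.0)
        self.declare_parameter('max_angle_deg', 170.0)
        self.declare_parameter('min_intensity', 130.0)
        self.declare_parameter('dbscan_eps_m', 0.7)
        self.declare_parameter('dbscan_min_samples', 3)
        self.declare_parameter('icp_fitness_threshold', 0.8)
        # Read back a parameter (e.g. overridden via a launch file or YAML).
        self.min_intensity = self.get_parameter('min_intensity').value


def main():
    rclpy.init()
    rclpy.spin(SignboardDetectorParams())


if __name__ == '__main__':
    main()
```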