Add a part of the new rigid registration method [FRICP] #6199
yaoyx689 wants to merge 24 commits into PointCloudLibrary:master from
Conversation
Hello, thanks for the pull request. A few things:
I assume you mean
I am not completely sure what you mean with this. Would you have to duplicate code from this pull request to
Please update the license as per https://pcl.readthedocs.io/projects/tutorials/en/latest/writing_new_classes.html#licenses and use the
Thanks for your reply. I will modify them as you said.
In cases where the two point clouds used for registration only partially overlap or contain noise, our robust ICP algorithm can achieve higher accuracy than traditional ICP algorithms. On one hand, this is due to using the Welsch function instead of the L2 norm as the objective function: it reduces the impact of outliers, similar in spirit to the distance threshold ICP uses to filter outlier points, but it achieves better results by giving each correspondence its own continuous weight instead of a binary 0/1 decision. On the other hand, the Welsch function can provide a coarse-to-fine result by adjusting parameters (
By default, it will be set based on the maximum distance between corresponding points. This means it has the same effect as setting
In our method, an important part is that the parameters of the Welsch function can be adaptively adjusted. During the iteration process, the parameter
Hello, we have recently added a subclass. However, our PR is currently failing the automated checks because the server used for code verification runs out of disk space. Specifically, we are seeing the following error: It seems that #6364 encountered a similar issue before. Could you please clean up the disk space on the server corresponding to the Ubuntu 24.04 GCC build environment so that our PR can proceed successfully? Thank you very much for your help.
Hi, thanks for adding the new class. I will take a closer look and test your changes as soon as I have time, but that might take a few weeks.
mvieth left a comment
Hello, I have finally found some time to test your code in detail. With several point clouds, FRICP was indeed able to find a more accurate alignment than other registration algorithms. So I think it would be great to have it in PCL, but first, here are a few review comments I would ask you to address. Thank you!
```cpp
matrixLog(const Matrix4d& transform) const;

RobustFunction robust_function_;
bool use_anderson_ = true;
```
I have tested several point clouds with and without Anderson acceleration enabled, and I have found that when it is enabled, the registration process is often rather unpredictable and chaotic. Therefore I would rather make this option false by default so that users can opt in.
```
@@ -0,0 +1,581 @@
/*
```
Please use the shorter license text as described here: https://pcl.readthedocs.io/projects/tutorials/en/latest/writing_new_classes.html#licenses
Thanks for the detailed review — I’ve addressed all actionable comments from your latest review.
Co-authored-by: Markus Vieth <39675748+mvieth@users.noreply.github.com>
Pull request overview
This PR adds initial building blocks for the FRICP (“Fast and Robust ICP”) registration method to PCL’s registration module, including a robust point-to-point transformation estimator, an FRICP ICP variant with optional Anderson acceleration, and a new API-level test.
Changes:
- Introduce `TransformationEstimationPointToPointRobust` with Welsch reweighting and configurable sigma.
- Add `FastRobustIterativeClosestPoint` (FRICP) implementation with dynamic Welsch scaling and optional Anderson acceleration.
- Extend registration API tests and wire new headers/sources into the registration CMake target.
Reviewed changes
Copilot reviewed 8 out of 8 changed files in this pull request and generated 13 comments.
Show a summary per file
| File | Description |
|---|---|
| `test/registration/test_registration_api.cpp` | Adds a new test case exercising `FastRobustIterativeClosestPoint`, including noisy/outlier target data. |
| `registration/src/transformation_estimation_point_to_point_robust.cpp` | Adds TU stub for the new robust point-to-point transformation estimation header. |
| `registration/include/pcl/registration/transformation_estimation_point_to_point_robust.h` | Declares the robust point-to-point transformation estimation API (Welsch weighting + sigma setter). |
| `registration/include/pcl/registration/impl/transformation_estimation_point_to_point_robust.hpp` | Implements weighted SVD alignment and helper routines for the robust estimator. |
| `registration/include/pcl/registration/fricp.h` | Declares the `FastRobustIterativeClosestPoint` API and configuration knobs (robust function, Anderson, dynamic Welsch). |
| `registration/include/pcl/registration/impl/fricp.hpp` | Implements the FRICP optimization loop, correspondence updates, dynamic Welsch schedule, and Anderson integration. |
| `registration/include/pcl/registration/anderson_acceleration.h` | Adds an Anderson acceleration helper used by FRICP. |
| `registration/CMakeLists.txt` | Registers the new headers/impl headers and the new source file in the registration build. |
```cpp
// For each point in the cloud
// If the data is dense, we don't need to check for NaN
while (cloud_iterator.isValid()) {
  // Check if the point is invalid
  if (pcl::isFinite(*cloud_iterator)) {
    accumulator[0] += weights[cp] * cloud_iterator->x;
    accumulator[1] += weights[cp] * cloud_iterator->y;
    accumulator[2] += weights[cp] * cloud_iterator->z;
    ++cp;
  }
  ++cloud_iterator;
}

if (cp > 0) {
  centroid = accumulator;
  centroid[3] = 1;
```
computeWeighted3DCentroid indexes weights using cp, which is incremented only for finite points. If any point is NaN/Inf, this shifts the weight indexing and produces an incorrect centroid (and also fails to renormalize by the sum of weights actually used). Track the iterator position separately (increment a weight index every loop iteration), and normalize the accumulator by the sum of weights of valid points.
Suggested change:

```cpp
Scalar sum_weights = Scalar(0);
Eigen::Index weight_idx = 0;
// For each point in the cloud
// If the data is dense, we don't need to check for NaN
while (cloud_iterator.isValid()) {
  // Check if the point is invalid
  if (pcl::isFinite(*cloud_iterator)) {
    const Scalar w = weights[weight_idx];
    accumulator[0] += w * cloud_iterator->x;
    accumulator[1] += w * cloud_iterator->y;
    accumulator[2] += w * cloud_iterator->z;
    sum_weights += w;
    ++cp;
  }
  ++weight_idx;
  ++cloud_iterator;
}
if (cp > 0 && sum_weights != Scalar(0)) {
  accumulator[0] /= sum_weights;
  accumulator[1] /= sum_weights;
  accumulator[2] /= sum_weights;
  centroid = accumulator;
  centroid[3] = Scalar(1);
```
```cpp
Scalar sigma2;
if (sigma_ < 0)
  sigma2 = square_distances.maxCoeff() / 9.0;
else
  sigma2 = sigma_ * sigma_;

for (int i = 0; i < npts; i++) {
  weights[i] = std::exp(-square_distances[i] / (2.0 * sigma2));
}
weights = weights / weights.sum();
```
sigma2 can become zero or extremely small (e.g., when all squared distances are 0, or when set_sigma(0) is used), which will cause division-by-zero/NaNs in the weight computation and in weights / weights.sum(). Clamp sigma2 to a small positive epsilon and handle the case where weights.sum() is ~0 (e.g., fall back to uniform weights).
```cpp
const Eigen::VectorXd new_inner_prod = prev_dF_.col(column_index_).transpose() *
                                       prev_dF_.block(0, 0, dimension_, m_k);
normal_eq_matrix_.block(column_index_, 0, 1, m_k) = new_inner_prod.transpose();
normal_eq_matrix_.block(0, column_index_, m_k, 1) = new_inner_prod;
```
This looks like a compile-time type mismatch: prev_dF_.col(...).transpose() * prev_dF_.block(...) is a 1×m_k row vector, but it is being stored in an Eigen::VectorXd (m_k×1). This should be an Eigen::RowVectorXd (or explicitly transpose the product) so the subsequent assignments to normal_eq_matrix_ match dimensions.
Suggested change:

```cpp
const Eigen::RowVectorXd new_inner_prod = prev_dF_.col(column_index_).transpose() *
                                          prev_dF_.block(0, 0, dimension_, m_k);
normal_eq_matrix_.block(column_index_, 0, 1, m_k) = new_inner_prod;
normal_eq_matrix_.block(0, column_index_, m_k, 1) = new_inner_prod.transpose();
```
```cpp
size_t N = cloud_src_demean.cols();
PCL_DEBUG("[pcl::registration::TransformationEstimationPointToPointRobust::"
          "getTransformationFromCorrelation] Loss: %.10e\n",
          (cloud_tgt_demean - R * cloud_src_demean).squaredNorm() / N);
```
The debug-only loss computation does not compile: cloud_*_demean produced by demeanPointCloud(..., Eigen::Matrix<...>&) has 4 rows, but R is 3x3, so R * cloud_src_demean is dimensionally invalid. Use only the top 3 rows (or store de-meaned clouds as 3xN matrices) when computing the loss.
Suggested change:

```cpp
(cloud_tgt_demean.template topRows<3>() -
 R * cloud_src_demean.template topRows<3>()).squaredNorm() / N);
```
Please address my latest review comments as well as Copilot's comments (unless you think one of Copilot's comments is wrong, in that case please explain why). Thank you!

OK, I will address the latest review comments as soon as possible.
Co-authored-by: Markus Vieth <39675748+mvieth@users.noreply.github.com>
Ok, I think I have addressed all the comments.
Hi, thanks for the suggestions of the contributor (#6061 (comment)). I added a class, `TransformationEstimationPointToPointRobust`, which contains the implementation of the robust ICP part of FRICP (https://github.com/yaoyx689/Fast-Robust-ICP).

Compared with `TransformationEstimation`, I added a parameter setting function, because its adaptive setting is very important for the results. If the parameter is not set explicitly, I define a default setting.

If possible, I hope to add a `FastRobustIterativeClosestPoint` class later, similar to `IterativeClosestPoint`, to perform adaptive parameter adjustment and pass it into the `TransformationEstimationPointToPointRobust` class. Furthermore, for Anderson acceleration, it is not enough to just create an accelerated subclass of `TransformationEstimation`; the iterative process itself needs to be modified. If possible, these parts can also be organized in `FastRobustIterativeClosestPoint`.