
Add a part of the new rigid registration method [FRICP] #6199

Open
yaoyx689 wants to merge 24 commits into PointCloudLibrary:master from yaoyx689:add_fricp

Conversation

@yaoyx689

Hi, thanks to the contributor for the suggestions (#6061 (comment)). I added a class, TransformationEstimationPointToPointRobust, which contains the implementation of the robust-ICP part of FRICP (https://github.com/yaoyx689/Fast-Robust-ICP).

Compared with TransformationEstimation, I added a parameter-setting function, because the adaptive setting of this parameter is very important for the results. If the user does not provide a setting for the parameter, a default setting is used.

If possible, I hope to add a FastRobustIterativeClosestPoint class later, similar to IterativeClosestPoint, to perform adaptive parameter adjustment and pass it into the TransformationEstimationPointToPointRobust class.

Furthermore, for Anderson acceleration, it is not enough to just create an accelerated class of TransformationEstimation. The iterative process needs to be modified. If possible, these parts can also be organized in FastRobustIterativeClosestPoint.

@mvieth mvieth added changelog: new feature Meta-information for changelog generation module: registration labels Dec 16, 2024
@mvieth
Member

mvieth commented Dec 16, 2024

Hello, thanks for the pull request. A few things:

  • Please make sure the formatting check passes. Formatting is done by clang-format-14 according to the rules in .clang-format. You can use the script .dev/format.sh, or run make format (at least on Linux; I am not sure whether this also works on Windows or macOS).
  • Some sort of test is necessary, to make sure the new class runs as expected. I think adding it in test/registration/test_registration_api.cpp would make sense. There are some tests for other transformation estimation methods you can use as inspiration.
  • It has been some time since I read the paper: in which situations is the new estimation method better than reference methods? Basically, we have to make sure that the new method has a clear advantage over the other methods that are already implemented in PCL. Otherwise it would not make sense to add another method.

Compared with TransformationEstimation, I added a parameter-setting function, because the adaptive setting of this parameter is very important for the results. If the user does not provide a setting for the parameter, a default setting is used.

I assume you mean sigma. Do you have an estimate how sensitive the method is to this parameter? Does the default setting work okay in most situations, or does the user have to fine-tune sigma to their data?

If possible, I hope to add a FastRobustIterativeClosestPoint class later, similar to IterativeClosestPoint, to perform adaptive parameter adjustment and pass it into the TransformationEstimationPointToPointRobust class.

Furthermore, for Anderson acceleration, it is not enough to just create an accelerated class of TransformationEstimation. The iterative process needs to be modified. If possible, these parts can also be organized in FastRobustIterativeClosestPoint.

I am not completely sure what you mean with this. Would you have to duplicate code from this pull request to FastRobustIterativeClosestPoint? Or would you use TransformationEstimationPointToPointRobust in FastRobustIterativeClosestPoint? Code duplication needs to be avoided.

@larshg
Contributor

larshg commented Dec 17, 2024

Please update the license as per https://pcl.readthedocs.io/projects/tutorials/en/latest/writing_new_classes.html#licenses and use the #pragma once instead of #ifdef xxx...

@yaoyx689
Author

yaoyx689 commented Dec 18, 2024

Thanks for your reply. I will modify them as you said.

  • It has been some time since I read the paper: in which situations is the new estimation method better than reference methods? Basically, we have to make sure that the new method has a clear advantage over the other methods that are already implemented in PCL. Otherwise it would not make sense to add another method.

In cases where the two point clouds being registered only partially overlap, or contain noise, our robust ICP algorithm achieves higher accuracy than traditional ICP algorithms. On one hand, this is due to using the Welsch function instead of the L2 norm as the objective function: it reduces the impact of outliers, similar in spirit to the distance threshold ICP uses to filter outlier points, but it achieves better results by giving each correspondence its own continuous weight instead of a binary 0/1 decision.

On the other hand, the Welsch function can provide a coarse-to-fine result by adjusting its parameter (sigma), leading to higher accuracy.

I assume you mean sigma. Do you have an estimate how sensitive the method is to this parameter? Does the default setting work okay in most situations, or does the user have to fine-tune sigma to their data?

By default, sigma is set based on the maximum distance between corresponding points, which has the same effect as the user setting setMaxCorrespondenceDistance in the IterativeClosestPoint class. In this case, the new method performs slightly better than ICP, and it exhibits the same level of sensitivity to the parameter as ICP.

I am not completely sure what you mean with this. Would you have to duplicate code from this pull request to FastRobustIterativeClosestPoint? Or would you use TransformationEstimationPointToPointRobust in FastRobustIterativeClosestPoint? Code duplication needs to be avoided.

In our method, an important part is that the parameter of the Welsch function is adjusted adaptively: during the iteration process, sigma gradually decreases as the number of iterations increases. This adjustment plays a significant role in improving accuracy. Therefore, if possible, I would like to introduce a new class called FastRobustIterativeClosestPoint (it uses TransformationEstimationPointToPointRobust), which implements this adaptive parameter adjustment based on the positions of the point clouds and requires no additional user settings. In our experiments, this significantly enhances the accuracy of the ICP algorithm.

@duanbotu123

Hello,

We have recently added a subclass FastRobustIterativeClosestPoint in fricp.hpp, which inherits from IterativeClosestPoint and implements our new method. The techniques used to improve speed and robustness are implemented respectively in anderson_acceleration.h and TransformationEstimationPointToPointRobust.h.

However, our PR is currently failing the automated checks because the server used for code verification ran out of disk space. Specifically, we are seeing the following error:

failed to register layer: write /usr/share/proj/CHENYX06_etrs.gsb: no space left on device

It seems that #6364 encountered a similar issue before. Could you please clean up the disk space on the server corresponding to the Ubuntu 24.04 GCC build environment so that our PR can proceed successfully?

Thank you very much for your help.

@mvieth
Member

mvieth commented Nov 22, 2025

Hi, thanks for adding the new class. I will take a closer look and test your changes as soon as I have time, but that might take a few weeks.

Member

@mvieth mvieth left a comment


Hello, I have finally found some time to test your code in detail. With several point clouds, FRICP was indeed able to find a more accurate alignment than other registration algorithms. So I think it would be great to have it in PCL, but first, here are a few review comments I would ask you to address. Thank you!

matrixLog(const Matrix4d& transform) const;

RobustFunction robust_function_;
bool use_anderson_ = true;
Member

I have tested several point clouds with and without Anderson acceleration enabled, and I have found that when it is enabled, the registration progress is often rather unpredictable and chaotic. Therefore I would rather make this option false by default and users can opt-in.

@duanbotu123

Thanks for the detailed review — I’ve addressed all actionable comments from your latest review.
Implemented updates:

  • Switched updateCorrespondences to use pcl::search::Search<pcl::PointXYZ>&.
  • Kept findKNearestMedian on KdTree and fixed the call site to pass the concrete tree_data object (this also resolves the test_registration_api compile error).
  • Made computeWeighted3DCentroid templated on PointT and updated the implementation signature accordingly.
  • Removed redundant registration/src/fricp.cpp and its CMake entry.
  • Replaced local median helpers with pcl::computeMedian.
  • Added documentation for the behavior-impact setters (including set_sigma semantics).
  • Set Anderson acceleration default to disabled.
  • Updated the file header to the short SPDX-style license text.

Co-authored-by: Markus Vieth <39675748+mvieth@users.noreply.github.com>
Contributor

Copilot AI left a comment


Pull request overview

This PR adds initial building blocks for the FRICP (“Fast and Robust ICP”) registration method to PCL’s registration module, including a robust point-to-point transformation estimator, an FRICP ICP variant with optional Anderson acceleration, and a new API-level test.

Changes:

  • Introduce TransformationEstimationPointToPointRobust with Welsch reweighting and configurable sigma.
  • Add FastRobustIterativeClosestPoint (FRICP) implementation with dynamic Welsch scaling and optional Anderson acceleration.
  • Extend registration API tests and wire new headers/sources into the registration CMake target.

Reviewed changes

Copilot reviewed 8 out of 8 changed files in this pull request and generated 13 comments.

Summary per file:

  • test/registration/test_registration_api.cpp — Adds a new test case exercising FastRobustIterativeClosestPoint, including noisy/outlier target data.
  • registration/src/transformation_estimation_point_to_point_robust.cpp — Adds a TU stub for the new robust point-to-point transformation estimation header.
  • registration/include/pcl/registration/transformation_estimation_point_to_point_robust.h — Declares the robust point-to-point transformation estimation API (Welsch weighting + sigma setter).
  • registration/include/pcl/registration/impl/transformation_estimation_point_to_point_robust.hpp — Implements weighted SVD alignment and helper routines for the robust estimator.
  • registration/include/pcl/registration/fricp.h — Declares the FastRobustIterativeClosestPoint API and configuration knobs (robust function, Anderson, dynamic Welsch).
  • registration/include/pcl/registration/impl/fricp.hpp — Implements the FRICP optimization loop, correspondence updates, dynamic Welsch schedule, and Anderson integration.
  • registration/include/pcl/registration/anderson_acceleration.h — Adds an Anderson acceleration helper used by FRICP.
  • registration/CMakeLists.txt — Registers the new headers/impl headers and the new source file in the registration build.


Comment on lines +246 to +262

// For each point in the cloud
// If the data is dense, we don't need to check for NaN
while (cloud_iterator.isValid()) {
  // Check if the point is invalid
  if (pcl::isFinite(*cloud_iterator)) {
    accumulator[0] += weights[cp] * cloud_iterator->x;
    accumulator[1] += weights[cp] * cloud_iterator->y;
    accumulator[2] += weights[cp] * cloud_iterator->z;
    ++cp;
  }
  ++cloud_iterator;
}

if (cp > 0) {
  centroid = accumulator;
  centroid[3] = 1;

Copilot AI Mar 25, 2026


computeWeighted3DCentroid indexes weights using cp, which is incremented only for finite points. If any point is NaN/Inf, this shifts the weight indexing and produces an incorrect centroid (and also fails to renormalize by the sum of weights actually used). Track the iterator position separately (increment a weight index every loop iteration), and normalize the accumulator by the sum of weights of valid points.

Suggested change

Scalar sum_weights = Scalar(0);
Eigen::Index weight_idx = 0;
// For each point in the cloud
// If the data is dense, we don't need to check for NaN
while (cloud_iterator.isValid()) {
  // Check if the point is invalid
  if (pcl::isFinite(*cloud_iterator)) {
    const Scalar w = weights[weight_idx];
    accumulator[0] += w * cloud_iterator->x;
    accumulator[1] += w * cloud_iterator->y;
    accumulator[2] += w * cloud_iterator->z;
    sum_weights += w;
    ++cp;
  }
  ++weight_idx;
  ++cloud_iterator;
}

if (cp > 0 && sum_weights != Scalar(0)) {
  accumulator[0] /= sum_weights;
  accumulator[1] /= sum_weights;
  accumulator[2] /= sum_weights;
  centroid = accumulator;
  centroid[3] = Scalar(1);

Comment on lines +152 to +162
Scalar sigma2;
if (sigma_ < 0)
  sigma2 = square_distances.maxCoeff() / 9.0;
else
  sigma2 = sigma_ * sigma_;

for (int i = 0; i < npts; i++) {
  weights[i] = std::exp(-square_distances[i] / (2.0 * sigma2));
}
weights = weights / weights.sum();


Copilot AI Mar 25, 2026


sigma2 can become zero or extremely small (e.g., when all squared distances are 0, or when set_sigma(0) is used), which will cause division-by-zero/NaNs in the weight computation and in weights / weights.sum(). Clamp sigma2 to a small positive epsilon and handle the case where weights.sum() is ~0 (e.g., fall back to uniform weights).

Comment on lines +142 to +145
const Eigen::VectorXd new_inner_prod = prev_dF_.col(column_index_).transpose() *
                                       prev_dF_.block(0, 0, dimension_, m_k);
normal_eq_matrix_.block(column_index_, 0, 1, m_k) = new_inner_prod.transpose();
normal_eq_matrix_.block(0, column_index_, m_k, 1) = new_inner_prod;

Copilot AI Mar 25, 2026


This looks like a compile-time type mismatch: prev_dF_.col(...).transpose() * prev_dF_.block(...) is a 1×m_k row vector, but it is being stored in an Eigen::VectorXd (m_k×1). This should be an Eigen::RowVectorXd (or explicitly transpose the product) so the subsequent assignments to normal_eq_matrix_ match dimensions.

Suggested change

const Eigen::RowVectorXd new_inner_prod = prev_dF_.col(column_index_).transpose() *
                                          prev_dF_.block(0, 0, dimension_, m_k);
normal_eq_matrix_.block(column_index_, 0, 1, m_k) = new_inner_prod;
normal_eq_matrix_.block(0, column_index_, m_k, 1) = new_inner_prod.transpose();

size_t N = cloud_src_demean.cols();
PCL_DEBUG("[pcl::registration::TransformationEstimationPointToPointRobust::"
          "getTransformationFromCorrelation] Loss: %.10e\n",
          (cloud_tgt_demean - R * cloud_src_demean).squaredNorm() / N);

Copilot AI Mar 25, 2026


The debug-only loss computation does not compile: cloud_*_demean produced by demeanPointCloud(..., Eigen::Matrix<...>&) has 4 rows, but R is 3x3, so R * cloud_src_demean is dimensionally invalid. Use only the top 3 rows (or store de-meaned clouds as 3xN matrices) when computing the loss.

Suggested change

          (cloud_tgt_demean.template topRows<3>() -
           R * cloud_src_demean.template topRows<3>()).squaredNorm() / N);

@mvieth
Member

mvieth commented Mar 25, 2026

Please address my latest review comments as well as Copilot's comments (unless you think one of Copilot's comments is wrong, in that case please explain why). Thank you!

@duanbotu123

OK, I will address the latest review comments as soon as possible.

Co-authored-by: Markus Vieth <39675748+mvieth@users.noreply.github.com>
@duanbotu123

OK, I think I have addressed all the comments.


Labels

changelog: new feature Meta-information for changelog generation module: registration


5 participants