Wednesday, January 22, 2014

Step 5, Part 1: Extracting Planes from Point Clouds

Now that we have reconstructed the scene from the point clouds taken with the Kinect, we need to obtain some crucial information from each point in the concatenated cloud. Some of the data needed from the points are the normals of the points and which cloud each point belongs to. To obtain this, we must break each cloud down into planes. In this case, a plane is a two-dimensional representation of a subset of a point cloud. The planes can be extracted from a point cloud using a combination of two built-in PCL classes, SAC Segmentation and Extract Indices: SAC Segmentation identifies the plane present in the point cloud based on the RANSAC parameters, and Extract Indices extracts the indices of that plane from the rest of the point cloud.
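A minimal sketch of how these two classes can be combined to peel planes off a cloud one at a time (the distance threshold and stopping size below are illustrative, not the exact values used in this project):

```cpp
#include <vector>
#include <pcl/point_types.h>
#include <pcl/ModelCoefficients.h>
#include <pcl/PointIndices.h>
#include <pcl/segmentation/sac_segmentation.h>
#include <pcl/filters/extract_indices.h>

typedef pcl::PointCloud<pcl::PointXYZ> Cloud;

// Repeatedly find the largest remaining plane with RANSAC and split it
// off from the cloud, until too few points are left to form a plane.
std::vector<Cloud::Ptr> extractPlanes(Cloud::Ptr cloud)
{
  std::vector<Cloud::Ptr> planes;

  pcl::SACSegmentation<pcl::PointXYZ> seg;
  seg.setModelType(pcl::SACMODEL_PLANE);
  seg.setMethodType(pcl::SAC_RANSAC);
  seg.setDistanceThreshold(0.01);  // RANSAC inlier threshold (illustrative)
  seg.setMaxIterations(1000);

  while (cloud->size() > 500)      // stopping size is illustrative
  {
    pcl::PointIndices::Ptr inliers(new pcl::PointIndices);
    pcl::ModelCoefficients::Ptr coeffs(new pcl::ModelCoefficients);
    seg.setInputCloud(cloud);
    seg.segment(*inliers, *coeffs);
    if (inliers->indices.empty())
      break;  // no plane found in the remaining points

    pcl::ExtractIndices<pcl::PointXYZ> extract;
    extract.setInputCloud(cloud);
    extract.setIndices(inliers);

    Cloud::Ptr plane(new Cloud);   // the plane's points...
    extract.setNegative(false);
    extract.filter(*plane);
    planes.push_back(plane);

    Cloud::Ptr rest(new Cloud);    // ...and everything else
    extract.setNegative(true);
    extract.filter(*rest);
    cloud = rest;
  }
  return planes;
}
```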

To hold the necessary data after the extraction of a plane, I created my own data structure called Plane. This data structure holds a vector of the points in each plane, the ids of the planes, and the normals of the planes. The vectors correspond to each other by index.
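As a rough sketch, the structure could look something like the following (the field names here are my illustration, not necessarily the exact ones in the project):

```cpp
#include <vector>
#include <Eigen/Core>
#include <pcl/point_types.h>
#include <pcl/point_cloud.h>

// Parallel vectors: element i of each member describes the same plane.
struct Plane
{
  std::vector<pcl::PointCloud<pcl::PointXYZ>::Ptr> points;  // points in each plane
  std::vector<int> ids;                                     // id of each plane
  std::vector<Eigen::Vector3f> normals;                     // normal of each plane
};
```

Here is an example of the clouds before and after the extraction of the planes: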

Images of the combined cloud and all of the planes in that cloud

Based on our parameters for the extraction of the planes, there were 17 total planes extracted from the six point clouds. Because the planes were extracted from one point cloud at a time, some of them may be the same plane in the combined cloud. To figure this out, we need to extract more data from the planes using a voxel grid.

Tuesday, January 14, 2014

Step 4, Part 3: Reconstructing an Entire Scene using Point Cloud Registration

Now it is time to put the process described in the previous two posts to the test. I used the Microsoft Kinect to capture point clouds from -30 degrees all the way up to 30 degrees in 10-degree increments, resulting in seven point clouds at different angles. The process is a little different now: working up from -30 degrees to 30 degrees and applying ICP to each new cloud in turn becomes increasingly inefficient, because the number of points grows with every cloud that is added. Instead, the point clouds from -30 to 0 degrees are combined together and the point clouds from 10 to 30 degrees are combined together, and these two halves are combined at the end to produce one single point cloud.
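A rough sketch of this two-half strategy (the helper registerInto is my own illustration, and the ICP parameters are omitted for brevity):

```cpp
#include <vector>
#include <pcl/point_types.h>
#include <pcl/registration/icp.h>

typedef pcl::PointCloud<pcl::PointXYZ> Cloud;

// Align `source` to `target` with ICP and append the aligned points to
// `target`. A simplified stand-in for the per-pair step described above.
void registerInto(Cloud::Ptr source, Cloud::Ptr target)
{
  pcl::IterativeClosestPoint<pcl::PointXYZ, pcl::PointXYZ> icp;
  icp.setInputSource(source);
  icp.setInputTarget(target);
  Cloud aligned;
  icp.align(aligned);
  *target += aligned;  // concatenate the aligned points into the target
}

// clouds[0..6] hold the scans at -30, -20, ..., +30 degrees, already
// rotated into a common frame. Build each half, then merge the halves.
Cloud::Ptr buildScene(std::vector<Cloud::Ptr>& clouds)
{
  Cloud::Ptr bottom = clouds[0];       // -30 degrees
  for (int i = 1; i <= 3; ++i)         // -20, -10, 0 degrees
    registerInto(clouds[i], bottom);

  Cloud::Ptr top = clouds[4];          // +10 degrees
  for (int i = 5; i <= 6; ++i)         // +20, +30 degrees
    registerInto(clouds[i], top);

  registerInto(top, bottom);           // final merge of the two halves
  return bottom;
}
```

Here are some pictures of the results of the process: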

Before applying rotation

After applying rotation (1,664,196 points)

After applying ICP to top and bottom halves of scene (Top - 678,301 points / Bottom - 985,895 points / 1,664,196 points total)
After applying final ICP and combining all of the point clouds (1,664,196 points)


The final resulting point cloud has around 1.6 million points, so applying the voxel grid is very useful for this cloud. Now that we know that point cloud registration is efficient and, most importantly, works, we can move on to using multiple Kinects with the same process described in the previous posts.

Tuesday, January 7, 2014

Step 4, Part 2: Registration of Point Clouds at Different Angles

Now that the point clouds have been rotated into a relative coordinate system, we can perform the Iterative Closest Point (ICP) algorithm efficiently and accurately. As I stated in the Introduction of Registration post, the Iterative Closest Point algorithm is a built-in class in the Point Cloud Library that minimizes the distance between two clouds of points.

Once the ICP is performed, the algorithm produces a transformation matrix that encodes a translation vector and a rotation matrix, from which an axis and angle of rotation can be derived. For the ICP algorithm to work efficiently, some parameters have to be adjusted. Here are some of the parameters (a configuration sketch follows the list):
- Input Cloud - the point cloud that is being transformed
- Input Target Cloud - the point cloud that the input cloud is being aligned to
- Max Correspondence Distance - any corresponding points in the point clouds further apart than this value will be ignored by the algorithm (0.1 is the value for my project)
- Max Iterations - once the algorithm has run this many times, it will terminate (1000 is the value in my project)
- Transformation Epsilon and Euclidean Fitness Epsilon - convergence tolerances; the algorithm terminates once the change in the transformation, or in the fitness between the two clouds, falls below these values between iterations (1E-10 and 1E-2 are the values for my project, respectively)
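Here is a minimal sketch of how these parameters map onto PCL's IterativeClosestPoint class, using the values from my project (the wrapper function itself is just for illustration):

```cpp
#include <pcl/point_types.h>
#include <pcl/registration/icp.h>

// Configure PCL's ICP with the parameter values listed above, align
// `input` to `target`, and return the resulting transformation matrix.
Eigen::Matrix4f alignClouds(pcl::PointCloud<pcl::PointXYZ>::Ptr input,
                            pcl::PointCloud<pcl::PointXYZ>::Ptr target,
                            pcl::PointCloud<pcl::PointXYZ>& aligned)
{
  pcl::IterativeClosestPoint<pcl::PointXYZ, pcl::PointXYZ> icp;
  icp.setInputSource(input);               // cloud being transformed
  icp.setInputTarget(target);              // cloud it is aligned to
  icp.setMaxCorrespondenceDistance(0.1);   // ignore pairs further apart
  icp.setMaximumIterations(1000);          // hard cap on iterations
  icp.setTransformationEpsilon(1e-10);     // transform-change tolerance
  icp.setEuclideanFitnessEpsilon(1e-2);    // fitness-change tolerance

  icp.align(aligned);                      // run the algorithm
  return icp.getFinalTransformation();     // 4x4 transformation matrix
}
```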

Here are some pictures of the process: starting from two point clouds taken at different angles, applying a rotation matrix, performing the Iterative Closest Point algorithm, and concatenating the point clouds to produce one combined cloud. Note that the tests were applied to point clouds taken at 0, 10, 20, and 30 degrees:

***

Before applying rotation

After applying rotation (924,903 points)
After applying ICP and voxel grid (44,209 points)

Note: the shadows of the different point clouds start to mix during ICP and concatenation. This is not an error, but an evenly lit room would make for a more accurate demonstration.

This process produces a cloud of about 900,000 points, which is inefficient for further processing. So in this case, it is important to apply a filter to the final point cloud to down-sample it from around 900,000 points to about 45,000 points, using another built-in PCL class called the Voxel Grid filter (sketched below). The next step is to obtain point clouds over the entire range of the Microsoft Kinect, from -30 to 30 degrees in 10-degree increments, apply this process to the point clouds to produce one combined cloud, and apply a Voxel Grid to that cloud to decrease the number of points.
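A minimal sketch of the down-sampling step with PCL's VoxelGrid filter (the leaf size here is illustrative; the actual value depends on how aggressively the cloud needs to be thinned):

```cpp
#include <pcl/point_types.h>
#include <pcl/filters/voxel_grid.h>

// Down-sample a cloud with PCL's VoxelGrid filter: all points falling
// into the same leaf (voxel) are replaced by their centroid.
void downsample(pcl::PointCloud<pcl::PointXYZ>::Ptr input,
                pcl::PointCloud<pcl::PointXYZ>::Ptr output)
{
  pcl::VoxelGrid<pcl::PointXYZ> grid;
  grid.setInputCloud(input);
  grid.setLeafSize(0.01f, 0.01f, 0.01f);  // 1 cm leaves (illustrative value)
  grid.filter(*output);
}
```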