Thursday, February 13, 2014

Step 5, Part 2: Saving Points in a Voxel Using Voxel Grid Filter

To figure out which points belong to each plane, we need to break the combined point cloud down into numerous simplified, yet detailed, structures. The best way to do this is to apply a voxel grid to the combined point cloud. Applying a voxel grid creates a set of cubes of a certain size called voxels, so we can determine which points in each voxel belong to which plane through a simple comparison of coordinates.

To extract the data from the voxels after applying the voxel grid to the point cloud, we needed to use a combination of methods already offered by the PCL VoxelGrid class. Here are the steps to getting the correct points:

1. Apply the voxel grid to the cloud
2. Get the grid coordinates of each point in the cloud (this returns the coordinates of the centroid for that point)
3. Using the centroid's coordinates, get the index of the centroid in the filtered cloud
4. Using the centroid, determine which points are in each voxel
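The steps above can be sketched with PCL's VoxelGrid class. Note that this is a minimal illustration, not the project's actual code: the 0.01 m leaf size is an assumption, and setSaveLeafLayout must be enabled for the grid lookups to work.

```cpp
#include <pcl/point_types.h>
#include <pcl/filters/voxel_grid.h>

// Sketch of the four steps above. The 0.01 m leaf size is an
// assumed value for illustration, not the project's actual setting.
void assignPointsToVoxels(pcl::PointCloud<pcl::PointXYZ>::Ptr cloud)
{
    pcl::VoxelGrid<pcl::PointXYZ> grid;
    grid.setInputCloud(cloud);
    grid.setLeafSize(0.01f, 0.01f, 0.01f);
    grid.setSaveLeafLayout(true);          // required for the lookups below

    pcl::PointCloud<pcl::PointXYZ> filtered;
    grid.filter(filtered);                 // step 1: apply the voxel grid

    for (size_t i = 0; i < cloud->points.size(); ++i)
    {
        const pcl::PointXYZ &p = cloud->points[i];

        // step 2: grid (i,j,k) coordinates of the voxel containing p
        Eigen::Vector3i coords = grid.getGridCoordinates(p.x, p.y, p.z);

        // step 3: index of that voxel's centroid in the filtered cloud
        int idx = grid.getCentroidIndexAt(coords);
        if (idx == -1) continue;           // point fell outside the grid

        // step 4: filtered.points[idx] is the centroid of p's voxel, so
        // all points that map to the same idx belong to the same voxel
    }
}
```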

To save the information created by the voxel grid, I created another data structure called Voxel, which saves the centroid of that particular voxel, the points located in that voxel, and the corresponding plane of each point in that voxel.
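A sketch of what that Voxel structure might look like (the field names are my own guesses, not the project's actual code; Point3 stands in for PCL's point type):

```cpp
#include <vector>

// A simple 3-D point (stand-in for pcl::PointXYZ).
struct Point3 { float x, y, z; };

// Hypothetical sketch of the Voxel structure described above:
// the voxel's centroid, the points inside it, and the plane id of
// each of those points (parallel vectors, matched by index).
struct Voxel
{
    Point3 centroid;
    std::vector<Point3> points;
    std::vector<int> plane_ids;   // plane_ids[i] is the plane of points[i]
};
```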

*** Update: I was able to accomplish this, but it did not turn out as I hoped.
Here's the situation:

Files:

  • Registration.h - header file containing the voxel function; it also computes the normals for each plane
  • Segmentation - header file where planes are extracted from the point clouds
  • trans.cpp - main file where the point clouds are loaded from files, divided into planes, and the points in the planes are assigned to voxels

Commands:
cmake . && make && cd bin && ./transform

Output:

PointCloud (no filtering): 242540 data points.
PointCloud representing the planar component: 149589 data points.
Planar id: 1
PointCloud representing the planar component: 56901 data points.
Planar id: 2
PointCloud representing the planar component: 20883 data points.
Planar id: 3
PointCloud representing the planar component: 7365 data points.
Planar id: 4
PointCloud representing the planar component: 4875 data points.
Planar id: 5
PointCloud representing the planar component: 1755 data points.
Planar id: 6
PointCloud (no filtering): 241031 data points.
PointCloud representing the planar component: 112620 data points.
Planar id: 7
PointCloud representing the planar component: 93356 data points.
Planar id: 8
PointCloud representing the planar component: 21105 data points.
Planar id: 9
PointCloud representing the planar component: 7954 data points.
Planar id: 10
PointCloud representing the planar component: 3748 data points.
Planar id: 11
PointCloud (no filtering): 240900 data points.
PointCloud representing the planar component: 171566 data points.
Planar id: 12
PointCloud representing the planar component: 34734 data points.
Planar id: 13
PointCloud representing the planar component: 19348 data points.
Planar id: 14
PointCloud representing the planar component: 9069 data points.
Planar id: 15
PointCloud representing the planar component: 3957 data points.
Planar id: 16
PointCloud (no filtering): 242560 data points.
PointCloud representing the planar component: 213759 data points.
Planar id: 17
PointCloud representing the planar component: 17257 data points.
Planar id: 18
PointCloud representing the planar component: 7237 data points.
Planar id: 19
PointCloud representing the planar component: 3570 data points.
Planar id: 20
PointCloud (no filtering): 238562 data points.
PointCloud representing the planar component: 192368 data points.
Planar id: 21
PointCloud representing the planar component: 45577 data points.
Planar id: 22
PointCloud (no filtering): 216241 data points.
PointCloud representing the planar component: 147356 data points.
Planar id: 23
PointCloud representing the planar component: 68600 data points.
Planar id: 24
PointCloud (no filtering): 219348 data points.
PointCloud representing the planar component: 143190 data points.
Planar id: 25
PointCloud representing the planar component: 76039 data points.
Planar id: 26
Computing normals of planes...
1
2
3
4
5
6
7
8
9
10
11
12
13
14
15
16
17
18
19
20
21
22
23
24
25
26
...Done.
Combining Point Clouds...
...Done.
Cloud: 1641182
Creating Voxels...
...Done.
Assigning Points to Voxels...
...Done.
Voxel Size:550008 voxels.
Voxel Size:100359 voxels.
224
1 : 1 7 12 
2 : 2 3 4 5 6 8 12 17 21 
3 : 2 3 4 5 6 7 8 12 17 21 
4 : 2 3 4 5 6 8 12 17 21 
5 : 2 3 4 5 6 8 12 17 21 
6 : 2 3 4 5 6 12 17 21 
7 : 1 3 7 9 10 11 12 13 17 21 23 
8 : 2 3 4 5 8 17 
9 : 7 9 10 11 12 13 17 21 23 
10 : 7 9 10 11 12 13 17 21 23 
11 : 7 9 10 11 12 13 17 21 23 
12 : 1 2 3 4 5 6 7 9 10 11 12 13 14 15 17 21 23 
13 : 7 9 10 11 12 13 14 15 17 21 23 
14 : 12 13 14 17 21 23 
15 : 12 13 15 17 21 23 
16 : 16 
17 : 2 3 4 5 6 7 8 9 10 11 12 13 14 15 17 18 19 21 23 25 
18 : 17 18 19 21 23 25 
19 : 17 18 19 21 23 25 
20 : 20 
21 : 2 3 4 5 6 7 9 10 11 12 13 14 15 17 18 19 21 22 23 25 
22 : 21 22 23 24 25 
23 : 7 9 10 11 12 13 14 15 17 18 19 21 22 23 24 25 
24 : 22 23 24 25 26 
25 : 17 18 19 21 22 23 24 25 26 
26 : 24 25 26 
Elapsed time: 12456.968750 milliseconds

Problem:

  • The results are inconclusive, since the resulting pictures and table showed that ALL of the planes are in the same plane, which makes no sense.
***

Wednesday, January 22, 2014

Step 5, Part 1: Extracting Planes from Point Clouds

Now that we are able to reconstruct the scene using the point clouds taken from the Kinect, we need to obtain some crucial information from each point now that the clouds are concatenated. Some of the data needed from the points are the normals of the points and which cloud each point belongs to. To obtain this, we must break down each cloud into planes; in this case, a plane is a two-dimensional representation of a subset of a point cloud. Planes can be extracted from a point cloud using a combination of built-in PCL classes called SAC Segmentation and Extract Indices: SAC Segmentation identifies the plane present in the point cloud based on the RANSAC parameters, and Extract Indices extracts the indices of the plane from the rest of the point cloud.
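That extraction loop can be sketched as follows with PCL's SACSegmentation and ExtractIndices. This is only an illustration: the 0.01 m distance threshold and the 10% stopping fraction are assumptions, not the project's actual parameters.

```cpp
#include <pcl/point_types.h>
#include <pcl/segmentation/sac_segmentation.h>
#include <pcl/filters/extract_indices.h>
#include <pcl/ModelCoefficients.h>

// Extract planes one at a time until too few points remain.
// The distance threshold and stopping fraction are assumed values.
void extractPlanes(pcl::PointCloud<pcl::PointXYZ>::Ptr cloud)
{
    pcl::SACSegmentation<pcl::PointXYZ> seg;
    seg.setOptimizeCoefficients(true);
    seg.setModelType(pcl::SACMODEL_PLANE);   // fit a plane model
    seg.setMethodType(pcl::SAC_RANSAC);      // with RANSAC
    seg.setDistanceThreshold(0.01);

    pcl::ExtractIndices<pcl::PointXYZ> extract;
    const size_t stop = cloud->points.size() / 10;
    int plane_id = 0;

    while (cloud->points.size() > stop)
    {
        pcl::ModelCoefficients::Ptr coeffs(new pcl::ModelCoefficients);
        pcl::PointIndices::Ptr inliers(new pcl::PointIndices);

        seg.setInputCloud(cloud);
        seg.segment(*inliers, *coeffs);      // find the largest plane
        if (inliers->indices.empty()) break;

        // Copy the plane's points out...
        pcl::PointCloud<pcl::PointXYZ>::Ptr plane(new pcl::PointCloud<pcl::PointXYZ>);
        extract.setInputCloud(cloud);
        extract.setIndices(inliers);
        extract.setNegative(false);
        extract.filter(*plane);
        ++plane_id;                          // the "Planar id" counter

        // ...then remove them from the cloud and repeat.
        pcl::PointCloud<pcl::PointXYZ>::Ptr rest(new pcl::PointCloud<pcl::PointXYZ>);
        extract.setNegative(true);
        extract.filter(*rest);
        cloud.swap(rest);
    }
}
```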

To hold the necessary data after the extraction of a plane, I created my own data structure called Plane. This data structure holds a vector of points for each plane, the ids of the planes, and the normals of the planes. The vectors correspond to each other based on their indices. Here is an example of the clouds before and after the extraction of the planes:
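A sketch of that Plane structure with its index-matched vectors (field names are my own guesses, and Point3 stands in for PCL's point type):

```cpp
#include <vector>

// A simple 3-D point/vector (stand-in for PCL's point types).
struct Point3 { float x, y, z; };

// Hypothetical sketch of the Plane data structure described above:
// parallel vectors, so points[i], ids[i], and normals[i] all refer
// to the same extracted plane.
struct Plane
{
    std::vector<std::vector<Point3> > points; // points[i] = points of plane i
    std::vector<int> ids;                     // ids[i] = id of plane i
    std::vector<Point3> normals;              // normals[i] = normal of plane i
};
```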




Images of the combined clouds and all of the planes in that cloud




Based on our parameters for the extraction of the planes, there were 17 total planes extracted from the six point clouds. Because the planes were extracted one point cloud at a time, some of them may be the same plane in the combined cloud. To figure this out, we need to extract more data from the planes using a voxel grid.

Tuesday, January 14, 2014

Step 4, Part 3: Reconstructing an Entire Scene using Point Cloud Registration

Now it is time to put the process described in the previous two posts to the test. I used the Microsoft Kinect to capture point clouds from -30 degrees all the way up to 30 degrees in 10-degree increments, resulting in seven point clouds at different angles. The process is a little different now: registering all of the point clouds in sequence from -30 degrees to 30 degrees and applying ICP at each step would become increasingly inefficient, since the number of points grows with each new point cloud being added. Instead, the point clouds from -30 to 0 degrees are combined together, the point clouds from 10 to 30 degrees are combined together, and these two point clouds are combined at the end to produce one single point cloud. Here are some pictures of the results of the process:

Before applying rotation

After applying rotation (1,664,196 points)

After applying ICP to top and bottom halves of scene (Top -  678,301 points / Bottom - 985,895 points / 1,664,196 points Total)
After applying final ICP and combining all of the point clouds (1,664,196 points)


The final resulting point cloud has around 1.6 million points, so applying the voxel grid is very useful for this point cloud. Now that we know that point cloud registration is efficient and, most importantly, works, we can move on to using multiple Kinects with the same process described in the previous posts.

Tuesday, January 7, 2014

Step 4, Part 2: Registration of Point Clouds at Different Angles

Now that the point clouds have been rotated to a relative coordinate system, we can perform the Iterative Closest Point algorithm efficiently and accurately. As I stated in the Introduction of Registration post, the Iterative Closest Point algorithm is a built-in class in the Point Cloud Library used to minimize the distance between two clouds of points.

Once the ICP is performed, the algorithm produces a transformation matrix, which contains a translation vector, a rotation matrix, an axis-of-rotation vector, and an angle of rotation. For the ICP algorithm to work efficiently, some parameters have to be adjusted. Here are some of the parameters:
- Input Cloud - the point cloud that is being transformed
- Input Target Cloud - the point cloud that the input cloud is being aligned to
- Max Correspondence Distance - any corresponding points in the point clouds further apart than this value will be ignored by the algorithm (0.1 is the value for my project)
- Max Iterations - once the algorithm has run this many times, it will terminate (1000 is the value in my project)
- Transformation Epsilon and Euclidean Fitness Epsilon - tolerance parameters for estimating the minimum distance between points in two different point clouds; the algorithm will terminate if the computed epsilons are lower than these values (1E-10 and 1E-2 are the values for my project, respectively)
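The parameter list above corresponds to a setup along these lines with PCL's IterativeClosestPoint class. Only the parameter values come from this post; the function itself is an illustrative sketch.

```cpp
#include <pcl/point_types.h>
#include <pcl/registration/icp.h>

// Align `source` to `target` using the parameters listed above.
pcl::PointCloud<pcl::PointXYZ> alignClouds(
    pcl::PointCloud<pcl::PointXYZ>::Ptr source,
    pcl::PointCloud<pcl::PointXYZ>::Ptr target)
{
    pcl::IterativeClosestPoint<pcl::PointXYZ, pcl::PointXYZ> icp;
    icp.setInputSource(source);              // cloud being transformed
    icp.setInputTarget(target);              // cloud it is matched against
    icp.setMaxCorrespondenceDistance(0.1);   // ignore pairs further apart
    icp.setMaximumIterations(1000);          // hard iteration cap
    icp.setTransformationEpsilon(1e-10);     // convergence tolerances
    icp.setEuclideanFitnessEpsilon(1e-2);

    pcl::PointCloud<pcl::PointXYZ> aligned;
    icp.align(aligned);                      // run ICP
    // icp.getFinalTransformation() holds the resulting 4x4 matrix
    return aligned;
}
```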

Here are some pictures of the process, from two point clouds at different angles, to applying a rotation matrix, performing the Iterative Closest Point algorithm, and concatenating the point clouds to produce one point cloud taken at different angles. Note that the tests were applied to point clouds taken at 0, 10, 20, and 30 degrees:

***

Before applying rotation

After applying rotation (924,903 points)
After applying ICP and voxel grid (44,209 points)

Note: the shadows of different point clouds start to mix during ICP and concatenation. This is not an error, but an evenly lit room would make for a more accurate demonstration.

This process produces a cloud of about 900,000 points, making it inefficient for further processing. So in this case, it is important to apply a filter to the final point cloud to down-sample it from around 900,000 points to about 45,000 points, using another built-in PCL class called the Voxel Grid filter. The next step is to obtain point clouds over the entire range of the Microsoft Kinect, from -30 to 30 degrees in 10-degree increments, apply this process to the point clouds to produce one combined point cloud, and apply a voxel grid to decrease the number of points in the cloud.
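The down-sampling step can be sketched as follows. The 0.05 m leaf size here is a guess for illustration; the leaf size actually used in the project (which produced the roughly 20x reduction) is not stated in the post.

```cpp
#include <pcl/point_types.h>
#include <pcl/filters/voxel_grid.h>

// Down-sample a cloud with the VoxelGrid filter: each occupied voxel
// is replaced by the centroid of the points inside it.
// The 0.05 m leaf size is an assumed value, not the project's setting.
pcl::PointCloud<pcl::PointXYZ>::Ptr downsample(
    pcl::PointCloud<pcl::PointXYZ>::Ptr cloud)
{
    pcl::VoxelGrid<pcl::PointXYZ> grid;
    grid.setInputCloud(cloud);
    grid.setLeafSize(0.05f, 0.05f, 0.05f);

    pcl::PointCloud<pcl::PointXYZ>::Ptr filtered(new pcl::PointCloud<pcl::PointXYZ>);
    grid.filter(*filtered);
    return filtered;
}
```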