Monday, February 18, 2019

Processing Pix4D Imagery with GCPs

Part 1 (Introduction):

Ground control points (GCPs) are important in the processing of UAS data. GCPs allow for better orthorectification because they help align the data on the coordinate plane. Ground control points are points on the ground with a known geographic location. A surveyed point can be used either as a GCP or as a checkpoint: a GCP is used by the rectification software to adjust the imagery, whereas a checkpoint is only used to report an error measurement at that location. This lets the user see how accurately the data is being displayed or processed.
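As a rough illustration of that error measurement (this is my own sketch in Python, not Pix4D's internal code, and the point coordinates are made up), the offset between a point's surveyed position and the position the software computed for it can be broken into horizontal, vertical, and 3D components:

import math

def checkpoint_error(known, computed):
    """Return (horizontal, vertical, 3D) error in metres between a surveyed
    point and the position the photogrammetry software computed for it."""
    dx = computed[0] - known[0]
    dy = computed[1] - known[1]
    dz = computed[2] - known[2]
    horizontal = math.hypot(dx, dy)
    return horizontal, abs(dz), math.sqrt(dx**2 + dy**2 + dz**2)

# Hypothetical checkpoint: surveyed (X, Y, Z) vs. position computed from the imagery.
known_xyz = (512345.10, 4955012.30, 251.80)
computed_xyz = (512345.14, 4955012.27, 251.92)
h_err, v_err, err_3d = checkpoint_error(known_xyz, computed_xyz)
print(f"horizontal {h_err:.3f} m, vertical {v_err:.3f} m, 3D {err_3d:.3f} m")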

Part 2 (Methods):

Building on the previous lab, where we processed a set of data without the use of GCPs, this lab incorporated GCPs into the processing. We first did the initial processing of the data without the GCPs; once that was complete, we added the GCPs.

To add the GCPs, we first told Pix4D which file contained the GCPs. Note that the points had already been cleaned up, so the file contained only the points that were to be used. Once this data had been imported, we instructed Pix4D as to which field contained the X value, which the Y value, and which the elevation value. Figure 18 below shows the GCP Manager in Pix4D; as you can see, it has detected each point and displayed its coordinate data. A minimal sketch of that field-mapping step follows the figure.
Figure 18: GCP Manager
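Telling the software which field holds which value is essentially a column-mapping step. Below is a minimal sketch of that idea in Python; the file name and the column headers (label, easting, northing, elevation) are assumptions for illustration, not the actual format of the GCP file we used.

import csv

# Hypothetical GCP file: one row per point with a label, easting (X),
# northing (Y), and elevation (Z). The column names are assumptions.
gcps = {}
with open("gcps_utm16.csv", newline="") as f:
    for row in csv.DictReader(f):
        gcps[row["label"]] = (
            float(row["easting"]),    # X value
            float(row["northing"]),   # Y value
            float(row["elevation"]),  # Z value
        )

for label, (x, y, z) in gcps.items():
    print(f"{label}: X={x:.2f}  Y={y:.2f}  Z={z:.2f}")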
After the GCPs were added, we had to manually adjust the data to match the GCPs, as the elevation data recorded by the UAS was not entirely accurate in the vertical direction. Figure 19 shows the sidebar where each GCP was adjusted and Pix4D was told where on each image the control point was actually located.
Once this was complete, we told Pix4D to reoptimize, and it adjusted the data to incorporate the GCPs.
Figure 19: GCP Adjustment
Figure 20 below shows the difference between where the GCP was initially placed and where it was actually adjusted to. The distance between these two points is the same kind of error measurement mentioned for checkpoints above, except here it is GCP error.

Figure 20: GCP Difference
Once these steps were complete, we exported the data and created a few maps with the data. 

Part 3 (Discussion):

The first map below is the orthomosaic with the GCPs overlaid on top of it. One thing I noticed, though it may be hard to see at this scale, is how close each GCP is to its respective chevron on the map. This tells me that Pix4D was able to adjust the data to the GCPs fairly accurately, which I will examine further using the quality report.

Figure 21: Ortho Map with GCPs

Figure 22: DSM Map using GCPs
Attached below is the quality report for this data. In the quality check section on the first page, the georeferencing entry shows a mean RMS error of 0.045 m, meaning that in 3D space the average error of the points was 4.5 centimeters. At the bottom of the fifth page, you can see the error of each individual point. The highest error displayed is point 102 on the Z-axis, which shows an error of 0.142 m, or 14.2 centimeters. That is roughly 6 inches of error in the vertical direction, which does not seem like very much for this dataset and this large an imaging area.
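To make those quality-check numbers a little more concrete, here is one plausible way per-point errors get summarized into per-axis RMS values and a single mean figure. The point errors below are made up, and I am not claiming this is Pix4D's exact formula, only the general idea of an RMS summary.

import math

# Hypothetical per-point errors (metres) in X, Y, and Z, of the kind listed
# near the end of the quality report. Point names and values are invented.
errors = {
    "101": (0.021, -0.015, 0.034),
    "102": (-0.008, 0.027, 0.142),
    "103": (0.013, -0.019, -0.041),
}

def rms(values):
    """Root-mean-square of a sequence of error values."""
    return math.sqrt(sum(v * v for v in values) / len(values))

rms_x = rms([e[0] for e in errors.values()])
rms_y = rms([e[1] for e in errors.values()])
rms_z = rms([e[2] for e in errors.values()])
print(f"RMS X {rms_x:.3f} m, RMS Y {rms_y:.3f} m, RMS Z {rms_z:.3f} m")
print(f"mean RMS {(rms_x + rms_y + rms_z) / 3:.3f} m")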

Part 4 (Conclusion):

It is clear that GCPs can greatly improve the accuracy of the data, and one place we see this is with the Z-axis and vertical elevation. Without ground control, the data would have been processed as if it sat, on average, 26 meters lower than it actually was, due to the elevation data from the UAS. With ground control, that data was corrected to the actual elevation, and we have much more accurate data because of it. Not only that, but on average the data was also 6.94 meters off in the horizontal plane (using the Pythagorean theorem, the X-direction and Y-direction errors were combined into an overall horizontal error, as sketched below), so we can see the overall benefit of GCPs in improving the accuracy of our data.
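The horizontal-error calculation mentioned above works like the sketch below. The X and Y components shown are hypothetical, chosen only so that the result lands near the 6.94 m figure above.

import math

# Combine the X-direction and Y-direction errors with the Pythagorean theorem.
# The component values below are assumptions for illustration.
error_x = 4.91   # metres (hypothetical)
error_y = 4.90   # metres (hypothetical)
horizontal_error = math.hypot(error_x, error_y)
print(f"overall horizontal error: {horizontal_error:.2f} m")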

Tuesday, February 12, 2019

Construction of a point cloud data set, true orthomosaic, and digital surface model using Pix4D software

Part 1 (Introduction):

What is Pix4D?
Pix4D, put simply, is computer software that can find tie points, or common points, between many images.

What products does it generate?
Pix4D can create orthomosaics, digital surface models, and 3-dimensional models such as point clouds or meshes.

Part 2 (Methods):

What is the overlap needed for Pix4D to process imagery?
Pix4D recommends at least 75% overlap in the flight direction and at least 60% overlap from side to side.

What if the user is flying over sand/snow, or uniform fields?
Pix4D recommends using at least 85% overlap in flight direction and at least 70% side overlap due to uniformity.
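For a sense of what these percentages mean on the ground, here is a small sketch (not a Pix4D tool) of how overlap translates into photo spacing and flight-line spacing. The altitude, focal length, and sensor dimensions are assumptions for illustration only.

# Ground footprint of a single nadir image, and the spacing needed for a
# given overlap fraction. All values below are illustrative assumptions.
ALTITUDE_M = 70.0      # flying height above ground
FOCAL_MM = 8.8         # assumed focal length
SENSOR_W_MM = 13.2     # assumed sensor width (across-track)
SENSOR_H_MM = 8.8      # assumed sensor height (along-track)

footprint_along = ALTITUDE_M * SENSOR_H_MM / FOCAL_MM
footprint_across = ALTITUDE_M * SENSOR_W_MM / FOCAL_MM

def spacing(footprint_m, overlap):
    """Distance between exposures (or flight lines) for a given overlap fraction."""
    return footprint_m * (1.0 - overlap)

print(f"photo spacing at 75% frontlap: {spacing(footprint_along, 0.75):.1f} m")
print(f"line spacing  at 60% sidelap:  {spacing(footprint_across, 0.60):.1f} m")
print(f"photo spacing at 85% frontlap: {spacing(footprint_along, 0.85):.1f} m")
print(f"line spacing  at 70% sidelap:  {spacing(footprint_across, 0.70):.1f} m")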

What is Rapid Check?
Rapid Check allows the software to create tie points quickly for fast results, as opposed to the higher quality results you would get through normal processing, which takes more time.

Can Pix4D process multiple flights? What does the pilot need to maintain if so?
Pix4D can process multiple flights, although the pilot needs to maintain enough overlap between the two flights to ensure there will be enough tie points.

Can Pix4D process oblique images? What type of data do you need if so?
For oblique images, you really should have ground control points or manual tie points between each set of images, as well as enough overlap between the sets of images.

What is the difference between a global and linear rolling shutter?
A global shutter captures the entire image at one instant, whereas a linear rolling shutter scans across the frame to capture the image.

Are GCPs necessary for Pix4D? When are they highly recommended?
GCPs are not necessary for Pix4D. They are highly recommended when combining nadir images with aerial oblique and terrestrial images. GCPs are also highly recommended when capturing a tunnel system where multiple tracks of images are not possible.

What is the quality report?
The quality report gives information as to what the software did. For example, it would show you how many images in the set were used, the average number of tie points per image, previews of the results, details about the block adjustment, and various other processing details. Attached below is the quality report associated with this dataset for use as an example.
Note: viewing this document will open the pdf, located on Google Drive, in a separate tab.
Quality Report.pdf

Methods:
In this project, we used Pix4D to process a set of images taken over someone's residence. Ground control points were not used in this project. We started by creating a project within Pix4D and then loading our images into the software. After confirming the camera settings, Pix4D began processing our data. The total time for the initial processing was 31 minutes and 11 seconds, and Pix4D used 66 of the 67 images.

After the initial processing, we confirmed the data was looking like it should, and began the orthomosaic generation as well as the digital surface model and the 3D models. Upon completion, we had an orthomosaic, digital surface model, as well as 3D files to view and work with.

Part 3 (Results and Discussion):

Below is a table that shows the processing time for each segment. Each time is given as minutes:seconds. Total processing time was approximately 65 minutes. This data is also located in the quality report attached above.

Orthomosaic 15:06
Digital Surface Model 0:43
Point Cloud 27:50
3D Textured Mesh 4:15
Total 65:00

Using the data from Pix4D, a couple of maps were created using ArcGIS software. In Figure 15 below, the orthomosaic was used to create a map. One thing to note is the edges of the orthomosaic: because of the limited overlap around the edges, as well as the block adjustment, the edges have abrupt turns and incomplete photos. This is not a concern, just an interesting aspect of the orthorectification process. Another notable feature of this map is the trees. Pix4D can process some trees just fine but struggles with others. Along the road are some coniferous trees, which Pix4D was able to process perfectly well. Behind the house is a group of deciduous trees which, upon closer inspection, show some lines where they were obviously pieced together, as well as some blurred areas where either the images were not very clear or the images were blended in processing. A couple of deciduous trees turned out fine, such as the red-orange tree near the road in the center of the orthomosaic. Most likely because of its isolation, this tree was rendered precisely, and each individual leaf on the ground was visible and distinguishable.

Figure 15: Pix4D orthomosaic
Moving on to the digital surface model and 3D model, I notice some interesting characteristics right away. First, there seems to be an incongruity in the digital surface model near the area to the north of the house. A hillshade effect was added to the DSM to make areas of relief easier to identify (a sketch of how a hillshade is computed from a DSM follows Figure 16). In the center of the DSM lies an orange circle, which appears to indicate that the land the house sits on rises, I assume so that water runs away from the house. Near the top and left of that orange circle, though, is a sharp change in the DSM, which I believe to be an error in processing or an issue with the images, because I see no reason the land would abruptly change in elevation at that point. This could be related to trees in that area, as trees have been known to cause some 3D and orthomosaicking issues.

It is interesting to see other topographic features of the area, such as the trees along the road, each represented by a bright red dot, which means the surface the camera sees is higher than the surrounding terrain. Each tree along the road is very distinguishable. One thing I would like to point out is the lack of elevation change where that red-orange tree should be. Following the curve of the road between the lines of trees, there should be another red dot near the center of the image, but there is not. This is also evident in an animation created using the Pix4D software, which I will point out momentarily, but I believe this is because of the issues Pix4D can have with trees.

Figure 16: Pix4D DSM and 3D Model
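Since a hillshade was used to bring out relief in the DSM, below is a minimal sketch of how a hillshade can be computed from a DSM grid with NumPy. The synthetic mound, cell size, and sun angles are placeholders; this shows the general technique, not the exact tool that was used.

import numpy as np

def hillshade(dsm, cellsize, azimuth_deg=315.0, altitude_deg=45.0):
    """Simple hillshade of a DSM array (axis 0 = north-south, axis 1 = east-west)."""
    az = np.radians(360.0 - azimuth_deg + 90.0)   # convert to math convention
    alt = np.radians(altitude_deg)
    dz_dy, dz_dx = np.gradient(dsm, cellsize)
    slope = np.arctan(np.hypot(dz_dx, dz_dy))
    aspect = np.arctan2(-dz_dx, dz_dy)
    shaded = (np.sin(alt) * np.cos(slope)
              + np.cos(alt) * np.sin(slope) * np.cos(az - aspect))
    return np.clip(shaded, 0.0, 1.0)

# Tiny synthetic DSM (a 5 m mound) just to show the call; a real DSM would be
# read from the exported raster with a library such as rasterio.
y, x = np.mgrid[0:50, 0:50]
dsm = 250.0 + 5.0 * np.exp(-((x - 25) ** 2 + (y - 25) ** 2) / 100.0)
shade = hillshade(dsm, cellsize=0.05)
print(shade.shape, float(shade.min()), float(shade.max()))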
Below I have included an animation that was created using Pix4D. Figure 17 shows the trail that the camera followed in creating this animation, but the short clip below highlights another feature of this software.
Figure 17: Animation Trail


As I mentioned before, though it may be hard to detect on the first play-through, the area on the ground where that red-orange tree should be is completely flat. Pix4D seems to have completely edited out or flattened that surface, I assume because it could not figure out what to do there, or because the data was not clear enough to show an elevation or surface change at that location. Either way, this is a frequent issue with orthorectification.

Part 4 (Conclusions):

Pix4D is a valuable tool for processing UAS data because of its capabilities and the products it generates. With a few clicks and some cash, any user can take data captured from an unmanned system and, with the right tools, create accurate maps of a feature, digital surface models, 3D point clouds, or even animations to showcase at their next meeting. This powerful software does have its drawbacks, some mentioned earlier with the issues around trees, and others being the time and processing power required. Creating and processing this data is no small task. As noted above, this dataset alone, with only 67 images to process, took a full 65 minutes. A project or dataset with several hundred images will take significantly more time. Not only that, it takes a powerful machine to achieve these processing times. The machine this data was processed on contained an Intel i7-6700T and 32 gigabytes of RAM. This processing is very CPU-heavy, and an underpowered CPU could cause the program to crash or take an indeterminable amount of time, which is definitely something to consider for UAS data processing.

Wednesday, February 6, 2019

Construction of a point cloud data set and true orthomosaic using ArcPro software

Part 1 (Introduction):

What is photogrammetry?
Photogrammetry is, simply put, the accurate measurement of objects and surfaces from photographs and other digital images.

What types of distortion does remotely sensed imagery have in its raw form?
Remotely sensed imagery contains geometric distortions such as perspective distortion, field-of-view distortion, lens distortion, earth curvature, radial displacement, and scanning distortion. These are affected by the sensor or lens being used, the focal length of that lens, and the method by which the sensor captures images. Other distortions are caused by the angle at which objects are photographed. For example, from above, a tall building might appear to lean at an angle, whereas in a truly orthogonal view it should appear only as the outline of the building seen from directly overhead.

What is orthorectification? What does it accomplish?
Orthorectification is the process of correcting remotely sensed images for distortion. This allows for the creation of an accurate orthoimage that can be used to produce a map.


What is the Ortho Mapping Suite in ArcPro? How does it relate to UAS imagery?
The Ortho Mapping Suite in ArcPro is a set of tools that allows the user to create orthorectified images and products from UAS or satellite imagery. UAS imagery is often taken looking straight down and used for creating orthomosaics or digital surface models, and the Ortho Mapping Suite is tailored specifically to these types of applications.

What is Bundle Block Adjustment?
Bundle block adjustment uses ground control and tie point information to adjust the exterior orientation of each image so that adjacent images align properly. Once this is done, all the images are then adjusted to fit the ground. This produces the "best statistical fit" between all of the images.
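To make "best statistical fit" a little more concrete, the quantity being minimized is essentially the reprojection residual of each tie point in each image: the difference between where the point was measured in the photo and where the current camera estimate predicts it should appear. Below is a stripped-down sketch with a pinhole camera model and made-up numbers; a real bundle adjustment also handles camera rotations, lens distortion, and thousands of points at once.

import numpy as np

def reproject(point_3d, camera_pos, R, focal_px, principal_point):
    """Project a 3D ground point into an image with a simple pinhole model.
    R is the world-to-camera rotation matrix; focal_px is the focal length in pixels."""
    p_cam = R @ (point_3d - camera_pos)              # point in camera coordinates
    u = focal_px * p_cam[0] / p_cam[2] + principal_point[0]
    v = focal_px * p_cam[1] / p_cam[2] + principal_point[1]
    return np.array([u, v])

# Residual = observed pixel location of the tie point minus its reprojection.
# The adjustment tweaks camera positions/orientations (and point coordinates)
# to minimize the sum of squared residuals over all images.
observed_px = np.array([2515.0, 729.1])              # hypothetical measurement
predicted_px = reproject(
    point_3d=np.array([512340.0, 4955000.0, 250.0]),
    camera_pos=np.array([512330.0, 4954985.0, 320.0]),
    R=np.diag([1.0, -1.0, -1.0]),                    # nadir-looking camera, axes chosen for illustration
    focal_px=3600.0,
    principal_point=(2000.0, 1500.0),
)
print("reprojection residual (pixels):", observed_px - predicted_px)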

What is the advantage of using this method? Is it perfect?
The advantage of using this method is the simplicity of combining all of the images into one product using the best statistical fit. The method is not perfect, though, and the process provides the user with a table of residual errors once it completes. The user can delete the points with high residual error, or manually move a point that is in error, and the program will redo the adjustment until the overall error and residual errors are within an acceptable range (a small sketch of that filtering step follows).
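A tiny sketch of that clean-up loop's core idea follows; the residual values and the 0.15 m tolerance are made up for illustration.

# Hypothetical per-point residual errors (metres) from an adjustment report.
residuals = {"pt_101": 0.04, "pt_102": 0.31, "pt_103": 0.06, "pt_104": 0.02}

THRESHOLD_M = 0.15   # tolerance chosen for illustration only

# Keep only points whose residual error is within tolerance; the remaining
# points would be deleted (or re-measured) before rerunning the adjustment.
kept = {name: err for name, err in residuals.items() if err <= THRESHOLD_M}
flagged = sorted(set(residuals) - set(kept))
print("points kept for the next adjustment:", sorted(kept))
print("points to delete or re-measure:", flagged)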


Part 2 (Methods):

What key characteristics should go into folder and file naming conventions?
Some key characteristics that should go into folder and file names include the date the mission was flown, the sensor on board the aircraft, the altitude flown, ideally the mission location, and the type of file. For example, if I flew a mapping mission over my parents' property on May 20th, 2018 using a DJI Inspire with the Zenmuse X5, I might name the file 20180520_zenmusex5_50m_yoderproperty_ortho.tiff, or something similar with a different location name.
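That convention is easy to automate. The small helper below (my own hypothetical example, not part of any required workflow) builds the name from the mission details:

from datetime import date

def mission_filename(flight_date, sensor, altitude_m, location, product, ext):
    """Build a file name of the form YYYYMMDD_sensor_altitude_location_product.ext."""
    return (f"{flight_date:%Y%m%d}_{sensor}_{altitude_m}m_"
            f"{location}_{product}.{ext}")

print(mission_filename(date(2018, 5, 20), "zenmusex5", 50,
                       "yoderproperty", "ortho", "tiff"))
# -> 20180520_zenmusex5_50m_yoderproperty_ortho.tiff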

Why is file management so key in working with UAS data?
File management is so important when working with UAS data because of the many file types and extensions involved, and because data is often shared with other people or organizations. Proper file management makes it easier for other parties to know what they are looking at.


What key forms of metadata should be associated with every UAS mission? Create a table that provides the key metadata for the data you are working with.
Metadata such as the date flown, UAS platform, sensor, altitude, ground control GPS, coordinate systems, and weather conditions should be included with every UAS mission. The table below lists these values for this dataset, and a small sketch after the table shows one way to store them alongside the imagery.

Date Flown:                             Nov 8th, 2018
UAS Platform:                         Yuneec H520
Sensor:                                    Yuneec E90
Altitude Flown:                        70m
Ground Control GPS:              Propeller
Ground Control Coordinates:  NAD83(2011) UTM Zone 16
UAS Coordinates:                    WGS 84 DD
Pilot:                                         Joseph Hupy
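One simple way to keep this metadata attached to the imagery (an assumption on my part, not a required workflow) is to write it as a small JSON sidecar file next to the data:

import json

# Field names are my own choices; the values come from the table above.
metadata = {
    "date_flown": "2018-11-08",
    "uas_platform": "Yuneec H520",
    "sensor": "Yuneec E90",
    "altitude_m": 70,
    "ground_control_gps": "Propeller",
    "ground_control_crs": "NAD83(2011) UTM Zone 16",
    "uas_crs": "WGS 84 DD",
    "pilot": "Joseph Hupy",
}

with open("20181108_yuneece90_70m_mission_metadata.json", "w") as f:
    json.dump(metadata, f, indent=2)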

Part 3 (Results):

Describe your maps in detail. Discuss their quality, and where you see issues in the maps. Are there areas on the map where the data quality is poor or missing?
Overall, I would say that the map is not bad. It is certainly not the best or most accurate map that could be produced, but it is not bad. Some quality issues I notice are in the wooded section of the map, where there is quite a bit of discontinuity between areas of trees. Another quality issue is around the edges where ArcGIS Pro rectified the images: the edges are abrupt and not entirely straight. Although we are not using that part of the map, it is still part of the map. The subject area over the center house, though, is accurate and turned out quite well. In ArcGIS Pro, it is possible to zoom all the way in and see individual leaves on the ground.
Figure 14: ArcPro Map
How much time did it take to process the data? Create a table that shows the time it took.
The initial compilation only took 2 minutes and 8 seconds; however, the block adjustment took 1 hour 4 minutes and 44 seconds to complete.

Initial tie points                    2:08
Block adjustment            1:04:44

Part 4 (Conclusions):

Summarize the Orthomosaic Tool.
The orthomosaic tool lets you complete the process of creating a photogrammetrically correct, orthorectified image. It performs color balancing and seamline generation based on the settings the user chooses from the menu, and for the final orthomosaic output the user is able to choose the file format as well as the pixel size and other compression options. In the end, ArcGIS Pro produces a photogrammetrically correct, orthorectified image.

Summarize the process in terms of time invested and quality of output.
In summary, for a few hours of time between flying the mission and working through the process in the ArcGIS Pro software, a user can create an orthomosaic and use it to make maps and do other data processing. Depending on the resolution at which the images were captured, the overall quality of the output could be extremely high or mediocre, but altogether, ArcGIS Pro will produce results comparable to the quality of the data it was given.

Think of what was discussed with this orthomosaic in terms of accuracy. How might a higher resolution DSM (From LiDAR) make this more accurate? Why might this approach not work in a dynamic environment such as a mine?
A higher-resolution digital surface model (DSM) could make this more accurate because of the elevation rectification process. One of the distortions mentioned above is relief displacement, which is distortion caused by variable elevation above or below the datum. This causes a slight shift in a feature's position in the image, which is corrected based on the DSM, so a higher-quality DSM would rectify the images more accurately. This approach might not work in a dynamic environment such as this one due to some of the terrain features, like the large quantity of trees that would affect the LiDAR dataset.
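As a worked example of the relief displacement being corrected, the standard relationship is d = r * h / H, where r is the radial distance of the feature from the photo's nadir point, h is the feature's height above the datum, and H is the flying height. The numbers below are illustrative, not from this dataset.

def relief_displacement(radial_dist, object_height, flying_height):
    """d = r * h / H : image displacement caused by terrain or object height."""
    return radial_dist * object_height / flying_height

# A 12 m tree imaged 40 mm from the photo's nadir point, flown at 70 m AGL:
d_mm = relief_displacement(radial_dist=40.0, object_height=12.0, flying_height=70.0)
print(f"relief displacement: {d_mm:.1f} mm on the image")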