Connor's UAS Geospatial Data Portfolio
Tuesday, April 23, 2019
Thursday, April 11, 2019
Field Notes M600
Metadata
Altitude: 90m
Platform: DJI Matrice 600
Sensor: Zenmuse X5
GCP: AeroPoint Markers
Date: 4/2/19
Pilot: Jaspar Saadi-Klein
Weather
Clear skies
First flight: 11:30am EDT
Second flight: 12:00pm EDT
54 degrees F
The first flight was flown in a crosshatch pattern with a camera angle of 60 degrees.
The second flight was flown at nadir over a polygon that encapsulated the area.
For the GCPs, each marker had an identifier, and the last three digits of each identifier were unique, so only those three digits were recorded for efficiency.
1. 217 was located near the northwest corner of the target area.
2. 943 was near the shed on the west side.
3. 491 was near the four wire mesh boxes on the west side.
4. 142 was near the southwest corner clearing.
5. 500 was near the southeast corner in the same clearing.
6. 360 was on the east side near the southeast corner, by the tall dead weeds.
7. 205 was in the middle of the east side, by the dead grass.
8. 420 was in the center of the target area, in the clearing.
9. 034 was also near the middle of the east side, this time on the north side of the tall dead grass.
10. 747 was near the northeast corner, by the truck in the dead grass.
Thursday, March 28, 2019
Field Notes and Checklists H520
Yuneec H520 Flight 3/26/2019 Field Notes, Martell Forest
Metadata
Altitude: 80m
Platform: Yuneec H520
Sensor: E90
GCP: AeroPoint markers
For the preflight, we split up into two groups. The first group stayed with the platform and went through the calibrations and preflight steps to prepare the platform for flight. The second group departed to the flying area to lay out the ground control points. The weather for the flight was clear skies with little wind.
We started by powering up the transmitter and programming the intended flight path into the software.
We set the flight properties to 80 meters in altitude and camera straight down.
After programming the flight path, we powered up the platform and switched the transmitter's connection to the platform itself.
After connecting to the platform, we went through sensor calibration, calibrating the compass and then the accelerometer.
Checklist
PRIOR TO TRAVEL
⬥ Location confirmation
▢ Confirm location with client(s)
▢ Confirm date with client(s)
▢ Confirm availability of observer
▢ Minimum takeoff/landing area confirmed (50 ft radius)
▢ Obstacles are noted and mapped
⬥ Permission to fly
▢ Property owners' approval
▢ FAA Part 107 approval (if flying within 5 miles of an airport)
▢ NOTAM filed (if flying within 5 miles of an airport)
▢ Participants' approval (if flying over people)
▢ Part 107 waiver filed and approved (if necessary)
▢ Check current NOTAMs for flight area
▢ Area clear of aircraft
⬥ Weather
▢ Weather report printed
▢ Temps: 32°-104°F
▢ Wind: 0-22 mph
▢ Visibility: >3 sm
▢ Ceiling: >500 ft
⬥ Aircraft
▢ Software updated
▢ Repairs made from previous flight
▢ No damage to aircraft
▢ SD card formatted
▢ Spare propellers packed
▢ Emergency repair kit packed
⬥ Controller
▢ Software updated
▢ Fully charged
⬥ Batteries
▢ Fully charged
PREFLIGHT
⬥ Set up
▢ Verify area clear of obstacles
▢ Measure area EMI
▢ Place takeoff/landing pad
▢ Observer present
⬥ Power up
▢ Remove gimbal cover
▢ Place aircraft on launchpad
▢ Power up controller
▢ Power up aircraft
▢ Confirm controller-aircraft connection
▢ GPS: >8 satellites
▢ Calibrate IMU
▢ Calibrate compass
▢ Confirm video feed
▢ Confirm gimbal movement
⬥ Failsafes
▢ Return to home: battery level <20%
▢ Return to home: lost link
▢ Land in place: GPS signal lost
IN FLIGHT
⬥ Takeoff
▢ Select flight mode
▢ Select manual/automatic takeoff
▢ Takeoff
⬥ In flight
▢ Climb to mission altitude
▢ Proceed to mission destination
▢ Maintain visual line of sight
▢ Monitor wind speeds/heading
▢ Monitor aircraft GPS location
▢ Monitor battery percentage
▢ Complete mission
⬥ Landing
▢ Verify clear path home
▢ Select manual/automatic landing
▢ Land safely
POST FLIGHT
⬥ Power down
▢ Power down aircraft
▢ Power down controller
▢ Remove props
▢ Secure gimbal
⬥ Data collection
▢ Download photo/video data
▢ Record flight time in logbook
▢ Record any damage
Monday, February 18, 2019
Processing Pix4D Imagery with GCPs
Part 1 (Introduction):
Ground control points (GCPs) are important in the processing of UAS data. GCPs allow for better orthorectification in that they help align the data more accurately on the coordinate plane. Ground control points are points on the ground with a known geographic location. A surveyed point can be used either as a ground control point or as a checkpoint: with a ground control point, the rectification software uses the point to adjust the imagery, whereas with a checkpoint, the software just reports an error measurement at that point. This allows the user to see how accurately the data is being displayed or processed.
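As a rough illustration of how a checkpoint reports error, the 3D distance between a surveyed checkpoint and where the model reconstructed it could be computed like this (the function name and coordinates are hypothetical, not Pix4D's API):

```python
import math

def checkpoint_error(surveyed, reconstructed):
    """3D distance between a surveyed checkpoint and where the model
    placed it. A checkpoint is only measured, never used to adjust
    the solution, so this distance is a pure accuracy check."""
    return math.dist(surveyed, reconstructed)

# Hypothetical projected coordinates (easting, northing, elevation in meters)
err = checkpoint_error((500123.10, 4470100.00, 182.40),
                       (500123.14, 4470100.03, 182.31))
```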
Part 2 (Methods):
Similar to the previous lab where we processed a set of data without the use of GCPs, this lab incorporated GCPs for processing the data. We first did initial processing of the data without the GCPs. Once the initial processing was complete, we added the GCPs.
To add the GCPs, we first told Pix4D which file contained them. Note that the points had already been cleaned up, so the file contained only the points to be used. Once this data had been imported, we told Pix4D which field contained the X value, which the Y value, and which the elevation value. Figure 18 below shows the GCP manager in Pix4D, and as you can see, it has detected each point and displayed its coordinate data.
After the GCPs were added, we had to manually adjust the data to match them, as the position data from the UAS was not entirely accurate in the vertical direction. Figure 19 shows the sidebar where each GCP was adjusted and Pix4D was told where on each image the control point was actually located.
Once these steps were complete, we exported the data and created a few maps with the data.
Attached below is the quality report for this data. If you scroll down to the quality check section on the first page, the georeferencing section shows a mean RMS error of 0.045 m, meaning that in 3D space, the average error of the points was 4.5 centimeters. At the bottom of the fifth page, you can see the error of each individual point. The highest error displayed is point 102 on the Z-axis, which shows an error of 0.142 m, or 14.2 centimeters. This is roughly 6 inches of error in the vertical direction, which does not seem like very much for this data set and this large an imaging area.
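The per-point errors in the report can be summarized into a single RMS value. A minimal sketch of that calculation (the sample Z errors below are made up for illustration, not taken from the report):

```python
import math

def rms(errors):
    """Root-mean-square of a list of per-point errors (meters)."""
    return math.sqrt(sum(e * e for e in errors) / len(errors))

# Hypothetical per-point Z-axis errors, in meters
z_errors = [0.02, -0.05, 0.142, -0.01]
z_rms = rms(z_errors)
```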
Figure 18: GCP Manager
Once this was complete, we told Pix4D to reoptimize, and it adjusted the data to incorporate the GCPs.
Figure 20 below shows the difference between where the GCP was marked and where it was actually adjusted to. The difference between the two points is the same kind of error mentioned for checkpoints above, though here it is GCP error.
Figure 19: GCP Adjustment
Figure 20: GCP Difference
Part 3 (Discussion):
The first map below is the orthomosaic with the GCPs overlaid on top. One thing I noticed, which may be hard to tell at this display scale, is how close each GCP is to its respective chevron on the map. This tells me that Pix4D was able to adjust the data to the GCPs fairly accurately, which I will examine further using the report.
Figure 21: Ortho Map with GCPs
Figure 22: DSM Map using GCPs
Part 4 (Conclusion):
It is clear that GCPs can greatly improve the accuracy of the data, and one place we see this is with the Z-axis and vertical elevation. Without ground control, the data would have been processed as if it were, on average, 26 meters lower than it actually was, due to the elevation data from the UAS. With ground control, that data was corrected to the actual elevation, giving much more accurate results. Not only that, but on average the data was also 6.94 meters off in the horizontal plane (using the Pythagorean theorem, the X-direction and Y-direction errors were combined into an overall horizontal error), so we can see the overall benefit of GCPs in improving the accuracy of our data.
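The horizontal-error combination mentioned above is just the Pythagorean theorem applied to the two axis errors. A short sketch (the per-axis values here are assumed for illustration, chosen only so the result lands near the reported 6.94 m):

```python
import math

# Assumed per-axis mean errors in meters (illustrative values, not the
# actual numbers from the Pix4D report):
error_x = 5.0
error_y = 4.81

# Overall horizontal error via the Pythagorean theorem
horizontal_error = math.hypot(error_x, error_y)  # ≈ 6.94 m
```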
Tuesday, February 12, 2019
Construction of a point cloud data set, true orthomosaic, and digital surface model using Pix4D software
Part 1 (Introduction):
What is Pix4D?
Pix4D, put simply, is photogrammetry software that can find tie points, or common points, between many images.
What products does it generate?
Pix4D can create orthomosaics, digital surface models, and 3-dimensional models such as point clouds or meshes.
Part 2 (Methods):
What is the overlap needed for Pix4D to process imagery?
Pix4D recommends at least 75% overlap in the flight direction and at least 60% overlap from side to side.
What if the user is flying over sand/snow, or uniform fields?
Pix4D recommends using at least 85% overlap in flight direction and at least 70% side overlap due to uniformity.
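A quick way to see what these overlap percentages mean on the ground: the distance between consecutive photo centers is the image footprint length times one minus the overlap. A minimal sketch, assuming a hypothetical 100 m along-track footprint:

```python
def photo_spacing(footprint_m, overlap):
    """Distance between consecutive photo centers along the flight line.

    footprint_m: ground footprint length of one image along-track (meters)
    overlap: forward overlap as a fraction, e.g. 0.75 for 75%
    """
    return footprint_m * (1.0 - overlap)

# With an assumed 100 m footprint:
print(round(photo_spacing(100, 0.75), 2))  # 25.0 m between exposures
print(round(photo_spacing(100, 0.85), 2))  # 15.0 m over uniform terrain
```

Higher overlap means tighter exposure spacing, which is why flights over uniform sand, snow, or fields take more photos per line.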
What is Rapid Check?
Rapid Check allows the software to create tie points quickly for fast results, as opposed to the higher quality results you would get through normal processing, which takes more time.
Can Pix4D process multiple flights? What does the pilot need to maintain if so?
Pix4D can process multiple flights, although the pilot needs to maintain enough overlap between the two flights to ensure there will be enough tie points.
Can Pix4D process oblique images? What type of data do you need if so?
For oblique images, you really should have ground control points or manual tie points between each set of images, as well as enough overlap between the sets of images.
What is the difference between a global and linear rolling shutter?
A global shutter takes the image instantly, whereas a linear rolling shutter does a scan to capture the image.
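One way to build intuition for rolling-shutter distortion: while the sensor scans a frame line by line, the aircraft keeps moving, so the scene shifts under the image by roughly ground speed times readout time. A rough sketch with assumed values:

```python
def rolling_shutter_skew_m(ground_speed_mps, readout_time_s):
    """Approximate ground distance the platform travels while a rolling
    shutter reads out one frame -- a rough measure of the skew that a
    global shutter (which exposes the whole frame at once) avoids."""
    return ground_speed_mps * readout_time_s

# Assumed: 10 m/s ground speed, 30 ms sensor readout
skew = rolling_shutter_skew_m(10.0, 0.03)  # 0.3 m of travel during readout
```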
Are GCPs necessary for Pix4D? When are they highly recommended?
GCPs are not necessary for Pix4D. They are highly recommended when combining nadir images with aerial oblique and terrestrial images. GCPs are also highly recommended when capturing a tunnel system where multiple tracks of images are not possible.
What is the quality report?
The quality report gives information as to what the software did. For example, it would show you how many images in the set were used, the average number of tie points per image, previews of the results, details about the block adjustment, and various other processing details. Attached below is the quality report associated with this dataset for use as an example.
Note: viewing this document will open the pdf, located on Google Drive, in a separate tab.
Quality Report.pdf
Methods:
In this project, we used Pix4D to process a set of images taken over someone's residence. Ground control points were not used during this project. We started out by creating a project within Pix4D, and then uploading our images to the software. After confirming camera settings, Pix4D began the task of processing our data. The total time for the initial processing was 31 minutes and 11 seconds. After initial processing, Pix4D used 66 out of 67 images.
After the initial processing, we confirmed the data was looking like it should, and began the orthomosaic generation as well as the digital surface model and the 3D models. Upon completion, we had an orthomosaic, digital surface model, as well as 3D files to view and work with.
Part 3 (Results and Discussion):
Using the data from Pix4D, a couple of maps were created in ArcGIS. In Figure 15 below, the orthomosaic was used to create a map. One thing to note is the edges of the orthomosaic: because of the limited overlap around the edges, as well as the block adjustment, the boundary has abrupt turns and incomplete photos. This is not a concern, just an interesting aspect of the orthorectification process. Another notable feature of this map is the trees. Pix4D can process some trees just fine but struggles with others. Along the road are some coniferous trees, which Pix4D processed perfectly well. Behind the house is a stand of deciduous trees which, upon closer inspection, show seams where they were pieced together, as well as some blurred areas where either the images were not very clear or they were blended in processing. A couple of deciduous trees turned out fine, such as the red-orange tree near the road in the center of the orthomosaic. Most likely due to its isolation, this tree was rendered precisely, and each individual leaf on the ground is visible and distinguishable.
Moving on to the digital surface model and 3D model, I notice some interesting characteristics right away. First, there seems to be an incongruity in the digital surface model near the area to the north of the house. A hillshade effect was added to the DSM to make areas of relief easier to identify. In the center of the DSM lies an orange circle, which appears to mean that the land the house sits on rises, I assume so that water runs away from the house. Near the top and left of that orange circle, though, is a sharp change in the DSM, which I believe to be an error in processing or an issue with the images, because I see no reason the land would abruptly change elevation at that point. This could be related to trees in that area, as trees are known to cause some 3D and orthomosaicking issues.
Below is a table that shows processing times for the various segments. Times are given as minutes:seconds. Total processing took approximately 65 minutes. This data is also located in the quality report attached above.
Step | Time (min:sec)
Orthomosaic | 15:06
Digital Surface Model | 0:43
Point Cloud | 27:50
3D Textured Mesh | 4:15
Total | 65:00
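When juggling mm:ss (or h:mm:ss) durations like those in the table, a small helper makes the bookkeeping easy to check. This is just a sketch for working with the logged times, not part of Pix4D:

```python
def to_seconds(t):
    """Parse a 'mm:ss' or 'h:mm:ss' duration string into total seconds."""
    seconds = 0
    for part in t.split(":"):
        seconds = seconds * 60 + int(part)
    return seconds

# The post-initial-processing steps from the table above
steps = {"Orthomosaic": "15:06", "Digital Surface Model": "0:43",
         "Point Cloud": "27:50", "3D Textured Mesh": "4:15"}
total_seconds = sum(to_seconds(t) for t in steps.values())
```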
Figure 15: Pix4D orthomosaic
It is interesting to see other topographic features of the area, such as the trees along the road, each represented by a bright red dot, which means the surface the camera sees is higher than the surrounding terrain. Each tree along the road is very distinguishable. One thing I would like to point out is the lack of elevation change where that red-orange tree should be. Following the curve of the road between the lines of trees, there should be another red dot near the center of the image, but there is not. This is also evidenced in an animation that was created using the Pix4D software, which I will point out momentarily, but I believe this is because of the issues Pix4D can have with trees.
Below I have included an animation that was created using Pix4D. Figure 17 shows the trail that the camera followed in creating this animation, but the short clip below highlights another feature of this software.
Figure 16: Pix4D DSM and 3D Model
Figure 17: Animation Trail
As I mentioned before, though it may be hard to detect on the first play-through, the area on the ground where that red-orange tree should be is completely flat. Pix4D seems to have edited out or flattened that surface, I assume because it could not figure out what to do there, or because the data was not clear enough to indicate an elevation or surface change at that location. Either way, this is a frequent issue with orthorectification.
Part 4 (Conclusions):
Pix4D is important for processing UAS data due to its capabilities and benefits. With a few clicks and some cash, any user can take data captured by an unmanned system and, with the right tools, create accurate maps of a feature, digital surface models, 3D point clouds, or even animations to showcase at their next meeting. This powerful software does have its drawbacks, some mentioned earlier with the trees, others being the time and processing power required. Creating and processing this data is no small task. As noted above, this data set alone, with only 67 images, took a full 65 minutes to process; a project with several hundred images will take significantly more time. It also takes a powerful machine to achieve these processing times: the machine this data was processed on contained an Intel i7-6700T and 32 gigabytes of RAM. The processing is very CPU-heavy, and an underpowered CPU could cause the program to crash or take an indeterminate amount of time, which is definitely something to consider for UAS data processing.
Wednesday, February 6, 2019
Construction of a point cloud data set and true orthomosaic using ArcPro software
Part 1 (Introduction):
What is photogrammetry?
Photogrammetry is simply accurately measuring objects and surfaces from pictures and other digital images.
What types of distortion does remotely sensed imagery have in its raw form?
Remotely sensed imagery contains geometric distortions such as perspective distortion, field-of-view distortion, lens distortion, earth curvature, radial displacement, and scanning distortion. These are affected by the sensor or lens being used, the focal length of that lens, and the method by which the sensor captures images. Other distortions are caused by the angle at which objects are photographed. For example, from above, a tall building might appear to lean at an angle, when it should really appear as just the outline of the building seen from directly overhead.
What is orthorectification? What does it accomplish?
Orthorectification is the process by which remotely sensed images are corrected for distortion. This allows for the creation of an accurate orthoimage that can be used to produce a map.
What is the Ortho Mapping Suite in ArcPro? How does it relate to UAS imagery?
The Ortho Mapping Suite in ArcPro is a set of tools that allows the user to create orthorectified images and products from UAS or satellite imagery. UAS imagery is often taken looking straight down and used for creating orthomosaics or digital surface models, and the Ortho Mapping Suite is tailored specifically to these types of applications.
What is Bundle Block Adjustment?
Bundle block adjustment uses ground control and tie-point information to adjust the exterior orientation of each image so adjacent images align properly. Once this is done, all the images are adjusted together to fit the ground. This produces the "best statistical fit" among all of the images.
What is the advantage of using this method? Is it perfect?
The advantage of this method is the simplicity of combining all of the images into one product using the best statistical fit. The method is not perfect, so the process provides the user with a table of residual errors once it completes. The user can delete points with high residual error or manually move an erroneous point, and the program will redo the adjustment until the overall and residual errors are within an acceptable range.
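The delete-high-residual-points step can be sketched as a simple filter over the residual table (point IDs, values, and the threshold below are hypothetical; ArcGIS Pro does this through its own residual table and re-runs the adjustment itself):

```python
def filter_high_residuals(residuals, threshold):
    """Drop tie points whose residual error exceeds the threshold.

    residuals: dict mapping point id -> residual error
    Returns only the points that survive the cut, ready for readjustment.
    """
    return {pid: r for pid, r in residuals.items() if r <= threshold}

# Hypothetical residual table after one adjustment pass:
residuals = {"pt_101": 0.4, "pt_102": 2.7, "pt_103": 0.9}
kept = filter_high_residuals(residuals, threshold=1.0)  # pt_102 removed
```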
Part 2 (Methods):
What key characteristics should go into folder and file naming conventions?
Some key characteristics that should go into folder and file naming include the date the mission was flown, the sensor on board the aircraft, the altitude flown, ideally the mission location, and the type of file it is. For example, if I flew a mapping mission over my parents' property on May 20th, 2018 using a DJI Inspire with the Zenmuse X5, I might name the file 20180520_zenmusex5_50m_yoderproperty_ortho.tiff or something similar with a varying location name.
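That naming convention is easy to automate. A minimal sketch (the function name and field order are my own, chosen to match the example above):

```python
from datetime import date

def mission_filename(flown, sensor, altitude_m, location, product, ext):
    """Build a file name following the
    date_sensor_altitude_location_product convention described above."""
    return (f"{flown:%Y%m%d}_{sensor}_{altitude_m}m_"
            f"{location}_{product}.{ext}")

name = mission_filename(date(2018, 5, 20), "zenmusex5", 50,
                        "yoderproperty", "ortho", "tiff")
# -> "20180520_zenmusex5_50m_yoderproperty_ortho.tiff"
```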
Why is file management so key in working with UAS data?
File management is key when working with UAS data because of the many file types and extensions involved, and especially when sharing data with other people or organizations. Proper file management makes it easier for other parties to know what they are looking at.
What key forms of metadata should be associated with every UAS mission? Create a table that provides the key metadata for the data you are working with.
Metadata such as date flown, UAS platform, sensor, altitude captured, ground control GPS, coordinate system, and weather conditions should be included with every UAS mission.
Date Flown: Nov 8th, 2018
UAS Platform: Yuneec H520
Sensor: Yuneec E90
Altitude Flown: 70m
Ground Control GPS: Propeller
Ground Control Coordinates: NAD83(2011) UTM Zone 16
UAS Coordinates: WGS 84 DD
Pilot: Joseph Hupy
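The metadata above could be kept in a plain dict and written out as a JSON sidecar file so it travels with the data. A sketch; the field names are my own choice, not a standard:

```python
import json

# Mission metadata from the table above, ready to dump alongside the data
metadata = {
    "date_flown": "2018-11-08",
    "platform": "Yuneec H520",
    "sensor": "Yuneec E90",
    "altitude_m": 70,
    "ground_control_gps": "Propeller",
    "ground_control_crs": "NAD83(2011) UTM Zone 16",
    "uas_crs": "WGS 84 DD",
    "pilot": "Joseph Hupy",
}

sidecar = json.dumps(metadata, indent=2)  # write this to mission.json
```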
Part 3 (Results):
Describe your maps in detail. Discuss their quality, and where you see issues in the maps. Are there areas on the map where the data quality is poor or missing?
Overall, I would say the map is not bad. It is certainly not the most accurate map that could be produced, but it is not bad. Some quality issues I notice are in the wooded section of the map, where there is quite a bit of discontinuity between areas of trees. Another quality issue is around the edges where ArcGIS Pro rectified the images: the boundary is abrupt and not entirely straight. Although we are not using that part of the map, it is still part of the map. The subject area around the center house, though, is accurate and turned out quite well; in ArcGIS Pro, you can zoom all the way in and see individual leaves on the ground.
How much time did it take to process the data? Create a table that shows the time it took.
The initial compilation only took 2 minutes and 8 seconds; however, the block adjustment took 1 hour 4 minutes and 44 seconds to complete.
Step | Time
Initial tie points | 2:08
Block adjustment | 1:04:44
Part 4 (Conclusions):
Summarize the Orthomosaic Tool.
The orthomosaic tool allows you to complete the process of creating the photogrammetrically correct, orthorectified image. The orthomosaic tool will do color balancing and seamline generation based on the settings that you as the user choose from the menu. For the final orthomosaic output, the user is able to choose file format as well as pixel size and other compression options. In the end, ArcGIS Pro will create a photogrammetrically correct orthorectified image.
Summarize the process in terms of time invested and quality of output.
In summary, for a few hours of time between flying the mission and navigating the process through the ArcGIS Pro software, a user can create and use an orthomosaic to create maps and do other data processing. Depending on the resolution at which the images were captured, the overall quality of output could be extremely high, or mediocre, but altogether, ArcGIS Pro will produce comparable results to the data it was originally given.
Think of what was discussed with this orthomosaic in terms of accuracy. How might a higher resolution DSM (From LiDAR) make this more accurate? Why might this approach not work in a dynamic environment such as a mine?
A higher-resolution digital surface model (DSM), such as one from LiDAR, could make this more accurate because of the elevation rectification process. One of the distortions mentioned is relief displacement, which is distortion caused by variable elevation above or below the datum. This causes a slight shift in an image feature's position, which is corrected based on the DSM, so a higher-quality DSM would rectify the images more accurately. This approach might not work in a dynamic environment such as a mine because the surface changes constantly, so a previously collected LiDAR DSM would quickly stop matching the terrain; terrain features like the large quantity of trees here would also affect the LiDAR dataset.
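The relief displacement mentioned above follows the standard vertical-photo relationship d = r·h/H: displacement equals the radial distance from the photo center times feature height, divided by flying height. A minimal sketch with assumed values:

```python
def relief_displacement(radial_dist_mm, height_m, flying_height_m):
    """Approximate relief displacement d = r * h / H on a vertical photo.

    radial_dist_mm: radial distance from photo center to the feature (mm)
    height_m: feature height above the datum (m)
    flying_height_m: flying height above the datum (m)
    """
    return radial_dist_mm * height_m / flying_height_m

# Assumed values: a 20 m tree imaged 40 mm from the photo center
# on a flight at 70 m above the datum
d = relief_displacement(40, 20, 70)  # ~11.4 mm of displacement on the image
```

Note how displacement grows toward the photo edges (larger r) and shrinks with higher flying height, which is why elevation correction against a DSM matters most for tall features far from nadir.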
What is photogrammetry?
Photogrammetry is, simply put, the accurate measurement of objects and surfaces from photographs and other digital images.
What types of distortion does remotely sensed imagery have in its raw form?
Remotely sensed imagery contains geometric distortions such as perspective distortion, field-of-view distortion, lens distortion, earth curvature, radial displacement, and scanning distortion. These are affected by the sensor or lens being used, the focal length of that lens, and the method by which the sensor captures images. Other distortions are caused by the angle at which objects are photographed. For example, from above, a tall building might appear to lean at an angle, whereas it should really be just the outline of the building as seen from straight above.
What is orthorectification? What does it accomplish?
Orthorectification is the way remotely sensed images are corrected for distortion. This allows for the creation of an accurate orthoimage that can be used to produce a map.
What is the Ortho Mapping Suite in ArcPro? How does it relate to UAS imagery?
The Ortho Mapping Suite in ArcPro is a set of tools that allows the user to create orthorectified images and products from UAS or satellite imagery. UAS imagery is often captured looking straight down and used for creating orthomosaics or digital surface models; the Ortho Mapping Suite is tailored specifically for these types of applications.
What is Bundle Block Adjustment?
Bundle block adjustment uses ground control and tie point information to adjust the exterior orientation of each image so adjacent images align properly. Once this is done, all the images are then adjusted to fit the ground. This produces the "best statistical fit" between all of the images.
What is the advantage of using this method? Is it perfect?
The advantage of using this method is the simplicity of combining all of the images into one, where the best statistical fit is used. This method is not perfect, and the process provides the user with a table of residual errors once the process is complete. The user can delete the points with high residual error, or manually move the point in error. The program will redo the adjustment until the overall error and residual error are within an acceptable range.
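The residual-inspection loop described above can be sketched in a few lines of Python. The point names, residual values, and the flagging threshold below are illustrative assumptions, not values from any actual adjustment report:

```python
import math

# Hypothetical tie-point residuals (in pixels) from a bundle block
# adjustment report; names and values are made up for illustration.
residuals = {"pt_01": 0.42, "pt_02": 0.38, "pt_03": 2.91, "pt_04": 0.51}

def rmse(values):
    """Root-mean-square error of a collection of residuals."""
    return math.sqrt(sum(v * v for v in values) / len(values))

# Flag points whose residual exceeds the overall RMSE; the analyst would
# delete or re-measure these, then rerun the adjustment until the errors
# fall within an acceptable range.
overall = rmse(list(residuals.values()))
flagged = [p for p, r in residuals.items() if r > overall]
```

Here only `pt_03` would be flagged, mirroring the delete-or-reposition workflow the software table supports.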
Part 2 (Methods):
What key characteristics should go into folder and file naming conventions?
Some key characteristics that should go into folder and file naming include the date the mission was flown, the sensor on board the aircraft, the altitude flown, ideally the mission location, and the type of file that it is. For example, if I flew a mapping mission over my parents' property on May 20th, 2018 using a DJI Inspire with the Zenmuse X5, I might name the file 20180520_zenmusex5_50m_yoderproperty_ortho.tiff or something similar with a varying location name.
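A small helper can enforce a convention like this so names stay consistent across a project. This is a sketch of my own, not part of any lab software; the function name and field order are assumptions:

```python
from datetime import date

def mission_filename(flown: date, sensor: str, altitude_m: int,
                     site: str, product: str, ext: str = "tiff") -> str:
    """Build a date_sensor_altitude_site_product file name.

    Parts are lowercased and stripped of spaces so the resulting
    names stay shell- and URL-safe.
    """
    parts = [flown.strftime("%Y%m%d"),
             sensor.replace(" ", "").lower(),
             f"{altitude_m}m",
             site.replace(" ", "").lower(),
             product.lower()]
    return "_".join(parts) + "." + ext

# Reproduces the example name from the text:
name = mission_filename(date(2018, 5, 20), "Zenmuse X5", 50,
                        "Yoder Property", "ortho")
# → "20180520_zenmusex5_50m_yoderproperty_ortho.tiff"
```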
Why is file management so key in working with UAS data?
File management is so key in working with UAS data because of the many file types and extensions used, as well as when it comes to sharing data with other persons or entities. Proper file management would make it easier for other parties to know what they are looking at.
What key forms of metadata should be associated with every UAS mission? Create a table that provides the key metadata for the data you are working with.
Metadata such as date flown, UAS platform, sensor, altitude captured, ground control GPS, coordinate system, and weather conditions should be included with every UAS mission.
Date Flown: Nov 8th, 2018
UAS Platform: Yuneec H520
Sensor: Yuneec E90
Altitude Flown: 70m
Ground Control GPS: Propeller
Ground Control Coordinates: NAD83(2011) UTM Zone 16
UAS Coordinates: WGS 84 DD
Pilot: Joseph Hupy
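For illustration, the same metadata can travel with the imagery as a small structured record; the class and field names here are my own invention, not a standard:

```python
from dataclasses import dataclass, asdict

@dataclass
class MissionMetadata:
    """Minimal record of the metadata fields listed above; extend with
    weather or checkpoint info as a project requires."""
    date_flown: str
    platform: str
    sensor: str
    altitude_m: int
    gcp_gps: str
    gcp_crs: str
    uas_crs: str
    pilot: str

mission = MissionMetadata(
    date_flown="2018-11-08", platform="Yuneec H520", sensor="Yuneec E90",
    altitude_m=70, gcp_gps="Propeller",
    gcp_crs="NAD83(2011) UTM Zone 16", uas_crs="WGS 84 DD",
    pilot="Joseph Hupy")

# asdict() makes the record easy to write out as JSON alongside the data.
record = asdict(mission)
```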
Part 3 (Results):
Describe your maps in detail. Discuss their quality, and where you see issues in the maps. Are there areas on the map where the data quality is poor or missing?
Overall, I would say the map is not bad. It is certainly not the most accurate map that could be produced, but it is serviceable. Some quality issues I notice are in the wooded section of the map, where there is quite a bit of discontinuity between areas of trees. Another issue is around the edges where ArcGIS Pro rectified the images: the borders are abrupt and not entirely straight. Although we are not using that part of the map, it is still part of the map. The subject area over the center house, though, is accurate and turned out quite well; in ArcGIS Pro, you can zoom all the way in and see individual leaves on the ground.
Figure 14: ArcPro Map
The initial compilation only took 2 minutes and 8 seconds; however, the block adjustment took 1 hour 4 minutes and 44 seconds to complete.
Initial tie points 2:08
Block adjustment 1:04:44
Part 4 (Conclusions):
Summarize the Orthomosaic Tool.
The Orthomosaic Tool allows you to complete the process of creating a photogrammetrically correct, orthorectified image. The tool performs color balancing and seamline generation based on the settings that you as the user choose from the menu. For the final orthomosaic output, the user is able to choose the file format as well as pixel size and other compression options.
Summarize the process in terms of time invested and quality of output.
In summary, for a few hours of time between flying the mission and working through the ArcGIS Pro software, a user can create an orthomosaic and use it to make maps and do other data processing. Depending on the resolution at which the images were captured, the overall quality of the output could be extremely high or mediocre; altogether, ArcGIS Pro will produce results comparable in quality to the data it was originally given.
Think of what was discussed with this orthomosaic in terms of accuracy. How might a higher resolution DSM (From LiDAR) make this more accurate? Why might this approach not work in a dynamic environment such as a mine?
A higher resolution digital surface model (DSM) could make this more accurate because of the elevation rectification process. One of the distortions mentioned is relief displacement, which is distortion caused by variable elevation above or below the datum. This causes a slight shift in each image's position, which is corrected based on the DSM, so a higher quality DSM would more accurately rectify the images. This approach might not work in a dynamic environment such as a mine because of certain terrain features, like a large quantity of trees, that would affect the LiDAR dataset, and because in an actively worked mine the surface changes between the LiDAR collection and each flight, so the reference DSM quickly becomes outdated.
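The relief-displacement relationship mentioned above is commonly written d = r·h/H, where r is the radial distance of the feature from the photo nadir, h is the feature's height above the datum, and H is the flying height above the datum. A worked sketch with illustrative numbers:

```python
def relief_displacement(radial_dist, height, flying_height):
    """Classic relief-displacement relation d = r * h / H.

    radial_dist and the returned displacement share the same image
    units; height and flying_height share the same ground units.
    """
    return radial_dist * height / flying_height

# A 10 m tall feature imaged 60 mm from nadir on a flight at 90 m AGL
# (numbers chosen for illustration; 90 m matches the mission altitude):
d = relief_displacement(60.0, 10.0, 90.0)  # displacement in mm on the image
```

The taller the feature and the farther it sits from nadir, the larger the shift the DSM-based rectification has to remove.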
Sunday, January 27, 2019
Building a Map with UAS Data
Part 1 (Introduction):
Proper cartographic skills are essential in working with UAS data because of the people who will be viewing your data or your maps. If the people who are viewing your maps do not know what they are looking at, the scale of what they are looking at, where the data is located, and other such information, the data is useless to them.
Some items required to turn a drawing or aerial image into a map are a scale bar, a locator map, and a north arrow. Aerial images are not inherently maps. They have to be turned into maps for that classification to be applicable.
Spatial patterns of data can show a reader or user how effective UAS data can be in that area. For example, with UAS data, we can easily gather images and data that depict spatial patterns in areas such as cities, layouts of towns, distribution of trees in a forest, utilization of farmland, and other areas that can have spatial patterns.
The objective of this lab is to create a map that is functional and distinguishable from a generic image taken from the air.
Part 2 (Methods):
What key characteristics should go into folder and file naming conventions?
File names should include important information regarding the data in the file. For instance, if you are working with data that has a digital terrain model, an orthomosaic, and the ground control points, and all the data was taken on January 15th, 2019 at the "Oxford mines", it would be helpful to include that. Example file names could be 20190115_oxfordmine_dtm, 20190115_oxfordmine_mosaic, and 20190115_oxfordmine_gcp. The first two parts of the file naming would aid in organization and grouping of the data, and the last part would allow you to easily differentiate between the data.
Why is file management so key in working with UAS data?
File management is so key in working with UAS data because of the many file types and extensions used, as well as when it comes to sharing data with other persons or entities. Proper file management would make it easier for other parties to know what they are looking at.
What key forms of metadata should be associated with every UAS mission?
Metadata such as date flown, UAS platform, sensor, altitude captured, ground control GPS, coordinate system, and weather conditions should be included with every UAS mission.
Create a table that provides the key metadata for the data you are working with.
Date Flown: June 13th, 2017
UAS Platform: M600 Pro
Sensor: Zenmuse X5
Altitude Flown: 70m
Ground Control GPS: Trimble UX5
Ground Control Coordinates: WGS84 UTM Zone 16
UAS Coordinates: WGS 84 DD
Pilot: Peter Menet
Add a basemap of your choice. What basemap did you use? Why?
The basemap I chose was the National Geographic World Map because of some of the terrain and natural surface features, such as lakes. The basemap was not pivotal in the map making process.
What is the difference between a DSM and DEM?
A digital surface model (DSM) represents the scene from the top down and shows the tops of buildings, trees, and other objects. A digital elevation model (DEM), however, attempts to exclude those features, interpolate where the bare surface of the earth is, and create an elevation model based on that.
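One way to see the relationship: subtracting a DEM from a DSM leaves only the above-ground features, often called a normalized DSM (nDSM). A toy example with made-up values:

```python
import numpy as np

# Toy 3x3 elevation grids in meters; values are illustrative only.
dsm = np.array([[285.0, 290.0, 285.0],
                [285.0, 295.0, 285.0],
                [285.0, 290.0, 285.0]])   # surface incl. trees/buildings
dem = np.full((3, 3), 285.0)              # interpolated bare earth

# The difference is the height of above-ground features (the nDSM).
ndsm = dsm - dem
tallest = float(ndsm.max())  # tallest feature height, in meters
```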
Go into the Properties for the DSM and record the following descriptive statistics.
Cell Size, Units, Projection, Highest Elevation, Lowest Elevation. Enter those statistics into a table. Why are these important?
Cell size (X, Y): 0.02077 m, 0.02077 m
Units: meters
Projection: WGS_1984_UTM_Zone_16N
Highest elevation: 323.089 m
Lowest elevation: 281.047 m
These statistics are important for two reasons. First, they define the projection of the data, which matters because if the data is projected incorrectly, any analysis derived from it is inaccurate. Second, the cell size, elevation values, and units give scale to the data. With this information, we can determine distances exactly, as well as project the data vertically based on the elevation values.
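Using the statistics from the table above, pixel counts convert directly to ground distance, and the elevation extremes give the site's total relief. A quick worked computation:

```python
# Descriptive statistics recorded above.
cell_size_m = 0.02077              # ground size of one pixel (X and Y)
highest_m, lowest_m = 323.089, 281.047

def ground_distance(n_pixels, cell=cell_size_m):
    """Convert a pixel count into meters on the ground."""
    return n_pixels * cell

relief_m = highest_m - lowest_m    # total relief across the site, ~42 m
width_m = ground_distance(1000)    # 1000 px spans roughly 20.77 m
```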
Generate a Hillshade for the DSM. Then set the original DSM to a color ramp of your choice and set its transparency to your choice over the shaded DSM. What does hillshading do toward being able to visualize relief and topography?
Hillshading allows you to differentiate elevations based on color changes. As seen in Figure 9, the areas depicted in red are higher in elevation, while those toward green are lower. This lets you build a sight picture of the area's elevation changes at a glance.
Figure 9: Hillshade on DSM
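For readers curious what a hillshade actually computes, here is a minimal gradient-based sketch: illumination is the cosine of the angle between the sun vector and the surface normal. This is a generic illustration, not ArcGIS Pro's exact algorithm, and the azimuth/altitude defaults and aspect convention are assumptions:

```python
import numpy as np

def hillshade(z, cell=1.0, azimuth_deg=315.0, altitude_deg=45.0):
    """Shade an elevation grid z (meters) with a light source at the
    given azimuth and altitude; returns values in [0, 1]."""
    az = np.radians(360.0 - azimuth_deg + 90.0)   # compass -> math angle
    alt = np.radians(altitude_deg)
    dz_dy, dz_dx = np.gradient(z, cell)           # surface slopes
    slope = np.arctan(np.hypot(dz_dx, dz_dy))
    aspect = np.arctan2(-dz_dx, dz_dy)
    shaded = (np.sin(alt) * np.cos(slope)
              + np.cos(alt) * np.sin(slope) * np.cos(az - aspect))
    return np.clip(shaded, 0.0, 1.0)

# A flat surface is lit uniformly at sin(altitude):
flat = hillshade(np.zeros((4, 4)))
```

Slopes facing the light come out brighter than one facing away, which is what creates the relief effect under the color ramp.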
If you look at Figure 10, the left area shown in red on the DSM, which tells us the elevation is higher there, corresponds to the mosaic beneath. Likewise, the white circle on the mosaic appears to be some sort of silo or storage bin. On the color-shaded DSM, the top of that circle shows solid red, which agrees, as those storage containers tend to be tall.
Figure 10: Swipe tool between DSM and mosaic
I believe the purpose of vertical exaggeration is to visually accentuate differences in elevation. There is a bit of vertical exaggeration in Figure 11, and as you can see, the differences in terrain are more pronounced than they normally would be. I set the vertical exaggeration based on extent, so that it is not too extreme but is still enough to stand out to the eye.
Figure 11: Vertical exaggeration
The color ramp I chose runs from a darker green to a bright red, because elevation changes are often represented with lower areas in green and higher areas in red. This is also true of the Ground Proximity Warning Systems (GPWS) used in manned aircraft: higher terrain close to you is depicted in red, with lower terrain safely underneath you depicted in green. The green in the image, as in those other applications, also gives the appearance of grass or other earthy terrain.
What are the advantages of using ArcScene to view UAS DSM data vs. the overhead shaded relief in ArcMap? What are the disadvantages?
Advantages of using ArcScene as opposed to the overhead view include a better ability to see the relief in the image. In Figure 11, ArcScene makes it easy to visualize the scene because of its 3D view. A disadvantage, I would argue, is accuracy. With ArcMap, it is easy to use the identify tool and view the attribute data of any one point on the DSM to get actual elevation values. ArcScene is used more as a visual aid to represent the data and the scene, and there are areas where the 3D rendering does not blend with the imagery, producing sharp turns and edges that do not appear exactly as they would in real life.
Is this export a map? Why or why not?
The export shown in Figure 11 is not a map. A map needs scale, and this image includes no scale bar or any other indication of size or direction; therefore it is not a map. Figures 12 and 13 below are maps, and they include all of these criteria. The first is a map of the mosaic, and the second is a map of the DSM of the same geographic location.
Figure 12: Map of orthomosaic
Figure 13: Map of DSM
One thing that makes UAS data useful as a tool to the cartographer and GIS user is the quality of data that can be captured with a UAS. Between the digital surface model, which gives us elevation differences, and the orthomosaic, which gives us terrain and ground imagery, these forms of data can be very useful to a cartographer who needs or wants to map a certain area. For the GIS user, both this data and other data that can be collected with sensors mounted on a UAS provide a wide variety of attribute data to make a map smart.
One limitation of this data is quality. The sensor and altitude determine a lot about the quality of the data, both in resolution and in accuracy. Another limitation is consistency: these images were stitched together, and there could be inconsistencies in the stitching process. The user should know these limitations, as well as the information that should be included in the metadata.
Other data this could be combined with to make it more useful might be still images taken from the ground to help the user identify the scene. I would also argue that a wider range of elevation measurements would make this data more useful overall. The ground control points used served the original purpose of the data, but the data could have supported a wider variety of purposes had it included a larger quantity of more accurate ground control points and checkpoints.