Introduction
When it comes to analyzing data collected from Unmanned Aerial Systems (UAS), there are generally two options. The first is to look at the raw data on an individual basis: images, metadata, or a variety of other files generated directly from a sensor. This method can be tedious and time consuming, but you are looking at exactly what was captured. The second is what those in the industry often use and showcase: “orthomosaics,” which are reconstructions of the captured area into a single raster file. These allow for easier analysis of the area, at the cost of working with altered data. In this post I want to look at current workflows for analyzing both processed and raw data in a way that gets you the best of both worlds, while outlining the considerations necessary for each.
How raw data can be tainted
By “raw data” I mean the data as-is, in the state it is initially retrieved off the sensor. This can be images on an SD card, positioning data pulled off an internal drive, or files on any other storage medium. As an example of how raw data can be tainted, consider capturing data with a standard RGB camera integrated into a UAS.
Cameras have a variety of settings, some the user can change and some innate to the sensor: resolution, color balance, shutter speed, type of shutter, lens distortion, and the list goes on. All of these options impact the image that is captured. An image taken in the same place, at the same time, with the same camera, can be very different depending on them. We accept this as part of the ability to capture the world digitally, and it makes keeping notes about your sensor setup important, to the level of detail relevant to your application.
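If you want to automate those notes, most capture settings are embedded in each image’s EXIF metadata. Below is a minimal sketch using Pillow to pull a few common fields; which tags are present varies by camera, and the filename here is hypothetical.

```python
# Minimal sketch: log capture settings from image EXIF using Pillow.
# Tag availability varies by camera; treat this as a starting point.
from PIL import Image
from PIL.ExifTags import TAGS

def log_capture_settings(path):
    # _getexif() returns a flattened {tag_id: value} dict (or None)
    raw = Image.open(path)._getexif() or {}
    settings = {TAGS.get(tag_id, tag_id): value for tag_id, value in raw.items()}
    # Fields commonly useful when keeping notes on sensor setup
    for key in ("Model", "ExposureTime", "ISOSpeedRatings", "FNumber", "WhiteBalance"):
        print(f"{key}: {settings.get(key, 'not recorded')}")

log_capture_settings("DJI_0001.JPG")  # hypothetical filename
```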
Depending on what you plan to analyze, standardizing data collection can be very important. Ground targets can help ensure consistency between collections. Targets specially calibrated to reflect light a certain way, either as grey or colorized targets, can be used to make sure images are captured as similarly as possible. Having targets like this also allows you to “correct,” or process, the raw data. While this technically taints the raw data, it does so in a consistent, repeatable, and understandable way.
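To make the idea of target-based correction concrete, here is a simplified numpy sketch of a one-point correction, assuming a grey target of known reflectance is visible in the frame. Real radiometric pipelines are more involved; this only illustrates the consistent, repeatable adjustment described above, and the target location is hypothetical.

```python
# Simplified sketch of target-based correction, assuming a grey reference
# target of known reflectance is visible in the image.
import numpy as np

def correct_with_grey_target(image, target_pixels, known_reflectance=0.18):
    """Scale raw digital numbers so the target region hits its known reflectance.

    image             -- raw image as a float array of digital numbers
    target_pixels     -- slices/mask selecting the grey target in the image
    known_reflectance -- lab-calibrated reflectance of the target
    """
    measured = image[target_pixels].mean()
    gain = known_reflectance / measured
    return image * gain  # a consistent, repeatable, documented adjustment

# Hypothetical usage: target occupies rows 100-120, cols 200-220
raw = np.random.default_rng(0).uniform(0, 255, (480, 640))
corrected = correct_with_grey_target(raw, (slice(100, 120), slice(200, 220)))
```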
Issues with orthomosaics
Beyond the considerations around data collection, orthomosaics come with considerations of their own. As noted in the introduction, orthomosaics are formed from altered data. They are created by algorithms that find matching features across images, and these algorithms work in a variety of ways, with a variety of priorities and tradeoffs.
If you look at an orthomosaic closely, you will be able to find areas of stitching: areas that are not smoothly aligned with each other. Many applications offer software settings to improve the appearance of an orthomosaic; however, while visually improving it, these further distort the data from reality. At the cost of altering that data, you can assess a much larger area in much less time. For operations that need an overhead view of the whole area at once, or that are less concerned with individual features, orthomosaics certainly still have a place.
Workflow of mixed data analysis
So, there are pros and cons to working with either raw data or orthomosaics, but is there a best of both worlds? The answer is yes, and it is quite simple: if you can have both, use both. This is not a case where you must choose one or the other. How to use them together, however, is a bit more complicated, mainly because not all data is collected or processed the same way. For this evaluation we will keep it simple. Below are workflows for accomplishing this using RGB imagery that is geolocated via two different means.
Geotagged Images
We’ll start with a common situation: a camera sensor that geotags images directly in the metadata of the image file. This is common for camera sensors built into a UAS or with first-party integrations. Once you complete a flight, retrieve your geotagged images from the UAS or sensor and process them in your software of choice. I won’t lay out an exact setup here, to keep this post generalized. However, as we addressed earlier, the more specific your application, the more notes you should take about how data is captured and processed, as they may be important in the future.
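If you want to verify the geotags before processing, one quick option is to read the GPS tags out of the EXIF metadata yourself. The sketch below uses Pillow and assumes standard EXIF GPS tags; the filenames are placeholders.

```python
# Sketch: read GPS tags from geotagged images and dump them to a CSV
# for a quick sanity check before processing.
import csv
from PIL import Image
from PIL.ExifTags import GPSTAGS

def to_degrees(dms, ref):
    # EXIF stores degrees/minutes/seconds as rationals
    deg = float(dms[0]) + float(dms[1]) / 60 + float(dms[2]) / 3600
    return -deg if ref in ("S", "W") else deg

def gps_from_image(path):
    raw = Image.open(path)._getexif() or {}
    gps = {GPSTAGS.get(k, k): v for k, v in raw.get(34853, {}).items()}  # 34853 = GPSInfo
    lat = to_degrees(gps["GPSLatitude"], gps["GPSLatitudeRef"])
    lon = to_degrees(gps["GPSLongitude"], gps["GPSLongitudeRef"])
    return lat, lon

with open("image_locations.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["image", "latitude", "longitude"])
    for name in ["DJI_0001.JPG", "DJI_0002.JPG"]:  # hypothetical filenames
        writer.writerow([name, *gps_from_image(name)])
```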
Once you have processed your orthomosaic, open QGIS and start a new project. If you are unfamiliar with QGIS, know that it is a free and open-source geographic information system (GIS) that is common in the industry; there is plenty of information about it online, and you can view their website here. Import your orthomosaic into the project and make sure it looks as expected. Next, we’ll need to install a plugin called “ImportPhotos”. As the name describes, this plugin allows us to import our raw geotagged photos into QGIS. It is also important to note any discrepancies between the coordinate systems of your images and your orthomosaic. A mismatch is unlikely, but it is always good to double check: if they are in different coordinate systems for some reason, your data may not align.
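If you want to double check programmatically, a few lines in the QGIS Python console can list the coordinate reference system of every loaded layer. This is a generic PyQGIS sketch, not specific to ImportPhotos.

```python
# Run in the QGIS Python console: confirm every loaded layer
# shares the same coordinate reference system.
from qgis.core import QgsProject

layers = QgsProject.instance().mapLayers().values()
crs_ids = {layer.name(): layer.crs().authid() for layer in layers}
for name, crs in crs_ids.items():
    print(f"{name}: {crs}")

if len(set(crs_ids.values())) > 1:
    print("Warning: layers are in different coordinate systems; data may not align.")
```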
Now we have our orthomosaic and raw imagery georeferenced with each other in the same project. With this setup you can toggle the visibility of the raw imagery while analyzing the orthomosaic, then quickly and easily compare what you see there against the raw data. Additional photos can be appended to the project at any time.
Image A (left): Georeferenced images overlaid on the processed orthomosaic.
Image B (right): Image view within the QGIS project.
Non-Geotagged Images
How do we accomplish this objective when we do not have geotagged images? There are a few ways, depending on how the images are taken and processed. One method relies on a separate file containing the location information recorded each time the camera is triggered. It is then possible to edit the image metadata to geotag the files, but in this scenario it is more common to simply associate the location data with the images in software, as shown in the sketch after the images below. This still allows your images to produce a geolocated orthomosaic. From there it is possible to compare raw data and an orthomosaic in a GIS, just as we did above.
Image C (left): Pix4D Image Properties Editor. (Credit: Pix4D)
Image D (right): Pix4D Geolocation File options. (Credit: Pix4D)
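If your software does not handle the association for you, matching images to a positioning log by timestamp is straightforward to script. The sketch below assumes a simple log format of epoch time, latitude, longitude, and altitude, and writes a Pix4D-style geolocation CSV; the filenames and trigger times are hypothetical, so adapt the parsing to whatever your positioning system actually outputs.

```python
# Rough sketch: associate a positioning log with images by timestamp.
import csv

def load_log(path):
    with open(path, newline="") as f:
        return [(float(t), float(lat), float(lon), float(alt))
                for t, lat, lon, alt in csv.reader(f)]

def nearest_fix(log, image_time):
    # Pick the log entry closest in time to the image trigger
    return min(log, key=lambda row: abs(row[0] - image_time))

log = load_log("positions.csv")  # hypothetical log file
images = {"IMG_0001.JPG": 1700000001.2, "IMG_0002.JPG": 1700000003.4}  # name -> trigger time

with open("geolocation.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["image", "latitude", "longitude", "altitude"])
    for name, t in images.items():
        _, lat, lon, alt = nearest_fix(log, t)
        writer.writerow([name, lat, lon, alt])
```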
Another option is a bit different and, to an extent, provides more information than the one above. Some software, like the GRYFN Processing Tool, can output something called (or similar to) an “EOP map”. An EOP map shows which image was used in each part of the orthomosaic’s creation, letting you know for sure which image you should be looking at. This is also possible with geotagged images, but it would be the only way in the event there is no location information for an image at all. In both circumstances an EOP map is very helpful, because UAS data collection involves a significant amount of overlap; without this information, a specific part of an orthomosaic could have come from any of a variety of images.
Image E: Example of what an EOP map could look like with numbers representing the image number used in the orthomosaic creation. (Processed in GRYFN Processing Tool)
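To illustrate how an EOP map might be used programmatically, here is a sketch that assumes the map is delivered as a raster whose pixel values are source-image numbers; the actual GRYFN output format may differ, and the path and coordinates are hypothetical. With that assumption, rasterio can sample the value at any coordinate.

```python
# Hedged sketch: query an EOP map at a coordinate, assuming it is a raster
# whose pixel values are source-image numbers.
import rasterio

def source_image_at(eop_map_path, x, y):
    with rasterio.open(eop_map_path) as src:
        value = next(src.sample([(x, y)]))[0]  # first band value at (x, y)
    return int(value)

# Hypothetical coordinates in the map's CRS
print(source_image_at("eop_map.tif", 500123.4, 4421987.6))
```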
Conclusion
This was an introductory look at working with imagery in different ways. One day I would love to revisit this topic with even more types of data and more approaches to viewing both the raw and the processed data. What I hope you take away is that data needs to be handled with care. As remote sensing hardware gets easier to use, and more people get into imagery and data analysis, it becomes more important for everyone to understand how data can be impacted by the ways we handle it, present it, and use it. Data needs to be approached from a variety of perspectives, as there is no single best way of working with it. However, I hope this post has helped showcase ways to get the best of both worlds from images collected by a UAS.
And on that note of perspectives, I will leave you with this: look back at Image E, within image section 191. Do you see the area of pink? Below is another perspective of that field. Would you have been able to guess what it was?