Summary

This is an ongoing personal project to collect, process, and display orthomosaic imagery captured with a DJI Mavic Mini SE drone. Mission planning and execution are handled with the Dronelink web and mobile applications, and currently all mission flights are flown with the Mavic Mini SE. After collection, the imagery is processed into orthomosaics using the free, command line version of OpenDroneMap, run via Docker. The orthomosaics are then compressed, converted into Cloud Optimized GeoTIFFs (COGs), and uploaded to S3 cloud storage. Finally, the data are displayed using OpenLayers, which supports COGs.

COGs allow for easy and efficient storage and retrieval of raster data without the need for a dedicated imagery server. Data are retrieved based on the current view scale, limiting the data transfer required to view the imagery, while the full source resolution remains available when zoomed in. This method also requires much less server-side storage than raster tiles, which must be pre-generated and stored at every desired scale, at the cost of some client-side processing.

Drone - Mavic Mini SE

The drone used for imagery collection is the Mavic Mini SE, DJI's lightweight drone class. It weighs 249 grams, right under the FAA's mass limit for non-commercial registration. I primarily hand launch and land due to the poor prop ground clearance. The default flight control app, DJI Fly, is barebones and does not include advanced functions for mission planning or waypoints. However, the Mavic Mini has an open SDK that third party vendors can use in their own flight applications.

Mission Planning

The Dronelink service is used for mission planning and execution. This service is not free and requires a one time purchase for non-commercial use. Note that I am operating under a legacy license which includes more advanced features such as Terrain Follow; I believe this now requires a higher level license to access. Dronelink supports many drones, including the Mavic Mini, by using a Virtual Stick method, which sends virtual stick commands to the drone through the controller as if they were issued by a human operator. This approach requires a constant connection between the phone, controller, and drone, and can cause input delays and loss of automated control if the link is lost. However, it allows more advanced missions than are normally available on cheap/simple drones like the Mini, which has no native support for uploading waypoints to the drone.

Missions are planned on a desktop using the Dronelink web browser application. This application supports a variety of mission types, but the Map option is used for planning.

Upon opening the Map option, a small plan area is drawn on the screen; it can be adjusted to the desired mission area using the polygon corner handles. Next, the desired parameters for camera type, flight altitude, altitude reference, overlaps, and max speed are selected. The Ground Sample Distance and flight lines update as parameters are adjusted. Example from a flight near Via Gaitero road in Goleta:

While the software supports multi-battery missions, the flight area was adjusted so that a single battery can be used for each planned mission. The mission estimation window gives the estimated battery use:

The ground sample distance goal for most missions is 0.75 in/px, and front/side overlaps of 80/70% are used. For the Mini this means flying at 170 ft altitude, which also puts the drone well above buildings, trees, and utility poles in the area. Using the Terrain Follow reference option allows the drone to maintain a constant height above the terrain, ensuring correct GSD and overlap for the entire mission. In the mission preview window, more detailed estimates can be viewed, including 3D flight lines adjusted for terrain:

However, using Terrain Follow requires that the drone be launched from within a specific pre-planned area, so it's important to place it in a safe and easy to find location. Fortunately, this location can be adjusted using the mobile application if needed.
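
As a rough cross-check of the altitude and GSD numbers above, the relationship between flight altitude and ground sample distance can be estimated from the camera geometry. The sketch below is only a back-of-the-envelope check; the sensor width, focal length, and image width are commonly published Mavic Mini figures, not values taken from Dronelink, whose estimate should be treated as authoritative.

	# Back-of-the-envelope GSD estimate for a nadir photo:
	#   GSD = (flight height * sensor width) / (focal length * image width)
	# Camera values below are assumed (typical published Mavic Mini specs).
	SENSOR_WIDTH_MM = 6.17   # 1/2.3" sensor, assumed
	FOCAL_LENGTH_MM = 4.5    # approximate real focal length, assumed
	IMAGE_WIDTH_PX = 4000    # 12 MP, 4:3 stills
	
	def gsd_in_per_px(altitude_ft: float) -> float:
	    altitude_mm = altitude_ft * 304.8  # feet to millimeters
	    gsd_mm = (altitude_mm * SENSOR_WIDTH_MM) / (FOCAL_LENGTH_MM * IMAGE_WIDTH_PX)
	    return gsd_mm / 25.4               # millimeters to inches
	
	print(round(gsd_in_per_px(170), 2))    # roughly 0.7 in/px, close to the 0.75 in/px goal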

Mission Execution

Flight execution is straightforward using the Dronelink mobile application, as the application handles all flight input and provides drone telemetry. When using Terrain Follow it can be challenging to find the correct launch point because the application doesn't do a great job of showing your current location, which is compounded when the basemap imagery is old or low quality.

The application always shows a poor connection, presumably due to a weak transmitter on the drone. I also ran into difficulties where the application reported good calibration metrics but was unable to properly start missions due to location problems. This appears to be fixed by always performing a drone calibration when arriving on site.

Geofencing and authorization zones around airports can be troublesome to deal with as they do not match FAA LAANC areas:

In my area, LAANC approval can be automatically received for areas and altitudes where DJI will prevent the drone from flying. Since Dronelink is beholden to the limits set by the DJI SDK, it cannot operate within these zones, which has led to issues executing flights that are allowed by LAANC but not by DJI. I've also run into issues when using Terrain Follow. I believe that in the DJI SDK the drone calculates its altitude in reference to the take-off location, not its current location. This means that if the drone climbs to maintain height above terrain it may hit a DJI enforced altitude ceiling, preventing further mission execution. The solution appears to be taking off from the highest point possible, ensuring that the drone never climbs too high relative to the start location.
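
To make the altitude issue concrete, here is a small sketch of the arithmetic. The ceiling value is hypothetical (DJI zone limits vary and are not taken from this write-up), and the terrain profile is made up.

	# With Terrain Follow, the drone holds a constant height above the ground,
	# so its altitude relative to the take-off point grows with the terrain.
	TARGET_AGL_FT = 170           # height above terrain the mission maintains
	CEILING_ABOVE_HOME_FT = 197   # hypothetical DJI enforced limit relative to take-off (~60 m)
	
	# Terrain heights along the flight line, relative to the take-off point (ft, made up).
	terrain_above_home_ft = [0, 25, 60, 110, 90]
	
	for ground in terrain_above_home_ft:
	    altitude_above_home = ground + TARGET_AGL_FT
	    blocked = altitude_above_home > CEILING_ABOVE_HOME_FT
	    print(f"terrain +{ground:>3} ft -> {altitude_above_home} ft above home, blocked={blocked}")
	
	# Taking off from the highest point keeps the terrain offsets at or below zero,
	# so the drone never needs to climb past the ceiling to hold its target height.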

Both these problems could possibly be solved by going through the DJI authorization process, which I have not done yet.

Orthomosaic Processing

After collection, imagery is processed using the free command line version of OpenDroneMap. This version of ODM runs within a Docker container, which greatly simplifies the install process since all requirements are included within the container.

While the free version does not have a GUI, the command line options are relatively easy to understand. The optional commands are described here. A variety of outputs are supported, including orthomosaics, DSMs, DTMs, meshes, and 3D models. ODM supports loading GCPs, but they must be configured using 3rd party applications. Since I do not have access to a high accuracy GPS unit, I briefly experimented with this process using locations taken from Google imagery as control points, but ran into odd issues with the data not ending up in the correct location.

Here is an example ODM command, run through Docker, to create an ortho, DTM, and DSM at a resolution of 1.905 cm/px, or 0.75 in/px:


	docker run -ti --rm -v path/to/project:/datasets opendronemap/odm --project-path /datasets project --dtm --dsm --time --orthophoto-resolution 1.905

Note that the GeoTIFFs created by ODM caused errors when loaded into OpenLayers, discussed below. This necessitated a lot of troubleshooting of GDAL tools and options to create a compatible file. Eventually, I found that the following operations needed to be performed in GDAL: reprojecting from the source UTM projection to Web Mercator (EPSG:3857), keeping only the three RGB bands, setting a nodata value, and tiling the output with the GoogleMapsCompatible scheme.

Finally, the COGs are compressed using the JPEG format to reduce file size. This results in the following two GDAL commands:


 gdalwarp -s_srs EPSG:32610 -t_srs EPSG:3857 -of GTiff odm_output.tif warptemp.tif
 gdal_translate warptemp.tif outputcog.tif -b 1 -b 2 -b 3 -co TILING_SCHEME=GoogleMapsCompatible -a_nodata 0 -co COMPRESS=JPEG -co QUALITY=65 -of cog
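
To sanity check the result before uploading, the output can be inspected with GDAL's Python bindings. This is a quick verification sketch, not part of the original workflow:

	# Verify the translated file has the expected structure:
	# JPEG compression, three bands, and internal overviews on each band.
	from osgeo import gdal
	
	info = gdal.Info("outputcog.tif", format="json")
	print(info["driverShortName"])                             # GTiff (a COG is still a GeoTIFF)
	print(info["metadata"]["IMAGE_STRUCTURE"]["COMPRESSION"])  # expect JPEG
	for band in info["bands"]:
	    print("band", band["band"], "overviews:", len(band.get("overviews", [])))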
 

Serving Imagery Data - COGs

The orthos are stored and served as Cloud Optimized GeoTIFFs (COGs). This is a GeoTIFF format designed for storage in the cloud; it uses internal tiling and overviews that are fetched with HTTP range requests to minimize data transfer to the client. Instead of loading an entire GeoTIFF, smaller pre-built rasters are returned dynamically based on the current view extent.
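
For example, a COG behind a plain HTTP endpoint can be read piecemeal. The sketch below uses rasterio, which is not part of this project's stack, and a placeholder URL, just to show that only the requested window and overview levels are fetched:

	# Reading a small piece of a COG over HTTP with rasterio.
	# GDAL's /vsicurl layer fetches only the byte ranges needed, not the whole file.
	import rasterio
	from rasterio.windows import Window
	
	url = "https://example-bucket.s3.amazonaws.com/orthos/example_cog.tif"  # placeholder
	
	with rasterio.open(url) as src:
	    print(src.width, src.height, src.count)
	    print("overview levels:", src.overviews(1))  # reduced-resolution copies for zoomed-out views
	    # Read one 512 x 512 window at full resolution; only the tiles under it are requested.
	    data = src.read(window=Window(0, 0, 512, 512))
	    print(data.shape)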

These COGs are stored in an AWS S3 bucket that is, as of writing, non-public. A separate dataset of the ortho extents and attributes is maintained and served by pygeoapi running within Flask. This dataset is used to provide pre-signed, temporary links to the orthomosaics.
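
The pre-signed links themselves can be produced with boto3. The sketch below is a minimal illustration with made-up bucket and key names, not the actual pygeoapi configuration:

	# Generate a temporary, pre-signed URL for a COG stored in a private S3 bucket.
	# Bucket and key names are placeholders.
	import boto3
	
	s3 = boto3.client("s3")
	
	def presigned_cog_url(bucket: str, key: str, expires_in: int = 3600) -> str:
	    """Return a time-limited URL that the web viewer can request the COG through."""
	    return s3.generate_presigned_url(
	        "get_object",
	        Params={"Bucket": bucket, "Key": key},
	        ExpiresIn=expires_in,
	    )
	
	print(presigned_cog_url("my-ortho-bucket", "orthos/example_cog.tif"))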

Viewing the Imagery Data - OpenLayers

Finally, the orthos are displayed within OpenLayers, an open source JavaScript mapping library that supports many GIS file formats. As discussed above, this proved challenging since the GeoTIFFs generated by ODM threw exceptions when loaded.

The library does not display the COGs directly; instead they are internally converted to raster tiles by the client. This likely carries a client-side performance cost compared to pre-generating the raster tiles on the server and serving them. In exchange, this method requires much less cloud storage than pre-generated raster tiles while maintaining the ability to view the full source resolution.

Unfortunately, the COGs display JPEG artifacts around the perimeter of the datasets. While these artifacts are hardly visible in the full resolution, zoomed in views, they are very apparent in the lower resolution internal overviews. I attempted a variety of GDAL options to remove them, but it appears they will always be present as long as JPEG compression is used. I do not intend to leave the images uncompressed, as the compression drastically reduces file size with minimal loss of information for qualitative analysis.

Final Thoughts

Developing the workflow, datasets, and application for this project required a lot of learning and troubleshooting, but ultimately it resulted in an interesting application for viewing high resolution imagery. While working through the workflow, I also developed my own custom GUI, using PyQt, for generating the ODM commands and uploading the orthos and extent/attribute data. I also started building a QField application for guiding and recording flight information. Once completed, these will likely get their own project pages.

There is a lot of room for improvement throughout the process. In particular, I would like to make the COGs accessible from a GeoServer install, create a viewer for the 3D data that can be linked from popups, and add a time slider using the collection date. I also intend to make a separate viewer for non-orthomosaic drone imagery that uses location and pitch information to provide view boxes.

I plan to continue to collect and display orthomosaics, and as the number of collections grows, I will likely need to make various improvements to the web viewer.