Optimizing aerial imagery collection and processing parameters for drone-based individual tree mapping in structurally complex conifer forests

This is a Preprint; the peer-reviewed, published version is available at https://doi.org/10.1111/2041-210X.13860. This is version 9 of this Preprint.

Authors

Derek Jon Nies Young, Michael J. Koontz, Jonah Weeks

Abstract

Recent advances in remotely piloted aerial systems (“drones”) and imagery processing enable individual tree mapping in forests across broad areas with low-cost equipment and minimal ground-based data collection. One such method involves collecting many partially overlapping aerial photos, processing them using “structure from motion” (SfM) photogrammetry to create a digital 3D representation, and using the 3D model to detect individual trees. SfM-based forest mapping involves myriad decisions surrounding methods and parameters for imagery acquisition and processing, but it is unclear how these individual decisions or their combinations impact the quality of the resulting forest inventories.

We collected and processed drone imagery of a moderate-density, structurally complex mixed-conifer stand. We tested 22 imagery collection methods (altering flight altitude, camera pitch, and image overlap), 12 imagery processing parameterizations (image resolutions and depth map filtering intensities), and 286 tree detection methods (algorithms and their parameterizations) to create 7,568 tree maps. We compared these maps to a 3.23-ha ground reference map of 1,775 trees > 5 m tall that we created using traditional field survey methods.
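As a rough illustration of how flight altitude and image overlap interact during mission planning, the sketch below estimates the along-track ground footprint of a single nadir image and the photo spacing needed to achieve a target forward overlap. This is not the authors' code; the focal length and sensor dimension are placeholder values and would need to be replaced with the specifications of the camera actually flown.

def footprint_and_spacing(altitude_m, target_overlap, focal_mm=8.8, sensor_mm=8.8):
    # Pinhole-camera approximation: ground footprint = altitude * sensor size / focal length.
    footprint_m = altitude_m * sensor_mm / focal_mm
    # Distance between successive photo centers that yields the desired forward overlap.
    spacing_m = footprint_m * (1.0 - target_overlap)
    return footprint_m, spacing_m

# Example: 120 m altitude with 90% forward overlap, as in the best-performing protocol.
footprint, spacing = footprint_and_spacing(120, 0.90)
print(f"footprint ~{footprint:.0f} m; trigger a photo every ~{spacing:.0f} m")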

The accuracy of individual tree detection (ITD) and the resulting tree maps was generally maximized by collecting imagery at high altitude (120 m) with at least 90% image-to-image overlap, photogrammetrically processing images into a canopy height model (CHM) with a 2-fold upscaling (coarsening) step, and detecting trees from the CHM using a variable window filter after applying a moving-window mean smooth to the CHM. Using this combination of methods, we mapped trees with an accuracy exceeding expectations for structurally complex forests (for overstory trees > 10 m tall, sensitivity = 0.69 and precision = 0.90). Remotely measured tree heights corresponded to ground-measured heights with R2 = 0.95. Accuracy was higher for taller trees and lower for understory trees, and would likely be higher in less dense and less structurally complex stands.
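To make the best-performing detection pipeline concrete, the sketch below applies the three CHM steps named above (2-fold coarsening, moving-window mean smooth, variable window filter) to a canopy height model stored as a NumPy array. It is an illustrative sketch under stated assumptions, not the authors' implementation; in particular, the rule mapping canopy height to search-window radius is a placeholder, not the paper's parameterization.

import numpy as np
from scipy.ndimage import uniform_filter

def detect_treetops(chm, cell_size=0.25, min_height=5.0):
    # 2-fold upscaling (coarsening): average each 2x2 block of CHM pixels.
    h, w = chm.shape
    chm = chm[:h - h % 2, :w - w % 2].reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))
    cell_size *= 2
    # Moving-window mean smooth to suppress spurious within-crown maxima.
    chm = uniform_filter(chm, size=3)
    treetops = []
    for r in range(chm.shape[0]):
        for c in range(chm.shape[1]):
            height = chm[r, c]
            if height < min_height:
                continue
            # Variable window: search radius grows with canopy height
            # (0.06 * height is an assumed placeholder rule).
            radius = max(1, int(round(0.06 * height / cell_size)))
            window = chm[max(0, r - radius):r + radius + 1,
                         max(0, c - radius):c + radius + 1]
            if height >= window.max():
                treetops.append((r, c, float(height)))
    return treetops

A pixel is retained as a treetop only if it is at least as tall as every other pixel within its height-dependent window, which is the essence of a variable window filter.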

Our results may guide others wishing to efficiently produce broad-extent individual-tree maps of conifer forests without investing substantial time tailoring imagery acquisition and processing parameters. The resulting tree maps create opportunities for addressing previously intractable ecological questions and informing forest management.

DOI

https://doi.org/10.32942/osf.io/p7ygu

Subjects

Ecology and Evolutionary Biology, Forest Sciences, Life Sciences, Research Methods in Life Sciences, Terrestrial and Aquatic Ecology

Dates

Published: 2021-09-11 22:40

Last Updated: 2022-04-14 01:36

License

CC-By Attribution-ShareAlike 4.0 International

Additional Metadata

Data and Code Availability Statement:
Data will be published concurrently with publication of the manuscript in an academic journal.

Conflict of interest statement:
DJNY is employed by both the University of California, Davis, and Vibrant Planet, PBC. The other authors do not declare any potential real or perceived conflicts of interest.