This is a Preprint and has not been peer reviewed. This is version 1 of this Preprint.
Authors
Abstract
Automated detection of acoustic signals is crucial for effective monitoring of vocal animals and their habitats across large spatial and temporal scales. Recent advances in deep learning have made high-performance automated detection approaches accessible to more practitioners. However, there are few deep learning approaches that can be implemented natively in R. The 'torch for R' ecosystem has made the use of transfer learning with convolutional neural networks (CNNs) accessible for R users. Here we provide an R package and workflow that use transfer learning for the automated detection of acoustic signals from passive acoustic monitoring (PAM) data collected in Sabah, Malaysia. The package provides functions to create spectrogram images from PAM data, compare the performance of different pre-trained CNN architectures trained on the ImageNet dataset, deploy trained models over directories of sound files, and extract embeddings from trained models. The R programming language remains one of the most commonly used languages among ecologists, and we hope that this package makes deep learning approaches more accessible to this audience. In addition, these models can serve as important benchmarks for more state-of-the-art approaches.
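The workflow the abstract describes (spectrogram creation, transfer learning from ImageNet weights, fine-tuning) maps onto a short 'torch for R' script. The sketch below is illustrative only: it calls tuneR/seewave and torch/torchvision directly rather than gibbonNetR's own functions, and the file paths, frequency band, image size, and two-class setup are assumptions for the example, not the package's actual API or defaults.

```r
# A minimal sketch of the transfer-learning workflow, assuming spectrogram
# images sorted into class-labelled folders. Not gibbonNetR's implementation.
library(tuneR)       # read .wav files
library(seewave)     # compute and plot spectrograms
library(torch)
library(torchvision)

# Step 1: convert a PAM sound clip into a spectrogram image.
# The frequency band (flim, in kHz) and 224 x 224 size are example choices.
wav <- readWave("clips/gibbon_001.wav")
png("spectrograms/train/gibbon/gibbon_001.png", width = 224, height = 224)
spectro(wav, flim = c(0.4, 2), scale = FALSE, grid = FALSE,
        axisX = FALSE, axisY = FALSE)
dev.off()

# Step 2: load an ImageNet-pretrained CNN, freeze the backbone, and
# replace the classification head (here: gibbon vs. noise).
model <- model_resnet18(pretrained = TRUE)
for (par in model$parameters) par$requires_grad_(FALSE)
model$fc <- nn_linear(model$fc$in_features, 2)

# Step 3: stream spectrogram images from class-labelled folders.
to_input <- function(img) {
  img %>% transform_to_tensor() %>% transform_resize(c(224, 224))
}
train_ds <- image_folder_dataset("spectrograms/train", transform = to_input)
train_dl <- dataloader(train_ds, batch_size = 32, shuffle = TRUE)

# Step 4: fine-tune only the new head with a standard training loop.
optimizer <- optim_adam(model$fc$parameters, lr = 1e-3)
criterion <- nn_cross_entropy_loss()
model$train()
for (epoch in 1:5) {
  coro::loop(for (batch in train_dl) {
    optimizer$zero_grad()
    loss <- criterion(model(batch[[1]]), batch[[2]])
    loss$backward()
    optimizer$step()
  })
}
```

Swapping model_resnet18() for another torchvision constructor is how different ImageNet-pretrained architectures could be compared, and embeddings analogous to those the package extracts could be taken from the layer feeding the replaced head.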
DOI
https://doi.org/10.32942/X2G61D
Subjects
Life Sciences
Keywords
Deep learning, Passive acoustic monitoring, gibbon
Dates
Published: 2024-07-14 18:52
License
CC BY Attribution 4.0 International
Additional Metadata
Language:
English
Conflict of interest statement:
None.
Data and Code Availability Statement:
https://github.com/DenaJGibbon/gibbonNetR