Seeing rare birds where there are none: self-rated expertise predicts correct species identification, but also more false rarities

This is a Preprint and has not been peer reviewed. This is version 2 of this Preprint.

Authors

Nils Bouillard, Rachel Louise White, Hazel Jackson, Gail Austen, Julia Schroeder

Abstract

The use of crowdsourced data is growing rapidly, particularly in ornithology. Citizen science contributes greatly to our knowledge; however, little is known about the reliability of data collected in this way. Using an online picture quiz, we found that self-proclaimed expert birders were more likely to misidentify common British bird species as exotic or rare species than people who rated their own expertise more modestly. This finding suggests that records of rare species should always be treated with caution, even if the reporters consider themselves to be experts. In general, however, we show that self-rated expertise in bird identification is a reliable predictor of correct species identification. Collecting data on self-rated expertise is easy and low-cost. We therefore suggest it as a useful tool to statistically account for variability in the bird identification skills of citizen science participants and to improve the accuracy of identification data collected by citizen science projects.
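The abstract's suggestion to account for self-rated expertise statistically could, for example, be implemented as a logistic regression of identification correctness on a self-rated expertise score. The sketch below is illustrative only: the toy data and the column names ("correct", "self_rated_expertise") are assumptions, not the analysis reported in the preprint.

```python
# Illustrative sketch (not the authors' analysis): model the probability of a
# correct species identification as a function of self-rated expertise, so that
# expertise can be included as a covariate when analysing citizen-science records.
import pandas as pd
import statsmodels.formula.api as smf

# Toy data: correct = 1 if the species was identified correctly,
# self_rated_expertise on a hypothetical 1-5 scale.
df = pd.DataFrame({
    "correct":              [1, 0, 0, 1, 1, 1, 0, 1, 1, 0],
    "self_rated_expertise": [5, 1, 4, 3, 2, 5, 1, 4, 3, 2],
})

# Logistic regression: log-odds of a correct answer as a linear function of expertise.
model = smf.logit("correct ~ self_rated_expertise", data=df).fit()
print(model.summary())
```

In a fuller analysis, the same expertise score could enter a mixed model alongside other predictors (for example, species rarity or observer identity as a random effect), but the principle is the same: treat self-rated expertise as a covariate rather than assuming all observers are equally reliable.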

DOI

https://doi.org/10.32942/osf.io/9z63v

Subjects

Animal Sciences, Biodiversity, Life Sciences, Ornithology

Keywords

biodiversity, citizen science, ornithology, species identification, twitching

Dates

Published: 2019-02-04 12:53

Last Updated: 2019-03-12 10:05

License

CC BY-SA 4.0 (Attribution-ShareAlike 4.0 International)