This is a Preprint and has not been peer reviewed. This is version 1 of this Preprint.

Abstract
Hundreds of thousands of peer-reviewed articles and grey-literature reports are published every year in ecology and conservation biology. This ever-growing body of knowledge makes it increasingly difficult for researchers to stay current on new information and to identify knowledge gaps. Here, we argue that Large Language Models (LLMs) such as OpenAI’s GPT-4o mini offer a powerful yet accessible solution to this challenge, since working with them requires only effective prompt engineering rather than specialized AI expertise. We present a streamlined LLM-driven pipeline for filtering and extracting information from large volumes of literature, illustrating its potential through two case studies. Our findings show that, by combining LLMs with short, iterative prompting workflows and targeted manual validation checks, researchers can rapidly obtain structured outputs (such as study locations, biome types, or quantitative measures) while minimizing model hallucinations and misinterpretations. We emphasize that domain experts remain integral for shaping prompts, verifying results, and ensuring that the extracted information aligns with real-world research and conservation needs. Overall, this pipeline underscores the synergy between human expertise and LLM capabilities, promising more efficient literature reviews across a broad range of ecological and conservation applications.
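The prompt-and-validate loop the abstract describes can be sketched as follows. This is a minimal illustration, not the authors' implementation: the field names, helper functions, and the canned model reply are all hypothetical, and in practice the reply would come from an LLM API call (e.g. to GPT-4o mini).

```python
import json

# Illustrative sketch of an LLM extraction step with a validation check.
# Field names below are hypothetical examples, not the study's schema.
FIELDS = ["study_location", "biome_type", "sample_size"]

def build_prompt(abstract_text: str) -> str:
    """Ask the model for strict JSON so the output is machine-parseable."""
    return (
        "Extract the following fields from the study abstract below. "
        "Return ONLY a JSON object with the keys "
        + ", ".join(FIELDS)
        + "; use null when a field is not stated.\n\n"
        + abstract_text
    )

def parse_and_check(raw_reply: str) -> dict:
    """Targeted validation: reject replies whose keys do not match the
    requested schema, flagging them for manual expert review."""
    record = json.loads(raw_reply)
    if set(record) != set(FIELDS):
        raise ValueError(f"unexpected keys: {sorted(record)}")
    return record

# A canned reply stands in for the model call in this sketch.
reply = '{"study_location": "Kenya", "biome_type": "savanna", "sample_size": 120}'
record = parse_and_check(reply)
print(record["biome_type"])  # → savanna
```

Replies that fail the schema check would be routed to a human reviewer, which is where the manual validation step described in the abstract fits in.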
DOI
https://doi.org/10.32942/X26W6Q
Subjects
Life Sciences
Keywords
large language models, literature review, prompt engineering, information extraction, evidence synthesis, research gaps and trends
Dates
Published: 2025-02-08 08:28
License
CC BY Attribution 4.0 International
Additional Metadata
Language:
English
Conflict of interest statement:
None
Data and Code Availability Statement:
All data and code used in this study are available in the following GitHub repository: https://github.com/sruthimoorthy/LLM-Lit-Review-Codes