Non-compliance and Under-performance in Australian Human-induced Regeneration Projects

This is a Preprint and has not been peer reviewed. The published version of this Preprint is available: https://doi.org/10.1071/RJ24024. This is version 3 of this Preprint.


Comments

Comment #171 Megan Catherine Evans @ 2024-07-30 13:45

We thank Dr Brack for his comments on the draft manuscript.

There are five main problems with his critique.

1. The critique suggests HIR projects are intended to result in relatively small increases in tree cover (or woody biomass) over large areas and it is therefore not appropriate to use databases derived from satellite imagery to assess their performance because the databases are not sufficiently accurate to reliably detect such small changes. For example, in the opening paragraphs, the critique states:

This preprint introduces the potential importance of Human Induced Regeneration (HIR) for sequestering small amounts of Carbon over large areas.

Having given this context, the critique then claims that:

their 2024 work (Macintosh et al. 2024) was focused on land with less than 20% canopy cover and that NFSW [National Forest & Sparse Woody database] is not very reliable even in classifying areas as non-woody, sparse woody or forest (accuracy 66%) when the canopy cover is 5 - 20%. The accuracy of change in canopy cover would be even poorer.

Later, the critique states that:

the preprint authors … use WCF [Woody Cover Fraction] estimates as if these are accurate and able to reliably measure changes in cover of less than a few percent - the authors should note the Figure in Brack (2023), copied from Lao (2022) that shows the regression of WCF against LiDAR is about +/- 7% when the cover is about 7%.

Contrary to the impression given by these comments, HIR projects are supposed to be, and are credited on the basis that they are, regenerating even-aged native forests across the entirety of their credited areas. That is, they are being credited for relatively large increases in biomass across a large area. As this manuscript details, the average credited sequestration of the 117 projects in the sample at 30 June 2023 was 22.0 tCO2 ha-1, and 74% (n = 87) of the projects had credited sequestration ≥13.2 tCO2 ha-1 (mean 26.2 tCO2 ha-1), the point at which projects should have near 100% forest cover across their credited areas, even if they started with zero biomass. Yet a substantial proportion of the assessed cells did not satisfy the first regeneration gateway test (i.e. they did not have ≥7.5% canopy cover in 2023, nor did they experience a ≥5% increase in canopy cover between project commencement and 2023), and average canopy cover in most of the assessed cells over the 2020-2023 La Niña was not materially above the levels recorded in the two previous comparable La Niña events (2010-2012 and 1998-2001). These results align with the findings of Macintosh et al. (2024) [see: https://www.nature.com/articles/s43247-024-01313-x], which analysed 182 HIR projects using the NFSW database and found little evidence of regeneration in credited areas. Seventy-five of the projects analysed by Macintosh et al. (2024) had credited sequestration ≥13.2 tCO2 ha-1, yet forest cover in the credited areas of these projects increased by only 1.8% relative to when the projects were registered. Dr Brack’s critique ignores this context.
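
A minimal sketch of the first regeneration gateway test as described above: a credited-area cell passes if canopy cover in 2023 is at least 7.5%, or if cover has increased by at least 5% since project commencement. The function name and inputs are assumptions for illustration, and the 5% increase is treated here as percentage points; this is not the scheme's own tooling.

```python
# Minimal sketch of the first regeneration gateway test described above.
# Thresholds come from the text; the function name, inputs and the treatment
# of the 5% increase as percentage points are assumptions.

def cell_passes_first_gateway(cover_2023: float, cover_at_commencement: float) -> bool:
    """A credited-area cell passes if canopy cover in 2023 is at least 7.5%,
    or cover has increased by at least 5 points since project commencement."""
    return cover_2023 >= 7.5 or (cover_2023 - cover_at_commencement) >= 5.0

# Examples (illustrative values only):
print(cell_passes_first_gateway(6.0, 3.0))   # False: fails both limbs
print(cell_passes_first_gateway(8.0, 3.0))   # True: meets the 7.5% cover limb
print(cell_passes_first_gateway(7.0, 1.5))   # True: >= 5 point increase since commencement
```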

If HIR projects were supposed to result in only small increases in tree cover and were being credited on this basis, his critique of our reliance on the WCF and NFSW databases would be valid. Similarly, if the HIR projects in the sample had only recently commenced, and there had not been much time for the regeneration to emerge, it would not be appropriate to assess project performance and compliance using WCF, NFSW and other similar databases like TERN Persistent Green. However, this is not the case with the projects in the sample. The sample was confined to projects registered prior to 2017 to ensure the projects had been registered for at least 7 years prior to the analysis. Most of the projects in the sample are likely to have modelled regeneration for between 10 and 16 years. Given the age of the modelled regeneration, and the extent of credited sequestration, the effects of the projects on canopy cover should be readily apparent in databases like WCF and NFSW, even with their limitations. If the projects were performing in accordance with how they are being credited, any effects of measurement errors in the databases would be largely irrelevant, particularly when analysis is conducted at a portfolio scale, as we have done in the manuscript and Macintosh et al. (2024).

2. The critique diverts attention from the main shortcoming of the Brack review of the regeneration gateway checks (Brack review) [see: https://cer.gov.au/document/gateway-regeneration-checks-human-induced-regeneration-projects], namely, that it did not apply the correct rules when quantitatively assessing compliance with the regeneration gateway checks. Under the gateway checks, the land included in each project is supposed to be divided into cells of a prescribed size and the credited area in each cell is supposed to be assessed against the specified canopy cover requirements. At the first regeneration gateway check, the prescribed cell size is 100 hectares. The Brack review’s quantitative analysis did not reflect these requirements. The analysis was done at the wrong scale (it analysed 19 cells of 6.25 ha in the credited areas of each of the 25 projects in the sample rather than all 100 ha cells in each project) and compliance was assessed at the wrong level using the wrong metrics (it was assessed at the project level based on whether average canopy cover across the sampled cells met one of the requirements (canopy cover ≥7.5%) rather than at the cell level based on whether each cell met at least one of the requirements). Note that the Brack assessment was also based on maximum canopy cover levels over the period 2020-2022, rather than canopy cover in the most recent applicable dataset immediately prior to the submission of the relevant offset report, as required under the rules.

The critique responds to this by claiming the Brack review ‘reviewed the 100 ha scale threshold checks previously completed by CER using other sources of evidence (including NFSW, Persistent Green and series of remote sensing images) and had found these to be competently undertaken and efficacious’. However, apart from the quantitative analysis conducted using Australia’s Environment Explorer (AEX), no verifiable information is provided, either in the Brack review or the critique, regarding how conclusions were reached about the competency and efficacy of the Regulator’s reviews.

The Brack review gives a high-level description of the methods used by proponents and describes how the Regulator reviews proponents’ regeneration maps using the TERN Persistent Green and National Forest & Sparse Woody (NFSW) databases and uses ‘[d]etailed remote sensing, including ESRI World Imagery Wayback’ to explore cells containing ‘less than the threshold canopy cover or where there are substantial differences between PG and NFSW estimates of cover’. According to the review, ‘[p]roponents are asked to justify any below canopy cover threshold 100 ha areas or otherwise take action’. The review describes how these same processes were applied to projects that were found to have an average maximum canopy cover of <7.5% over the period 2020-2022 in its AEX analysis. However, no verifiable information is provided on any of these processes or the outcomes of the Regulator’s assessments. The review does not even provide the Project IDs of the 25 projects it analysed.

The only verifiable information provided in the review is the analysis conducted using AEX, which did not correctly apply the rules as discussed above. While a sampling method involving cells of less than 100 hectares could be validly used, no explanation is provided as to why the Brack review chose to assess compliance at the project level based on the averages from the sampled cells. As pointed out in our critique of the Brack review [see: https://law.anu.edu.au/files/2024-04/Brack%20report%20analysis%20040424F%20%282%29.pdf], the data presented in Figure 6 of the Brack review show that a substantial proportion of the sampled cells had maximum canopy cover <7.5% over the period 2020-2022, indicating high levels of non-compliance. Neither the Brack review nor Dr Brack’s critique of this manuscript explains why compliance was assessed at the project level, or engages with the implications of doing so.
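
To illustrate how the two assessment approaches discussed above can diverge, the sketch below applies a project-level average test and a cell-level test of the ≥7.5% cover limb to the same set of sampled cells. The cover values are invented for the example and only the canopy cover limb of the gateway test is shown.

```python
import statistics

# Hypothetical canopy cover values (%) for the sampled cells of one project.
cell_cover = [2.0, 3.5, 4.0, 5.0, 6.0, 6.5, 7.0, 9.0, 12.0, 25.0]

# Project-level approach used in the Brack review's quantitative analysis:
# compare the average across sampled cells with the 7.5% threshold.
project_level_pass = statistics.mean(cell_cover) >= 7.5        # True (mean = 8.0)

# Cell-level approach required under the gateway checks: each cell must meet
# at least one requirement (only the >= 7.5% cover limb is shown here).
share_of_cells_failing = sum(c < 7.5 for c in cell_cover) / len(cell_cover)  # 0.7

print(project_level_pass, share_of_cells_failing)
```

With these hypothetical values, the project-level average passes the threshold even though 70% of the individual cells fall below it, which is why assessing compliance at the project level can mask cell-level non-compliance.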

3. The critique suggests our reliance on the NFSW and WCF databases is misplaced, without acknowledging that:

a) the NFSW database is used as the sole data source for analysing regeneration and changes in woody cover in Australia’s greenhouse accounts [see: https://www.dcceew.gov.au/climate-change/publications/national-inventory-reports] and that generating abatement that can be counted in Australia’s greenhouse accounts is the primary function of the Australian carbon credit unit (ACCU) scheme [see section 3(2) of the Carbon Credits (Carbon Farming Initiative) Act 2011];

b) most of the projects analysed in the manuscript and in Macintosh et al. (2024) used the NFSW as the primary data source for the purposes of stratifying their credited areas – this can be verified by overlaying the credited area data of projects on NFSW maps;

c) by law, the NFSW database is currently treated as a definitive source of evidence for the purposes of assessing compliance with the forest cover attainment rule [see s 9AA(4)(a)(i) of the Carbon Credits (Carbon Farming Initiative) Rule 2015];

d) the NFSW was used in the Regulator’s only detailed published analysis of the additionality of the regeneration associated with HIR projects – the findings of which are similar to those of Macintosh et al. (2024) [The report is available here: https://www.dcceew.gov.au/climate-change/emissions-reduction/accu-scheme/assurance-committee#toc_3. A related report of the Emissions Reduction Assurance Committee that relied on the analysis is available here: https://www.dcceew.gov.au/sites/default/files/documents/erac-findings-human-induced-regeneration-method.pdf];

e) the Regulator has relied on data from the NFSW database to defend the performance of HIR projects [see: https://law.anu.edu.au/files/2024-01/Response%20to%20CER%20HIR%20graphs%20190623.pdf]; and

f) despite downplaying its role in the critique, the quantitative analysis of canopy cover conducted using the AEX was central to the findings and conclusions in the Brack review (this is demonstrated by the review’s reliance on the AEX analysis to support its conclusion that ‘the CEAs appear to be regenerating well in the project areas, especially since 2020 and on average are significantly (p=0.05) above the 7.5% canopy cover threshold’).

These facts are relevant to a critique that suggests it is improper to rely on the NFSW, WCF and other similar databases for the purposes of analysing the performance of HIR projects.

4. Neither the Brack review nor Dr Brack’s critique of this manuscript engages with the question of whether the observed changes in canopy cover are attributable to the project activities (predominantly grazing control). This is a core part of the analysis presented in the manuscript and in Macintosh et al. (2024) and is central to the integrity of HIR projects and whether they comply with the applicable regulatory requirements. The Brack review makes passing reference to how canopy cover in the credited areas of the sampled projects has increased and decreased over time in response to droughts and wet years. However, no analysis or commentary is provided on what this might mean for compliance or the performance of the projects.

5. The critique seeks to defend the performance of HIR projects and criticise our work by pointing to unpublished data and unobservable processes. For example, the critique emphasises the assurance provided by third party audits, even though audit reports are not published under the ACCU scheme. Similarly, the critique notes instances where proponents have been required by the Regulator to ‘produce in situ observations (measurements or georeferenced photographs)’. However, by law, the Regulator cannot publish these data and the project proponents choose not to release them. Relatedly, the critique claims that many HIR projects use ‘a much better spatial resolution for their remote sensing and higher quality local data than the national scale models the preprint authors are relying on’, again, even though these data are not published. Critiques based on unpublished data and unobservable processes do not have a place in scientific discourse.

Comment #169 Cris L Brack @ 2024-07-18 08:11

The quantitative analyses of this preprint have significant shortcomings and do not support the strength of the conclusions.

The authors of this preprint do not appear to understand the methods used to stratify, classify and measure the regeneration in HIR projects and do not appreciate that many of these projects use a much better spatial resolution for their remote sensing and higher quality local data than the national scale models the preprint authors are relying on to make their conclusions.

This preprint introduces the potential importance of Human Induced Regeneration (HIR) for sequestering small amounts of Carbon over large areas. It notes that the Australian Government administers the program under the Clean Energy Regulator (CER) but fails to note that the CER maintains a list of authorised, independent auditors to provide assurance that HIR proponents use good practice in stratification, classification, boundary definition, management, measurement and modelling of regeneration. The preprint does note that CER processes and outcomes have themselves been reviewed by the former Australian Chief Scientist, the Australian National Audit Office and Brack (2023) and found to be robust and reliable. In contrast, the authors quote Macintosh et al. (2024), who used estimates from a single, national scale model (NFSW) to conclude there is limited evidence of regeneration, and question all the work by the independent auditors, Chief Scientist, Audit Office and Brack (2023). The preprint does not acknowledge that their 2024 work was focused on land with less than 20% canopy cover and that NFSW is not very reliable even in classifying these areas as non-woody, sparse woody or forest (accuracy 66%) when the canopy cover is 5 - 20%. The accuracy of change in canopy cover would be even poorer. The analytical rigour of Macintosh et al. (2024) appears to be significantly poorer than that of the work this preprint questions.

Later in this preprint, selective quotes are taken from Brack (2023) to support the approach taken, but none of these quotes include the context or the conclusions from Brack (2023). For example, the preprint justifies the use of the Woody Cover Fraction (WCF) used by Brack (2023) because the use by Brack is “demonstrating its acceptance by the Regulator and other relevant stakeholders as an appropriate remote sensing product for [stratification and compliance] purposes”. However, Brack (2023) used this tool to help verify NFSW and other tools used by CER. He described his choice to use WCF as it was embedded in the Australian Environment Explorer (AEE) package, which also displayed annual trends of WCF along with estimates of rainfall, moisture inundation, temperature, fire events and remote imagery of the cell and surrounds. Thus, Brack (2023) used WCF as just one piece of evidence in a multi-evidence verification of regeneration in the projects - he did not use it as the single “accurate” tool as implied by the preprint authors. Brack (2023) did not, and has no authority to, determine that WCF could be used as a single, independent verification of stratification or regeneration.

The preprint authors do use WCF estimates as if these are accurate and able to reliably measure changes in cover of less than a few percent - the authors should note the Figure in Brack (2023), copied from Lao (2022) that shows the regression of WCF against LiDAR is about +/- 7% when the cover is about 7% (e.g., even when cover was at the 7.5% threshold, WCF may return a value between 0 - 15%). Over a reasonable sample size where the estimates are independent and not spatially correlated, the mean error may revert to 0% - but the preprint authors model contiguous blocks of data, which means they cannot assume a reversion to 0% error or freedom from bias. The sample-based approach by Brack (2023) does allow an unbiased estimate of the mean, and thus allowed a hypothesis test where the null hypothesis was H0: CC < 7.5%.
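
For context, the following is a minimal sketch of the kind of one-sided test described here (H0: mean canopy cover < 7.5%), assuming independent, unbiased cell estimates. The sample values are invented for illustration and the implementation is not the one used in Brack (2023).

```python
# Illustrative one-sided test of H0: mean canopy cover < 7.5%, assuming the
# sampled cells give independent, unbiased estimates (sample values invented).
from scipy import stats

sampled_cover = [5.0, 6.0, 7.5, 8.0, 9.0, 9.5, 10.0, 11.0, 12.0, 13.0]

# One-sample t-test against the 7.5% threshold, alternative: mean > 7.5%.
res = stats.ttest_1samp(sampled_cover, popmean=7.5, alternative="greater")
print(res.statistic, res.pvalue)  # H0 is rejected at p = 0.05 only if res.pvalue < 0.05

# Caveat from the discussion above: if the cells are spatially correlated,
# the effective sample size is smaller and the p-value is overstated.
```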

The preprint authors did not note Brack’s (2023) conclusion that a single model could not be used to estimate whether thresholds had been exceeded and that multiple sources of evidence were required.

The preprint also criticised Brack (2023) for not using WCF to repeat the analysis of regeneration thresholds being met on a 100 ha scale. The authors of the preprint apparently did not notice Brack’s conclusion that he had reviewed the 100 ha scale threshold checks previously completed by CER using other sources of evidence (including NFSW, Persistent Green and series of remote sensing images) and had found these to be competently undertaken and efficacious. Brack (2023) used AEE models and contextual information to validate the CER approach and verify the reliability of the major predictive models being used (i.e. NFSW and Persistent Green). The preprint also failed to note that Brack (2023) found several instances where AEE, NFSW and Persistent Green produced conflicting results and where CER had required proponents to produce in situ observations (measurements or georeferenced photographs) to demonstrate that the models were inaccurate and that there was adequate regeneration to meet the thresholds. If proponents could not produce such evidence, they removed the area from crediting. It was a substantive conclusion by Brack (2023) that national scale models like AEE and NFSW could significantly underestimate regeneration on projects, especially where soil colour differences or species habit were not adequately represented in the model. In situ observations should always “trump” national scale models.


Authors

Andrew Macintosh, Megan Catherine Evans, Don Butler, Pablo Larraondo, Chamith Edirisinghe, Kristen Hunter, Dean Ansell, Marie Waschka

Abstract

The ‘boom-and-bust’ nature of rangeland ecosystems makes them ill-suited to nature-based solution (NbS) carbon offset projects involving sequestration in vegetation and soils. The variability in these systems makes it difficult to determine whether observed carbon stock changes are attributable to project activities, creating additionality risks. The low and variable rainfall in rangelands also means carbon stock increases will often be impermanent, being susceptible to reversals in droughts, a risk magnified by climate change. The small potential for gains per unit area over vast regions adds further complications, making it difficult to accurately measure carbon stock changes at low cost. This creates pressure to trade accuracy for simplicity in measurement approaches, increasing the risk of measurement errors. Despite these risks, rangelands have been advanced as suitable for offset projects because of low opportunity cost and a perception they are extensively degraded. The most prominent example globally is human-induced regeneration (HIR) projects under the Australian carbon credit unit (ACCU) scheme, which are purporting to regenerate permanent even-aged native forests (areas with ≥20% canopy cover from trees ≥2 metres high) across millions of hectares of largely uncleared rangelands, predominantly by reducing grazing pressure from livestock and feral animals. Previous research found limited forest regeneration in the credited areas of these projects, and that most of the observed changes in tree cover were attributable to factors other than the project activities, most likely variable rainfall. Here we extend this research by evaluating compliance of a sample of 116 HIR projects with regulatory requirements and their performance in increasing sequestration in regeneration. The results suggest most HIR projects are non-compliant with key regulatory requirements that are essential to project integrity and that they have had minimal impact on woody vegetation cover in credited areas. The findings point to major administrative and governance failings in Australia’s carbon credit scheme, and a significant missed opportunity to restore biodiversity-rich woodlands and forests in previously cleared lands via legitimate carbon offset projects.

DOI

https://doi.org/10.32942/X2162R

Subjects

Ecology and Evolutionary Biology

Keywords

carbon offsets, rangeland ecology, environmental markets, vegetation ecology, climate change

Dates

Published: 2024-07-04 21:42

Last Updated: 2024-10-11 05:33

License

CC-BY Attribution-NonCommercial-ShareAlike 4.0 International

Additional Metadata

Language:
English

Conflict of interest statement:
Andrew Macintosh is a non-executive director of Paraway Pastoral Company Ltd. Paraway Pastoral Company Ltd has offset projects under Australia's offset scheme. Paraway Pastoral Company Ltd does not have any human-induced regeneration projects. Andrew Macintosh, Don Butler, Dean Ansell and Marie Waschka advise public and private entities on environmental markets and Australia’s carbon offset scheme, including on the design of carbon offset methods.

Data and Code Availability Statement:
The data that support this study will be shared upon reasonable request to the corresponding author.