Process data and assess metrics

After suitable data have been gathered, additional processing is often required to derive the metric values that will form the reassessment results. All data processing is conducted outside of the LandScale platform. The processing conducted should be briefly described in the methods description that accompanies each metric on the LandScale platform. This documentation will be reviewed as part of Step C validation and may also streamline future reassessments.

In Step C, open the metric you are assessing to document how it was processed.
For each metric, fill in the context details, including result year, methodology, and limitations.

This section is specific to reassessment. For further information relevant to processing and assessing metrics, refer to the baseline assessment guidelines.

Data processing guidelines

Data processing requirements vary depending on the metric and the nature of the data. LandScale does not prescribe uniform methods for all metrics; however, specific guidelines and recommendations are provided in the Performance Metrics Description Table. Assessment teams should adhere to the guidelines where they apply and follow the recommendations as closely as possible. These may include simple procedural advice, published methods, or links to modeling tools.

Recommendations for processing data for reassessments include:

  • Maintain consistency across assessments: Metric results should be entered in the same format as in the baseline assessment. If results require reformatting (e.g., unit conversions), this should be documented, along with any implications for comparability (the first sketch after this list illustrates a simple unit conversion with an accompanying processing note).

  • Account for differences in data precision or methodology: When newer datasets with improved precision are used, assessment teams must exercise caution in interpreting apparent trends. For example, an ecosystem conversion metric may show an apparent increase or decrease due to improved satellite resolution rather than actual landscape change. To mitigate this risk (see the second sketch after this list), teams should:

    • Cross-check results against additional datasets or contextual landscape knowledge before drawing conclusions.

    • Clearly document cases where methodological improvements may impact comparability.

  • Ensure compliance with data privacy regulations: If using proprietary or confidential data, teams must anonymize any identifying information in accordance with the data ownership, use, and privacy section of the baseline assessment guidelines (the first sketch below includes a simple one-way hashing step as one possible approach).
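
As a minimal sketch of the first and last recommendations, the following Python snippet converts a hypothetical reassessment value to the baseline reporting unit, records a processing note for Step C validation, and pseudonymizes an identifier with a one-way hash. All names, values, and the conversion factor are illustrative assumptions, not LandScale requirements.

```python
import hashlib

# Hypothetical scenario: the baseline reported forest loss in hectares,
# while the new dataset reports square kilometres.
KM2_TO_HA = 100.0  # 1 km^2 = 100 ha

result_km2 = 4.82  # hypothetical reassessment value
result_ha = result_km2 * KM2_TO_HA

# Record the conversion so reviewers can trace it during Step C validation.
processing_note = (
    f"Converted {result_km2} km^2 to {result_ha} ha (x100) "
    "to match the baseline reporting unit."
)

def pseudonymize(identifier: str, salt: str = "example-salt") -> str:
    """One-way hash for identifying information in proprietary data."""
    return hashlib.sha256((salt + identifier).encode()).hexdigest()[:12]

print(result_ha)                  # 482.0
print(processing_note)
print(pseudonymize("Farm-0117"))  # stable pseudonym, not reversible
```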
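
As a second sketch, one simple way to guard against precision-driven artefacts is to compare the new result against an independent dataset and flag large divergences for review. The values and the 15% threshold below are assumptions for illustration only.

```python
# Hypothetical: the same ecosystem conversion metric derived from two datasets.
older_product_ha = 1850.0   # coarser-resolution source
newer_product_ha = 2300.0   # finer-resolution source

relative_change = (newer_product_ha - older_product_ha) / older_product_ha

# A large divergence may reflect improved detection rather than a real
# trend, so flag it for cross-checking against landscape knowledge.
if abs(relative_change) > 0.15:  # assumed review threshold
    print(f"Flag for review: datasets differ by {relative_change:.0%}; "
          "verify this reflects landscape change, not methodology.")
```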

Populate metric results

The LandScale platform allows users to enter numeric or categorical results for each metric selected in Step B that has sufficient suitable data, as determined during the data evaluation phase. Metrics found to lack adequate data are indicated as 'data deficient' within the platform.

Enter results for each metric. If data are unavailable and a result cannot be provided, a justification must be included.

Assessment teams must ensure that:

  • Results are entered in the same format (or a convertible format) as the baseline assessment: If changes to the format are necessary, they must be clearly documented, along with justifications for the change and an assessment of any implications for the comparability or interpretation of the results.

  • Thematic or spatial disaggregation is used when applicable: Thematic disaggregation (e.g., by ecosystem type, gender) is required for some metrics but optional for others. Spatial disaggregation (e.g., by jurisdiction or catchment) is not mandatory but can offer more detailed insights (see the sketch after this list).

  • Any significant changes in data sources are documented: If a different dataset is used, the rationale for switching, an assessment of comparability, and plans for mitigating misinterpretation risks must be included.
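
As a minimal sketch of disaggregation, assuming tabular results with a hypothetical column layout, pandas group-by operations can produce one value per theme or spatial unit; the column names and figures below are illustrative only.

```python
import pandas as pd

# Hypothetical per-site results; column names are assumptions.
records = pd.DataFrame({
    "ecosystem_type": ["forest", "forest", "wetland", "grassland"],
    "jurisdiction": ["North", "South", "North", "South"],
    "hectares_converted": [120.0, 95.5, 14.2, 30.1],
})

# Thematic disaggregation: one result per ecosystem type.
by_ecosystem = records.groupby("ecosystem_type")["hectares_converted"].sum()

# Optional spatial disaggregation: one result per jurisdiction.
by_jurisdiction = records.groupby("jurisdiction")["hectares_converted"].sum()

print(by_ecosystem)
print(by_jurisdiction)
```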

Document limitations to metric results

In some cases, metric results may have significant limitations that affect their interpretability, even when they are derived from suitable datasets. Any such limitations should be documented so that users of the reassessment fully understand the context and implications of the results. Properly documenting limitations enables more informed decision-making and responsible application of the results.

Limitations may include:

  • Limitations from source dataset constraints: If the source dataset was determined to be suitable but with limitations, the assessment team should carefully determine whether these constraints have introduced significant limitations to the resulting metric. For example, if a land-use change dataset is missing 25% coverage due to cloud cover in the source remote sensing data, a limitation would typically need to be documented for any results derived from this dataset. In contrast, if only 2% of the landscape is missing data, this could be considered a minor shortcoming, unlikely to necessitate a limitation statement (the sketch after this list illustrates a simple coverage check).

  • Other limitations to metric result quality or interpretability: Limitations may also arise that are unrelated to the source data itself. For example, if the participatory process used to generate metric results for a governance indicator reveals a wide diversity of stakeholder opinions regarding the integrity of governance processes, the assessment team might report an averaged value for the metric. However, they should also document a limitation to indicate the low confidence in, or lack of consensus about, the result due to the varying perspectives or evidence (the sketch after this list also includes a simple dispersion check).

  • Changes in methodology or data source: If the reassessment uses a different methodology or data source compared to the baseline assessment, it may affect comparability. Any differences in data collection techniques, technologies, data providers, or other relevant elements outlined in the select data resources section of the baseline assessment guidelines must be clearly documented. In addition, a plan must be developed to mitigate risks of misinterpretation and ensure that any changes do not compromise the accuracy of the results.
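
As a minimal sketch of the first two cases, the following snippet flags a coverage gap in the source data and a lack of consensus in a participatory score. The thresholds, scale, and values are illustrative assumptions, not LandScale-prescribed rules.

```python
from statistics import mean, stdev

# 1. Source-data coverage gap (e.g., cloud cover in remote sensing data).
missing_fraction = 0.25   # hypothetical: 25% of the landscape unmapped
COVERAGE_LIMIT = 0.10     # assumed threshold for documenting a limitation

if missing_fraction > COVERAGE_LIMIT:
    print(f"Limitation: {missing_fraction:.0%} of the landscape lacks data; "
          "interpret derived results with caution.")

# 2. Lack of consensus in a participatory governance score (1-5 scale).
stakeholder_scores = [1, 2, 5, 5, 3, 1]   # hypothetical responses
average_score = mean(stakeholder_scores)
spread = stdev(stakeholder_scores)

if spread > 1.5:          # assumed dispersion threshold
    print(f"Limitation: averaged score {average_score:.1f} masks wide "
          f"disagreement (standard deviation {spread:.1f}).")
```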

All identified limitations should be documented directly in the platform alongside the corresponding metric results. This documentation will be reviewed as part of the LandScale and local review processes during Step C validation.

Visualize and interpret results

Following data collection and processing, the next step is to develop meaningful visualizations and provide clear interpretation of results. This enables end users to better understand the significance of the reassessment findings. For guidance on how to do this, refer to the baseline assessment guidelines.
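
As one possible starting point, a grouped bar chart comparing baseline and reassessment values can make changes immediately visible. The metric names and values below are hypothetical, and matplotlib is just one convenient option.

```python
import matplotlib.pyplot as plt

# Hypothetical metric values for illustration only.
metrics = ["Forest cover (%)", "Water quality index", "Median income"]
baseline = [62.0, 71.0, 55.0]
reassessment = [58.5, 74.0, 61.0]

x = range(len(metrics))
width = 0.35

plt.bar([i - width / 2 for i in x], baseline, width, label="Baseline")
plt.bar([i + width / 2 for i in x], reassessment, width, label="Reassessment")
plt.xticks(list(x), metrics, rotation=15)
plt.ylabel("Metric value")
plt.legend()
plt.tight_layout()
plt.savefig("reassessment_comparison.png")
```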
