Process data and assess metrics

After suitable data have been gathered, additional processing is often required to derive the metric values that will form the LandScale assessment results. All data processing is conducted outside of the LandScale platform. A brief description of the processing should be documented in the methodology description that accompanies each metric on the platform. This documentation will be reviewed as part of Step C validation and may also streamline future reassessments.

For each metric, complete the context information (methodology and limitations) and enter the result of the data processing. Optionally, upload visualizations and set targets and milestones.

Data processing guidelines

Data processing requirements vary depending on the metric and the nature of the data. LandScale does not prescribe uniform methods for all metrics; however, specific guidelines and recommendations are provided in the Performance Metrics Description Table. Assessment teams should adhere to these guidelines when required and follow recommendations as closely as possible. These may include simple procedural advice, published methods, or links to modeling tools.

General recommendations for deriving metric values from source datasets include:

  • Use simpler methods when suitable: Opt for simpler data transformation methods whenever they yield suitable metric values. Simpler methods are easier for others to understand and to replicate during future assessments (see the sketch after this list).

  • Undertake advanced processing only when necessary: Use sophisticated processing or modeling only if the assessment team has the expertise and capacity to do so. Ensure familiarity with the chosen methods and confidence that they will enhance accuracy or provide more informative results.

  • Document methods and limitations: Thoroughly document all processing steps, including any limitations. This transparency is critical for validation and ensures stakeholders can fully understand and interpret the results.
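As an illustration of the first recommendation, the following sketch derives a single metric value using a simple, transparent transformation. It assumes a hypothetical land-cover summary table; the file name, column names, and class labels are placeholders rather than LandScale requirements.

```python
# Minimal sketch: derive "share of landscape under natural ecosystems" from a
# hypothetical land-cover summary table. The file name, column names, and
# class labels are illustrative assumptions only.
import pandas as pd

NATURAL_CLASSES = {"forest", "wetland", "grassland"}  # assumed class labels

df = pd.read_csv("landcover_summary.csv")  # hypothetical source dataset

natural_area = df.loc[df["class"].isin(NATURAL_CLASSES), "area_ha"].sum()
total_area = df["area_ha"].sum()

# Metric value: percentage of total landscape area under natural ecosystems
natural_share_pct = round(100 * natural_area / total_area, 1)
print(f"Natural ecosystem cover: {natural_share_pct}%")
```

A transformation this simple can be described in one or two sentences in the platform's methodology description, which makes it straightforward to validate and to repeat in future reassessments.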

After data processing is complete, document the methodology used and the result year for each metric on the platform. You can also upload supporting files in the context and metadata section.

Ensure metric results protect privacy and comply with data terms of use

When using data that are proprietary or subject to confidentiality or anonymity provisions, the assessment team must ensure that all reported metric values and supporting documentation comply with these requirements. In all cases, it is critical that neither the metric results nor any other information entered into the platform reveals identifying details about individual land units or people within the landscape.

If source datasets contain identifying information, the assessment team must apply processing and analysis steps to anonymize and generalize both spatial and non-spatial data. This may include removing or obscuring personal data, names, or exact geographic locations of features to prevent identification.
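As one possible approach, the sketch below removes direct identifiers and coarsens exact coordinates before results are derived or shared. The file name, column names, and grid size are illustrative assumptions; the appropriate level of generalization depends on the dataset and its terms of use.

```python
# Minimal anonymization sketch for a hypothetical point dataset containing
# respondent names and exact farm locations. The file name, column names, and
# grid size are illustrative assumptions.
import pandas as pd

df = pd.read_csv("farm_survey.csv")  # hypothetical source dataset

# Remove direct identifiers (personal data, names of individual holdings)
df = df.drop(columns=["respondent_name", "farm_name"])

# Generalize exact locations by snapping coordinates to a coarse grid so that
# individual land units cannot be identified from the published results
GRID = 0.05  # degrees; choose a cell size large enough to prevent re-identification
df["lat"] = (df["lat"] / GRID).round() * GRID
df["lon"] = (df["lon"] / GRID).round() * GRID

df.to_csv("farm_survey_anonymized.csv", index=False)
```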

For further guidelines on handling data responsibly, see the earlier section on data ownership, use, and privacy.

Document limitations to metric results

In some cases, metric results may have significant limitations that affect their interpretability, even when they are derived from suitable datasets. It is important to document any such limitations to ensure that users of the assessment results fully understand the context and implications of these results. Properly documenting limitations enables more informed decision-making and responsible application of the results.

Limitations may include:

  • Limitations from source dataset constraints: If the source dataset was determined to be suitable but with limitations, the assessment team should carefully assess whether these constraints introduce significant limitations in the resulting metric. For example, if a land-use change dataset is missing 25% coverage due to cloud cover in the source remote sensing data, this would typically require documenting a limitation for any results derived from that dataset. In contrast, if only 2% of the landscape is missing data, this could be considered a minor defect, unlikely to necessitate a limitation statement.

  • Other limitations to metric result quality or interpretability: Limitations may also arise that are unrelated to the source data itself. For example, if the participatory process used to generate metric results for a governance indicator reveals a wide diversity of stakeholder opinions regarding the integrity of governance processes, the assessment team might report an averaged value for the metric. However, they should also document a limitation to indicate low confidence in, or a lack of consensus about, the result due to the varying perspectives or evidence (see the sketch after this list).
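To illustrate the second type of limitation, the sketch below reports an averaged value for a participatory governance metric and uses the spread of stakeholder ratings as a simple consensus check. The rating scale and the threshold for flagging low consensus are illustrative assumptions, not LandScale rules.

```python
# Minimal sketch of a consensus check for a participatory governance metric.
# The ratings, the 1-5 scale, and the dispersion threshold are hypothetical.
from statistics import mean, stdev

ratings = [2, 5, 1, 4, 5, 2, 3]  # hypothetical stakeholder scores (1 = low, 5 = high)

metric_value = round(mean(ratings), 1)  # averaged value entered as the metric result
spread = stdev(ratings)                 # dispersion across stakeholder responses

print(f"Reported metric value: {metric_value}")
if spread > 1.0:  # assumed threshold indicating wide disagreement
    print("Document a limitation: low consensus among stakeholder responses.")
```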

Document the result year, methodology, and limitations for each metric.

All identified limitations should be documented directly in the platform alongside the corresponding metric results. This information will be reviewed during the LandScale and local review processes as part of Step C validation.

Populate metric results

The LandScale platform allows users to enter numeric or categorical results for each metric selected in Step B that has sufficient suitable data, as determined during the data evaluation phase. Metrics found to lack adequate data are indicated as data deficient within the platform.

Example results for a metric under the human well-being pillar.
If data are unavailable and a result cannot be provided, a justification must be included.

It is essential to enter results in the specific form required by each metric. If results are not in the required form, the data should be re-analyzed to produce results that meet the metric's requirements. Alternatively, the assessment team may revisit Step B and propose an alternative metric that aligns with the form of the available results. Any newly proposed or adjusted metrics must adhere to the guidelines established for selecting metrics and be submitted for validation.

The platform supports two types of results disaggregation:

  • Thematic disaggregation: This involves breaking down results by specific thematic characteristics (e.g., ecosystem type, gender). While thematic disaggregation is required for some metrics, it remains optional for others.

  • Spatial disaggregation: This involves dividing results by spatial characteristics, such as jurisdiction, catchment, or other defined spatial areas. Although spatial disaggregation is not required for any metric, it can offer a more detailed understanding of metric performance across the landscape. For example, disaggregating water quality results by sub-catchment can reveal spatial variation that a single, landscape-wide measure would mask (see the sketch below).
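As an illustration of spatial disaggregation, the sketch below computes both a landscape-wide value and per-sub-catchment values for a hypothetical water-quality indicator. The file name, column names, and indicator are illustrative assumptions.

```python
# Minimal sketch of spatial disaggregation for a hypothetical water-quality
# metric. The file name, column names, and indicator are illustrative only.
import pandas as pd

samples = pd.read_csv("water_quality_samples.csv")  # hypothetical source dataset

# Landscape-wide result (a single value for the whole landscape)
landscape_mean = samples["nitrate_mg_l"].mean()

# Spatially disaggregated results (one value per sub-catchment)
by_sub_catchment = samples.groupby("sub_catchment")["nitrate_mg_l"].mean()

print(f"Landscape-wide mean nitrate: {landscape_mean:.2f} mg/L")
print(by_sub_catchment.round(2))
```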
