LandScale Documentation


Process data and assess metrics


Last updated 21 days ago


After suitable data have been gathered, additional processing is often required to derive metric values that will form the LandScale assessment results. All data processing is conducted outside of the LandScale platform. A brief description of the processing conducted should be documented in the LandScale platform's methodology description accompanying each metric. This documentation will be reviewed as part of Step C validation and may also streamline future reassessments.

Data processing guidelines

General recommendations for deriving metric values from source datasets include:

  • Use simpler methods when suitable: Opt for simpler data transformation methods whenever they yield suitable metric values. Simpler methods are easier for others to understand and to replicate during future assessments.

  • Undertake advanced processing only when necessary: Use sophisticated processing or modeling only if the assessment team has the expertise and capacity to do so. Ensure familiarity with the chosen methods and confidence that they will enhance accuracy or provide more informative results.

  • Document methods and limitations: Thoroughly document all processing steps, including any limitations. This transparency is critical for validation and ensures stakeholders can fully understand and interpret the results.

After data processing is complete, document the methodology used and the result year for each metric on the platform. You can also upload supporting files in the context and metadata section.
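The simple-transformation approach described above can be sketched as follows. This is an illustrative example only: the land-cover classes, area values, and the "% forest cover" metric are assumptions, not part of any specific LandScale indicator.

```python
# Minimal sketch: deriving a metric value (hypothetical "% forest cover")
# from a source dataset using a simple, easily replicated transformation.
# All class names and area values are invented for illustration.

land_cover = {  # area in hectares per land-cover class (hypothetical dataset)
    "forest": 42_500,
    "cropland": 31_200,
    "settlement": 6_300,
}

total_area = sum(land_cover.values())
forest_cover_pct = round(100 * land_cover["forest"] / total_area, 1)

# Keep a plain-language record of the processing steps, ready to paste into
# the platform's methodology description for this metric.
methodology = (
    "Summed class areas from the land-cover dataset; forest cover reported "
    "as forest area divided by total landscape area, expressed as a percent."
)

print(forest_cover_pct)  # 53.1
```

Because the transformation is a single ratio, another team could replicate it during a future reassessment from the methodology note alone, which is the point of preferring simpler methods.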

Ensure metric results protect privacy and comply with data terms of use

When using data that are proprietary or subject to confidentiality or anonymity provisions, the assessment team must ensure that all reported metric values and supporting documentation comply with these requirements. In all cases, it is critical that neither metric results nor any other information entered into the platform reveal identifying details about individual land units or people within the landscape.

If source datasets contain identifying information, the assessment team must apply processing and analysis steps to anonymize and generalize both spatial and non-spatial data. This may include removing or obscuring personal data, names, or exact geographic locations of features to prevent identification.
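One common pattern for the anonymization step above is to drop direct identifiers and coarsen coordinates before results are derived or shared. The field names and the 0.1-degree grid below are assumptions for illustration; the appropriate coarsening level depends on the landscape and the data's terms of use.

```python
# Hedged sketch of anonymizing a record: remove direct identifiers and snap
# point locations to a coarse grid so individual land units or people cannot
# be re-identified. Field names and grid size are illustrative assumptions.

DIRECT_IDENTIFIERS = {"owner_name", "parcel_id", "address"}

def anonymize(record: dict) -> dict:
    cleaned = {k: v for k, v in record.items() if k not in DIRECT_IDENTIFIERS}
    # Coarsen coordinates to 0.1 degree (~11 km at the equator).
    if "lat" in cleaned and "lon" in cleaned:
        cleaned["lat"] = round(cleaned["lat"], 1)
        cleaned["lon"] = round(cleaned["lon"], 1)
    return cleaned

farm = {"owner_name": "A. Person", "parcel_id": "P-0042",
        "lat": -3.46721, "lon": 102.26458, "crop": "oil palm"}
print(anonymize(farm))  # identifiers removed, location generalized
```

The same principle applies to non-spatial data: aggregate or generalize any attribute that could single out an individual before it enters the platform.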

Document limitations to metric results

In some cases, metric results may have significant limitations that affect their interpretability, even when they are derived from suitable datasets. It is important to document any such limitations to ensure that users of the assessment results fully understand the context and implications of these results. Properly documenting limitations enables more informed decision-making and responsible application of the results.

Limitations may include:

  • Limitations from source dataset constraints: If the source dataset was determined to be suitable but with limitations, the assessment team should carefully determine whether these constraints have introduced significant limitations to the resulting metric. For example, if a land-use change dataset is missing 25% coverage due to cloud cover in the source remote sensing data, this would typically require documentation of a limitation for any results derived from this dataset. In contrast, if only 2% of the landscape is missing data, this could be considered a minor defect, unlikely to necessitate a limitation statement.

  • Other limitations to metric result quality or interpretability: Limitations may also arise that are unrelated to the source data itself. For example, if the participatory process used to generate metric results for a governance indicator reveals a wide diversity of stakeholder opinions regarding the integrity of governance processes, the assessment team might report an averaged value for the metric. However, they should also document a limitation to indicate the low confidence in or lack of consensus about the result due to the varying perspectives or evidence.
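The missing-coverage judgment described in the first bullet can be expressed as a simple threshold check. The 10% cutoff below is an illustrative assumption; LandScale does not prescribe a fixed threshold, so teams should decide case by case where a gap becomes a documentable limitation.

```python
# Hedged sketch: flag a limitation statement when the share of the landscape
# missing from the source dataset exceeds a chosen threshold. The 10% default
# is an assumption for illustration, not a LandScale requirement.

def coverage_limitation(missing_fraction: float, threshold: float = 0.10):
    """Return a limitation statement, or None if the gap is a minor defect."""
    if missing_fraction > threshold:
        return (f"Results derived from a dataset missing "
                f"{missing_fraction:.0%} of landscape coverage; "
                f"interpret with caution.")
    return None

print(coverage_limitation(0.25))  # flags a limitation (e.g. cloud-cover gap)
print(coverage_limitation(0.02))  # None: minor defect, no statement needed
```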

All identified limitations should be documented directly in the platform alongside the corresponding metric results. This information will be reviewed during the LandScale and local review processes as part of Step C validation.

Populate metric results

The platform supports two types of results disaggregation:

  • Thematic disaggregation: This involves breaking down results by specific thematic characteristics (e.g., ecosystem type, gender). While thematic disaggregation is required for some metrics, it remains optional for others.

  • Spatial disaggregation: This involves dividing results by spatial characteristics, such as jurisdiction, catchment, or other defined spatial areas. Although spatial disaggregation is not required for any metric, it can offer a more detailed understanding of metric performance across the landscape. For example, disaggregating water quality results by sub-catchments can reveal spatial variations, providing richer insights compared to a single, landscape-wide measure.
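The water-quality example above can be sketched with standard-library tools: the same samples yield one landscape-wide mean and per-sub-catchment means. The sub-catchment names and readings are invented for illustration.

```python
# Sketch of spatial disaggregation: compute a single landscape-wide result
# and the same metric disaggregated by sub-catchment. All values hypothetical.
from collections import defaultdict
from statistics import mean

samples = [  # (sub_catchment, dissolved oxygen in mg/L) - hypothetical readings
    ("upper", 8.1), ("upper", 7.9),
    ("middle", 6.4), ("middle", 6.8),
    ("lower", 4.9), ("lower", 5.1),
]

landscape_mean = mean(v for _, v in samples)  # single landscape-wide result

by_catchment = defaultdict(list)
for catchment, value in samples:
    by_catchment[catchment].append(value)
disaggregated = {c: mean(vs) for c, vs in by_catchment.items()}

print(round(landscape_mean, 2))  # 6.53
print(disaggregated)  # per-sub-catchment means reveal spatial variation
```

Here the landscape-wide mean masks the contrast between the upper and lower sub-catchments, which is exactly the richer insight spatial disaggregation is meant to surface.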

Data processing requirements vary depending on the metric and the nature of the data. LandScale does not prescribe uniform methods for all metrics; however, specific guidelines and recommendations are provided in the Performance Metrics Description Table. Assessment teams should adhere to these guidelines when required and follow recommendations as closely as possible. These may include simple procedural advice, published methods, or links to modeling tools.

For further guidelines on handling data responsibly, see the earlier section on data ownership, use, and privacy.

The LandScale platform allows users to enter numeric or categorical results for each metric selected in Step B that has sufficient suitable data, as determined during the data evaluation phase. Metrics found to lack adequate data are marked as data deficient within the platform.

It is essential to enter results in the specific form required by each metric. If results are not in the required form, the data should be re-analyzed to produce results that meet the metric's requirements. Alternatively, the assessment team may revisit Step B and propose an alternative metric that aligns with the form of the available results. Any newly proposed or adjusted metrics must adhere to the guidelines established for selecting metrics and be submitted for validation.
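A lightweight pre-entry check can catch form mismatches before results reach the platform. The form labels and the metric registry below are assumptions for illustration; the platform itself defines each metric's required form.

```python
# Illustrative check that a result matches the form a metric requires.
# The registry, form labels, and category set are hypothetical assumptions.

REQUIRED_FORM = {  # hypothetical registry: metric -> required result form
    "forest_cover": "percent",
    "governance_integrity": "category",
}
ALLOWED_CATEGORIES = {"low", "medium", "high"}

def result_matches_form(metric: str, result) -> bool:
    form = REQUIRED_FORM[metric]
    if form == "percent":
        return isinstance(result, (int, float)) and 0 <= result <= 100
    if form == "category":
        return result in ALLOWED_CATEGORIES
    return False

print(result_matches_form("forest_cover", 53.1))         # True
print(result_matches_form("governance_integrity", 7.2))  # False: re-analyze
```

A `False` here signals either re-analysis of the data or a return to Step B to propose a metric matching the available results.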

For each metric, complete the context information (methodology and limitations) and enter the result of the data processing. Optionally, upload visualizations and set targets and milestones. Document the result year, methodology, and limitations for each metric. If data are unavailable and a result cannot be provided, a justification must be included.

Example results for a metric under the human well-being pillar.