Process data and assess metrics
After suitable data have been gathered, additional processing is often required to derive metric values that will form the LandScale assessment results. All data processing is conducted outside of the LandScale platform. A brief description of the processing conducted should be documented in the LandScale platform's methods description accompanying each metric. This documentation will be reviewed as part of Step C validation and may also streamline future reassessments.
[Insert screenshot of sample data processing description]
Data processing requirements vary depending on the metric and the nature of the data. LandScale does not prescribe uniform methods for all metrics; however, specific guidelines and recommendations are provided in the . Assessment teams should adhere to these guidelines when required and follow recommendations as closely as possible. These may include simple procedural advice, published methods, or links to modeling tools.
General recommendations for deriving metric values from source datasets include:
Use simpler methods when suitable: Opt for simpler data transformation methods whenever they yield suitable metric values. Simpler methods are easier for others to understand and to replicate during future assessments.
Undertake advanced processing only when necessary: Use sophisticated processing or modeling only if the assessment team has the expertise and capacity to do so. Ensure familiarity with the chosen methods and confidence that they will enhance accuracy or provide more informative results.
Document methods and limitations: Thoroughly document all processing steps, including any limitations. This transparency is critical for validation and ensures stakeholders can fully understand and interpret the results.
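The recommendations above can be illustrated with a minimal sketch. The metric, the land-cover classes, and the classification of "natural" classes are all invented for illustration; the point is that the method is simple, documented inline, and easy to replicate in a future reassessment.

```python
# Illustrative sketch (not a LandScale tool): a simple, well-documented
# transformation deriving a hypothetical "percent natural habitat" metric
# from a land-cover area table.

def percent_natural_habitat(area_by_class: dict) -> float:
    """Derive percent natural habitat from land-cover areas (hectares).

    Method (suitable for the platform's methods description):
    1. Sum the areas of classes tagged as natural habitat.
    2. Divide by the total mapped area and express as a percentage.
    Limitation: classes not listed in NATURAL_CLASSES are treated as
    non-natural, which may undercount mosaic or transitional cover.
    """
    NATURAL_CLASSES = {"forest", "wetland", "grassland"}  # assumed classification
    total = sum(area_by_class.values())
    natural = sum(v for k, v in area_by_class.items() if k in NATURAL_CLASSES)
    return round(100 * natural / total, 1)

# Hypothetical land-cover areas for a landscape, in hectares
areas = {"forest": 4200.0, "cropland": 3100.0, "wetland": 650.0, "urban": 550.0}
print(percent_natural_habitat(areas))  # → 57.1
```

The docstring doubles as the brief methods description requested above, so the same text can be pasted into the platform alongside the metric result.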
When using data that are proprietary or subject to confidentiality or anonymity provisions, the assessment team must ensure that all reported metric values and supporting documentation comply with these requirements. In all cases, it is critical that no metric results or other information entered into the platform reveal identifying details about individual land units or people within the landscape.
If source datasets contain identifying information, the assessment team must apply processing and analysis steps to anonymize and generalize both spatial and non-spatial data. This may include removing or obscuring personal data, names, or exact geographic locations of features to prevent identification.
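As a sketch of this kind of anonymization step, the snippet below drops identifying fields from a record and coarsens coordinates before results are reported. The field names, record schema, and the 0.1-degree rounding (roughly 11 km at the equator) are assumptions for illustration, not LandScale requirements.

```python
# Illustrative anonymization of a single record before reporting.
# Field names and the coordinate precision are assumed, not prescribed.

def anonymize_record(record: dict) -> dict:
    """Remove identifying fields and generalize exact locations."""
    SENSITIVE_FIELDS = {"owner_name", "farm_id", "phone"}  # assumed schema
    cleaned = {k: v for k, v in record.items() if k not in SENSITIVE_FIELDS}
    # Coarsen coordinates so individual land units cannot be pinpointed.
    for key in ("lat", "lon"):
        if key in cleaned:
            cleaned[key] = round(cleaned[key], 1)
    return cleaned

raw = {"owner_name": "J. Doe", "lat": -3.46721, "lon": 29.92418, "yield_t_ha": 2.4}
print(anonymize_record(raw))
```

In practice the appropriate level of generalization depends on the sensitivity of the data and the density of features in the landscape; coarser rounding or spatial aggregation may be needed where features are sparse.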
For further guidelines on handling data responsibly, see the earlier section on .
The LandScale platform allows users to enter numeric or categorical results for each metric selected in Step B that has sufficient suitable data, as determined during the data evaluation phase. Metrics found to lack adequate data are indicated as data deficient within the platform.
[Insert screenshot of sample results entry on platform]
[Insert screenshot of sample data deficient metric]
The platform supports two types of results disaggregation:
Thematic disaggregation: This involves breaking down results by specific thematic characteristics (e.g., ecosystem type, gender). While thematic disaggregation is required for some metrics, it remains optional for others.
Spatial disaggregation: This involves dividing results by spatial characteristics, such as jurisdiction, catchment, or other defined spatial areas. Although spatial disaggregation is not required for any metric, it can offer a more detailed understanding of metric performance across the landscape. For example, disaggregating water quality results by subcatchments can reveal spatial variations, providing richer insights compared to a single, landscape-wide measure.
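The water-quality example above can be sketched as follows. The subcatchment names and turbidity values are invented sample data; the point is that per-area values surface spatial variation that a single landscape-wide mean would hide.

```python
# Illustrative sketch of spatial disaggregation: mean turbidity per
# subcatchment alongside the landscape-wide figure. Sample data are invented.

from collections import defaultdict
from statistics import mean

# (subcatchment, turbidity in NTU) — hypothetical monitoring points
samples = [("upper", 4.0), ("upper", 5.0), ("middle", 10.0),
           ("middle", 12.0), ("lower", 18.0)]

by_area = defaultdict(list)
for area, value in samples:
    by_area[area].append(value)

landscape_mean = mean(v for _, v in samples)
per_area = {area: mean(vals) for area, vals in by_area.items()}

print(landscape_mean)  # single landscape-wide value: 9.8
print(per_area)        # disaggregated values reveal spatial variation
```

Here the landscape-wide mean (9.8 NTU) masks the fact that the lower subcatchment is far worse than the upper one, which is exactly the kind of insight spatial disaggregation is meant to provide.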
In some cases, metric results may have significant limitations that affect their interpretability, even when they are derived from suitable datasets. It is important to document any such limitations to ensure that users of the assessment results fully understand the context and implications of these results. Properly documenting limitations enables more informed decision-making and responsible application of the results.
Limitations may include:
Limitations from source dataset constraints: If the source dataset was determined to be suitable but with limitations, the assessment team should carefully assess whether these constraints introduce significant limitations to the resulting metric. For example, if a land-use change dataset is missing 25% coverage due to cloud cover in the source remote sensing data, this would typically require documenting a limitation for any results derived from this dataset. In contrast, if only 2% of the landscape is missing data, this could be considered a minor defect, unlikely to necessitate a limitation statement.
Other limitations to metric result quality or interpretability: Limitations may also arise that are unrelated to the source data itself. For example, if the participatory process used to generate metric results for a governance indicator reveals a wide diversity of stakeholder opinions regarding the integrity of governance processes, the assessment team might report an averaged value for the metric. However, they should also document a limitation to indicate the low confidence in or lack of consensus about the result due to the varying perspectives or evidence.
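The missing-coverage screening described in the first bullet can be sketched as a simple threshold check. The 10% cutoff here is an assumption chosen for illustration; LandScale does not prescribe a fixed threshold, and the assessment team should judge each case on its merits.

```python
# Illustrative screening for the cloud-cover example above: flag a limitation
# when missing coverage exceeds a threshold. The 10% cutoff is an assumed
# value for illustration only, not a LandScale requirement.

def needs_limitation_note(missing_fraction: float, threshold: float = 0.10) -> bool:
    """Return True when missing data warrant a documented limitation."""
    return missing_fraction > threshold

print(needs_limitation_note(0.25))  # 25% cloud-cover gap → True, document it
print(needs_limitation_note(0.02))  # 2% gap → False, likely a minor defect
```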
All identified limitations should be documented directly in the platform alongside the corresponding metric results. This documentation will be reviewed as part of the LandScale and local review processes during Step C validation.
[Insert screenshot of sample results limitations entry]
It is essential to enter results in the specific form required by each metric. If results are not in the required form, the data should be re-analyzed to produce results that meet the metric's requirements. Alternatively, the assessment team may revisit Step B and propose an alternative metric that aligns with the form of the available results. Any newly proposed or adjusted metrics must adhere to the and be submitted for validation.