Downscaled Data Evaluation
Assessing impacts-relevant climate data products
Quick facts
- Climate model output often has to be post-processed to achieve spatial and temporal resolutions that are useful to decision-makers
- LLNL researchers undertook a project to better understand the strengths and weaknesses of these downscaled products
- The resulting interactive dashboard makes it easier for users to select a product best suited for a given project
To ascertain risk from extreme weather events, a broad community of decision-makers, planners, analysts, and other users relies on historical reconstructions and future projections of local to regional climate. To be of value, however, climate data must be produced in a manner consistent with physical laws and must be relevant to the decision-making process. To keep computational costs manageable, most climate models are run at fairly coarse resolution (for instance, resolving features no smaller than tens to hundreds of kilometers across), so their output must subsequently be downscaled to finer spatial and temporal scales. Projections of wind-farm output, for instance, may require wind speeds at 10-minute intervals, while assessments of urban flooding require water depths at scales of only a few tens of meters.
Numerous climate data products have recently emerged that represent the contiguous United States at scales valuable to local policymakers. These impacts-relevant climate data products include dynamically downscaled products, which use self-consistent regional climate models to simulate regional meteorology, and statistically downscaled products, which rely on functional relationships between spatially coarse climate model outputs and local-scale meteorology. While these products have been used to assess both historical and future climate risk, few efforts have examined how well they serve that purpose; researchers and practitioners often choose a product based on word-of-mouth or standing collaborations.
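To make the statistical approach concrete, the sketch below implements empirical quantile mapping, one common statistical downscaling technique. This is an illustration only, with synthetic data and hypothetical names, not the method behind any particular product discussed here: each future model value is placed at its quantile within the historical model record and mapped to the value at that same quantile in the local observed record.

```python
import numpy as np

def quantile_map(hist_model, hist_obs, future_model):
    """Empirical quantile mapping: locate each future model value at its
    quantile in the historical model distribution, then read off the value
    at that same quantile in the local observed distribution."""
    sorted_model = np.sort(hist_model)
    # Quantile of each future value within the historical model record
    q = np.searchsorted(sorted_model, future_model) / len(sorted_model)
    q = np.clip(q, 0.0, 1.0)
    # Map those quantiles onto the locally observed distribution
    return np.quantile(hist_obs, q)

# Synthetic demo: a coarse model that runs 2 degC cold and too smooth
rng = np.random.default_rng(42)
hist_obs = 15 + 6 * rng.standard_normal(5000)      # local station record
hist_model = 13 + 4 * rng.standard_normal(5000)    # coarse model, same era
future_model = 15 + 4 * rng.standard_normal(5000)  # coarse model, warmer era

downscaled = quantile_map(hist_model, hist_obs, future_model)
print(f"raw future mean: {future_model.mean():.1f}  "
      f"downscaled mean: {downscaled.mean():.1f}")
```

In this toy example, the mapping corrects the coarse model's cold bias and restores the locally observed variability, which is precisely the kind of behavior an evaluation effort must verify rather than assume.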
Leveraging long-standing institutional expertise in model analysis and intercomparison, a team of LLNL researchers has developed an approach for evaluating multiple impacts-relevant climate data products and identifying the best data product(s) for particular climate questions. For this effort, they compiled new impacts-focused metrics derived from tagging and tracking high-impact extreme weather features, including heat waves, tropical cyclones, extratropical cyclones, mesoscale convective systems, atmospheric rivers, and other extreme temperature, precipitation, and wind events. Using these metrics within a new analysis framework of their own design, they examined how well the various data products captured past extreme events relative to historical observations. They also compared future projections across the data products at fixed warming levels.
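As a simplified illustration of an impacts-focused metric of this kind, the sketch below counts heat-wave events, defined here as runs of consecutive hot days, in a daily temperature series and compares a hypothetical downscaled product against observations. The event definition, threshold choice, and synthetic data are assumptions for illustration, not the project's actual metric definitions.

```python
import numpy as np

def heat_wave_count(tmax, threshold, min_days=3):
    """Count heat-wave events: runs of at least `min_days` consecutive
    days with daily maximum temperature above `threshold`."""
    hot = tmax > threshold
    padded = np.concatenate(([False], hot, [False])).astype(int)
    edges = np.diff(padded)  # +1 at each run start, -1 just past each run end
    run_lengths = np.flatnonzero(edges == -1) - np.flatnonzero(edges == 1)
    return int(np.sum(run_lengths >= min_days))

# Synthetic demo: score a hypothetical product against an observed record
rng = np.random.default_rng(7)
obs = 25 + 8 * rng.standard_normal(3650)      # ~10 years of daily Tmax (degC)
product = 25 + 7 * rng.standard_normal(3650)  # a downscaled product to evaluate
thresh = np.percentile(obs, 90)               # threshold fixed by observations
print("obs events:", heat_wave_count(obs, thresh),
      "product events:", heat_wave_count(product, thresh))
```

Even in this toy setting, a product with slightly muted variability produces fewer heat-wave events than observed, illustrating why event-based metrics can expose biases that simple means and climatologies hide.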
A key outcome of the project was a novel visualization dashboard that supports expert guidance on the results of these investigations and provides data for technical analyses. It allows users to compare and contrast the various data products, identify strengths and deficiencies of individual products, and determine which metrics are consistent across products. The new metrics are also enabling a deeper understanding of the processes governing extreme weather features and of how well those processes are represented in modern climate modeling systems. This work may further illuminate why certain biases arise in those systems and point to how they can be mitigated.
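As a rough illustration of the kind of cross-product consistency check such a dashboard can surface, the sketch below flags metrics whose values agree or diverge across products using a coefficient of variation. The product names, metric names, values, and threshold are entirely hypothetical and not taken from the project's dashboard.

```python
import numpy as np

products = ["ProdA", "ProdB", "ProdC", "ProdD"]  # hypothetical product names
metric_names = ["heat_wave_freq", "ar_count", "tc_track_density"]

# Rows: products; columns: metrics. All values are illustrative only.
values = np.array([
    [12.0, 40.2, 0.31],
    [14.5, 41.0, 0.18],
    [13.2, 39.7, 0.44],
    [12.8, 40.5, 0.09],
])

# Coefficient of variation per metric: low spread relative to the mean
# suggests the products agree; high spread flags a metric for expert review.
cv = values.std(axis=0) / values.mean(axis=0)
for name, c in zip(metric_names, cv):
    flag = "consistent" if c < 0.10 else "divergent"
    print(f"{name}: CV = {c:.2f} ({flag})")
```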
Coming soon
This project is expected to be a foundation for an ongoing effort to help a broad community of scientists and end-users find the climate data that they need for decision-making. Efforts are already underway to build a community of practice around these data products that would support lines of communication between data producers, analysts, and end-users.
As new data products come online, the team expects to be able to rapidly analyze those data products for consistency with existing products and to better understand their strengths and weaknesses.