14th International Meeting on Statistical Climatology
Centre International de Conférences - Météo-France - Toulouse - France - June 24-28, 2019
1 Climate records: data homogenization, dataset creation, and uncertainty - John Kennedy, Xiaolan Wang
2 Interactions of weather and climate with human and natural systems - Bryson Bates, Richard Chandler
3 Statistical issues working with large datasets and model outputs - Erich Fischer, Doug Nychka
4 Space-time statistics for modeling and analyzing climate variability - Peter Craigmile, Tim DelSole
5 Weather/climate predictability and forecast evaluation - Matilde Rusticucci, Thordis Thorarinsdottir, David Stephenson
6 Statistics for climate models, ensemble design, uncertainty quantification, model tuning - Ben Sanderson
7 Statistical and machine learning in climate science - Philippe Naveau, Michael Wehner
8 Long-term D&A and emergent constraints on future climate projections - Dorit Hammerling, Reto Knutti
9 Attribution and analysis of single weather events - Chris Paciorek, Sarah Perkins-Kirkpatrick
10 Changes in extremes including temperature, hydrologic, and multivariate compound events - Seung-Ki Min, Jana Sillmann
11 Extreme value analysis for climate applications - Richard Smith, Xuebin Zhang
12 From global change to regional impacts, downscaling and bias correction - Alex Cannon, Mathieu Vrac

  1. Climate records: data homogenization, dataset creation, and uncertainty quantification
Conveners: John Kennedy, Xiaolan Wang
 
 

Observationally based climate data sets are indispensable for many aspects of climate research. In an ideal world, climate data sets would be accurate, free from artificial trends in time or space, and globally complete. Unfortunately, in the real world, there are severe challenges to realising this dream: measurements are noisy, stations move, instruments change or age, satellite orbits drift, and the density of the observing network constantly changes. All of these can introduce artificial, non-climatic changes if left untreated or treated improperly. Such artificial effects could be detrimental to climate analyses that use these data, especially analyses of trends and extremes. In order to make the data sets that climate science needs, it is important not only to address these problems, but also to understand and effectively communicate reliable and comprehensive information about uncertainty in those data sets.

This session calls for contributions related to the problem of creating climate data sets and quantifying their uncertainties, for example: bias correction and homogenization of in situ or satellite data, quality control of observations, infilling of spatially incomplete data, and modelling of complex error structures. It also calls for contributions that use homogeneous climate data to assess climate trends, variability and extremes, together with their uncertainties.
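To give a flavour of one building block of homogenization, the following minimal sketch (synthetic data, not any particular operational algorithm) locates a single mean-shift breakpoint in a candidate-minus-reference difference series by maximising a two-sample t statistic over all split points, a simplified relative of the penalized maximal t tests used in the homogenization literature.

```python
import numpy as np

def detect_break(diff):
    """Locate the most likely mean-shift breakpoint in a
    candidate-minus-reference difference series by maximising
    a two-sample t statistic over all split points."""
    n = len(diff)
    best_t, best_k = 0.0, None
    for k in range(2, n - 2):          # require >= 2 points per segment
        a, b = diff[:k], diff[k:]
        # pooled standard deviation of the two segments
        sp = np.sqrt(((len(a) - 1) * a.var(ddof=1)
                      + (len(b) - 1) * b.var(ddof=1)) / (n - 2))
        t = abs(a.mean() - b.mean()) / (sp * np.sqrt(1 / len(a) + 1 / len(b)))
        if t > best_t:
            best_t, best_k = t, k
    return best_k, best_t

# synthetic example: a 0.5-unit shift introduced at index 60
rng = np.random.default_rng(0)
series = rng.normal(0.0, 0.3, 100)
series[60:] += 0.5
print(detect_break(series))            # breakpoint found near index 60
```

An operational method would additionally apply a penalized critical value to decide whether the detected shift is significant and would iterate to find multiple breakpoints.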


  2. Interactions of weather and climate with human and natural systems
Conveners: Bryson Bates, Richard Chandler
 
 

From a societal perspective, the importance of climate science lies in the potential for living with, adapting to, protecting against and exploiting the weather. Application areas are ubiquitous and include the management of water resources; the assessment of risk to infrastructure from weather-related hazards such as floods, droughts and tropical cyclones; the assessment of climate change impacts on human health and natural ecosystems; the development of agricultural policies to support resilient, sustainable and healthy food systems in a changing world; and the evaluation of renewable energy resources. Planners, decision-makers and stakeholders in these and other sectors must incorporate knowledge and information about weather- and climate-related events, forecasts and projections into their decision making, with a view to increasing benefits and reducing harm and financial losses.

In this context, there is a need for climate scientists to deliver information that can support plausible, defensible, accessible and actionable decisions, with appropriate recognition of the inevitable accompanying uncertainties. However, there is often a mismatch between the type and volume of data provided by climate scientists and the expectations of end-users regarding information type, accuracy and precision.

We invite abstracts describing the application of modern statistical methods that have informed, or are likely to inform, management and planning decisions for climate-dependent systems (e.g. built and natural environments and primary industries). Examples of relevant methods include, but are not limited to: approaches to the quantification and management of climate and weather risk in specific sectors; statistical frameworks to support decision-making under uncertainty in climate-related contexts; the evaluation of future infrastructure investment options; and the communication of uncertain climate information for purposes of decision support and stakeholder engagement. Especially welcome are contributions that involve collaboration across the climate and socioeconomic sciences together with resource managers, planners and policy makers, and that demonstrate how the use of modern statistical methods improved the outcomes of impact, vulnerability and adaptation assessments.


  3. Statistical issues working with large datasets and model outputs
Conveners: Erich Fischer, Doug Nychka
 
 

The rapid increase in the size of geophysical datasets, whether from remotely sensed observations, ambitious compilations of historical archives or large climate model ensembles, challenges our capacity for statistical data analysis. For larger data volumes it is often difficult to conduct exploratory data analysis, data checking, or to fit statistical models using standard approaches. One well-known area is the analysis of spatial geophysical data, where the computational and storage requirements can easily outstrip a researcher's computing resources.

Another area is Bayesian statistics, where standard Monte Carlo algorithms typically require passing through the full data set many times. This session will focus on emerging ideas for approximating statistical methods for large climate problems, or for devising new statistical models that are more efficient to implement for massive data sets. One important feature of these strategies is to exploit parallel algorithms and to break large problems into independent or weakly dependent subsets. A further goal is to harness the supercomputing facilities used for numerical model experiments for a companion analysis of the output.
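As a minimal sketch of the block-wise strategy just described (synthetic data, assuming a simple (time, space) array), the following computes a least-squares trend at every grid cell one block of columns at a time, so that only a small slice of the data is ever held in working memory; because the blocks are independent, the loop parallelises trivially.

```python
import numpy as np

def chunked_trends(field, chunk=1000):
    """Least-squares trend at every grid cell of a (time, space) array,
    computed one independent block of columns at a time; the loop can
    be distributed across workers without communication."""
    nt, ns = field.shape
    t = np.arange(nt, dtype=float)
    t -= t.mean()                                   # centred time axis
    denom = (t ** 2).sum()
    slopes = np.empty(ns)
    for start in range(0, ns, chunk):
        block = field[:, start:start + chunk]       # read one block only
        anom = block - block.mean(axis=0)
        slopes[start:start + chunk] = t @ anom / denom
    return slopes

# synthetic example: 50 years x 10,000 cells with a known trend of 0.02/yr
rng = np.random.default_rng(1)
data = 0.02 * np.arange(50)[:, None] + rng.normal(0, 1, (50, 10_000))
print(chunked_trends(data).mean())                  # close to 0.02
```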


  4. Space-time statistical methods for modeling and analyzing climate variability
Conveners: Peter Craigmile, Tim DelSole
 
 

The climate system is a complex set of interacting processes over the land, sea, and air. Understanding the natural variations of these processes motivates the analysis of climate variability. As we observe more diverse and complex climate datasets and computer model output, we are able to better model and understand these variations. The analysis of fluctuations over seasonal and longer temporal scales, and across spatial scales, requires the development of advanced statistical techniques, especially when climate processes are nonstationary and/or non-Gaussian.

We welcome contributions on time series, spatial, and spatio-temporal analyses of climate variability. This could include, for example, the development of methods that allow for the investigation of interactions between multiple climate processes, possibly using hierarchical statistical models. Methods that are able to capture long-range covariations such as climate teleconnections, or that can be used to derive multivariate indices of climate variability, are of interest. We also encourage contributions on the application of multivariate methods such as Canonical Correlation Analysis, Principal and Predictable Component Analysis, and Discriminant Analysis to climate analyses, especially involving regularization methods for dealing with ill-posed problems. Applications involving dynamically constrained space-time filtering, such as projecting on theoretical eigenmodes, are also welcome, as are methods for the analysis of climate variability in a changing climate system.
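To illustrate the regularization point, here is a minimal sketch (synthetic data) of canonical correlation analysis with a ridge penalty added to the covariance matrices, which keeps the problem well posed when the number of grid points approaches the number of time steps.

```python
import numpy as np

def ridge_cca(X, Y, lam=0.1):
    """Canonical correlation analysis with ridge regularisation of the
    two covariance matrices; returns (approximate) canonical
    correlations and the weight vectors for each field."""
    X = X - X.mean(axis=0)
    Y = Y - Y.mean(axis=0)
    n = X.shape[0]
    Sxx = X.T @ X / (n - 1) + lam * np.eye(X.shape[1])
    Syy = Y.T @ Y / (n - 1) + lam * np.eye(Y.shape[1])
    Sxy = X.T @ Y / (n - 1)

    def inv_sqrt(S):
        w, V = np.linalg.eigh(S)          # S is symmetric positive definite
        return V @ np.diag(1.0 / np.sqrt(w)) @ V.T

    K = inv_sqrt(Sxx) @ Sxy @ inv_sqrt(Syy)
    U, s, Vt = np.linalg.svd(K)
    # s holds the canonical correlations (shrunk by the penalty);
    # a and b map the original fields onto the canonical time series
    a = inv_sqrt(Sxx) @ U
    b = inv_sqrt(Syy) @ Vt.T
    return s, a, b

# synthetic example: two fields sharing one common mode
rng = np.random.default_rng(2)
common = rng.normal(size=(200, 1))
X = common @ rng.normal(size=(1, 15)) + rng.normal(0, 1, (200, 15))
Y = common @ rng.normal(size=(1, 10)) + rng.normal(0, 1, (200, 10))
corr, _, _ = ridge_cca(X, Y)
print(corr[:3])                           # leading correlation is large
```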


  5. Weather/climate predictability and forecast evaluation
Conveners: Matilde Rusticucci, David Stephenson, Thordis Thorarinsdottir
 
 

Weather and climate prediction systems require reliable evidence-based approaches for a) assessing the performance of previous forecasts (verification) and b) assessing potential future forecast skill (predictability). These activities invariably involve the statistical comparison of forecasts with observations, which can be issued in a wide variety of data formats (e.g. univariate series, multivariate records, or spatially gridded fields). Furthermore, the forecasts can be single deterministic forecasts, ensembles of forecasts, or probability estimates.

This session welcomes contributions on predictability and/or forecast verification that demonstrate either interesting new statistical approaches or novel applications of existing methodologies to data. In the interests of a wide and lively discussion, we are happy to receive talks and posters that address any forecast lead time, from daily weather to seasonal and decadal climate, made with dynamical and/or statistical prediction systems. We also welcome contributions that range from operational applications to research focussed on more fundamental issues in predictability and verification.
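As a concrete verification example, the following minimal sketch (synthetic data) computes the standard ensemble estimator of the continuous ranked probability score (CRPS), a proper score that compares a whole ensemble forecast against a single observation.

```python
import numpy as np

def crps_ensemble(ens, obs):
    """Continuous ranked probability score of an m-member ensemble
    against a single observation, using the standard estimator
    CRPS = mean|x_i - y| - 0.5 * mean|x_i - x_j|."""
    ens = np.asarray(ens, dtype=float)
    term1 = np.abs(ens - obs).mean()
    term2 = 0.5 * np.abs(ens[:, None] - ens[None, :]).mean()
    return term1 - term2

# synthetic 50-member temperature ensemble and a verifying observation
rng = np.random.default_rng(3)
forecast = rng.normal(15.0, 2.0, 50)
print(crps_ensemble(forecast, 16.2))   # smaller is better
```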


  6. Statistics for climate models, ensemble design, uncertainty quantification, model tuning
Convener: Ben Sanderson
 
 

The CMIP ensembles remain the primary source for the assessment of future climate impacts using Earth System Models, yet they pose challenges for formal analysis in terms of the quantification of model skill and interdependency for future projections. In this session, we consider novel approaches for integrating results from model ensembles; both ‘ensembles of opportunity’ such as the CMIP archive, and designed ensembles formed from parameter or structural perturbations within a common framework.

We welcome abstracts on model parameter perturbation, model tuning and optimization; uncertainty analysis applied to ensemble output for climate impact risk assessment; and novel approaches for addressing structural error and interdependency in multi-model archives.
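To make the interdependency issue concrete, here is a minimal sketch of a skill-and-independence weighting of the kind discussed in the model-weighting literature: models far from observations are down-weighted, and near-duplicate models share their weight. All distances and the two bandwidth parameters (sigma_d, sigma_s) below are synthetic illustrations, not calibrated values.

```python
import numpy as np

def skill_independence_weights(dist_to_obs, dist_between, sigma_d, sigma_s):
    """Ensemble weights combining a skill term (distance of each model
    to observations) with an independence term (distance of each model
    to every other model); near-duplicates split their weight."""
    skill = np.exp(-(dist_to_obs / sigma_d) ** 2)
    sim = np.exp(-(dist_between / sigma_s) ** 2)
    np.fill_diagonal(sim, 0.0)                 # exclude self-similarity
    w = skill / (1.0 + sim.sum(axis=1))
    return w / w.sum()

# synthetic example: model 2 (index 1) is a near-duplicate of model 1
d_obs = np.array([0.5, 0.6, 0.6, 1.5])         # model-to-observation distances
d_mod = np.array([[0.0, 0.1, 0.9, 1.4],
                  [0.1, 0.0, 0.9, 1.4],
                  [0.9, 0.9, 0.0, 1.2],
                  [1.4, 1.4, 1.2, 0.0]])       # model-to-model distances
print(skill_independence_weights(d_obs, d_mod, sigma_d=0.7, sigma_s=0.4))
```

In the example output the two near-duplicates each receive roughly half the weight of the equally skilful but independent third model, while the poorly performing fourth model is strongly down-weighted.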


  7. Statistical and machine learning in climate science
Conveners: Philippe Naveau, Michael Wehner
 
 

Advances in high performance computing have enabled the production of much larger climate model datasets that are of higher spatial and temporal resolution and/or encompass more realizations than ever before possible. There is considerable information in these datasets that is difficult or impossible to obtain by standard manual techniques. Statistical and machine learning, albeit at an early stage, offer the promise both of extracting information that we know is contained in these datasets and of discovering otherwise unknown information. Feature detection, for instance, can be used to identify known weather or climate patterns, such as storms, in a supervised but non-heuristic manner. Furthermore, causal conditions leading to such features may also be identified.

In addition to transferring existing learning techniques to the climate arena, new machine learning techniques geared towards modelling geophysically based spatio-temporal specificities (e.g., underlying physical processes, chaotic behaviour, climate variability) are strongly encouraged. For example, tailored developments based on non-linear approaches for prediction (deep learning, random forests, etc.), learning approaches (e.g., analogs/nearest neighbours, supervised or unsupervised techniques), causality algorithms (DAGs or others), probability representations of geophysical processes (e.g., via residual and/or recurrent neural networks), and dimension reduction (e.g., variable selection) are all welcome.
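As one concrete instance of the analog idea mentioned above, the following minimal sketch (synthetic data) makes a nearest-neighbour prediction: it finds the k past states closest to the current one and averages their observed successors at the desired lead time.

```python
import numpy as np

def analog_forecast(history, state, lead=1, k=5):
    """Analog (nearest-neighbour) prediction: find the k past states
    closest to the current state and average their successors
    'lead' steps later."""
    past = history[:-lead]                          # candidate analogs
    dists = np.linalg.norm(past - state, axis=1)
    nearest = np.argsort(dists)[:k]
    return history[nearest + lead].mean(axis=0)

# synthetic example on a noisy cyclic two-variable system
rng = np.random.default_rng(4)
t = np.arange(2000)
history = np.column_stack([np.sin(0.1 * t), np.cos(0.1 * t)])
history += rng.normal(0, 0.05, history.shape)
print(analog_forecast(history, history[-1], lead=5))
```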

We also invite contributions exploring the use of a wide variety of statistical learning techniques and their application to the climate sciences. Of particular interest are applications that enable analyses that are not otherwise practical, whether for conceptual or computational reasons.


  8. Long-term detection and attribution and emergent constraints on future climate projections
Detection and attribution of long-term climate change, large-scale climate projections and associated uncertainties, equilibrium climate sensitivity, observational constraints.
Conveners: Dorit Hammerling, Reto Knutti
 
 

Detection and attribution is an important area in the climate sciences, specifically in the study of climate change, and has recently also attracted the attention of the statistics community. Climate change detection and attribution refers to a set of statistical tools for relating observed changes to external forcings, specifically to anthropogenic influences. While this issue can be viewed in different ways, the most commonly applied framework is linear regression. The problem formulation per se seems straightforward, but the challenges lie in the high dimensionality of the problem and the large number of unknown quantities in the context of limited observations.
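The skeleton of that regression framework is the model y = X beta + eps, where y is the observed change pattern and the columns of X are model-simulated responses ("fingerprints") to individual forcings. The following minimal ordinary least squares sketch uses entirely synthetic data; operational methods generalise this to account for internal variability and for noise in X (e.g. total or generalised least squares).

```python
import numpy as np

# Minimal OLS sketch of the detection-and-attribution regression
# y = X beta + eps.  All fingerprints and pseudo-observations below
# are synthetic illustrations.
rng = np.random.default_rng(5)
n = 300                                    # space-time dimension after reduction
x_ghg = np.linspace(0, 1, n)               # "greenhouse-gas" fingerprint
x_nat = 0.2 * np.sin(np.linspace(0, 6 * np.pi, n))   # "natural" fingerprint
X = np.column_stack([x_ghg, x_nat])
y = 0.9 * x_ghg + 1.1 * x_nat + rng.normal(0, 0.1, n)  # pseudo-observations

beta, *_ = np.linalg.lstsq(X, y, rcond=None)
print(beta)   # scaling factors: a factor consistent with 1 and
              # inconsistent with 0 supports detection of that forcing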

Current methods differ in the complexity of the problem formulation and in the assumptions made to reduce the dimensionality of the problem. Most methods implemented so far are frequentist in nature, and Bayesian implementations have only recently appeared on the scene. Lacking comprehensive comparison studies, it is still largely unclear which method is optimal under which circumstances, and where the focus of further methodological development should lie. There is also a dearth of available software to allow scientists to easily apply detection and attribution methods, in particular the more recently developed ones.

We invite presentations on new methodological and computational developments, software implementations, and comparisons between methods. We further invite presentations applying detection and attribution methods in any area of scientific study.


  9. Attribution and analysis of single weather events
Conveners: Chris Paciorek, Sarah Perkins-Kirkpatrick
 
 

The attribution of extreme weather events to anthropogenic influence is an increasingly active field. Traditionally, the magnitude of an extreme event is defined in the observational record via arbitrary spatial and temporal bounds, and its frequency is compared between model simulations that respectively omit and include historical greenhouse gas emissions. The result indicates how the likelihood of the event of interest has changed due to anthropogenic influence on the global climate. Resampling statistics such as bootstrapping are commonly employed to provide confidence estimates around the attribution statement. Recent developments in event attribution have moved beyond historical assessments based on the event magnitude and span a number of key areas. Examples include, but are not limited to: the inclusion of key physical processes that initiated and/or sustained the event, via multivariate or conditional analyses; the inclusion of other coincident events or physical mechanisms (i.e., compound events); forecasting the attribution of a specific event before it has occurred; future attribution assessments under prescribed global warming thresholds; the attribution of the impacts of a specific event on human, biophysical or physiological systems; and exploring how attribution assessments depend on the methods, models and datasets employed. Such developments are working towards improving the robustness of attribution assessments.
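A minimal sketch of the traditional framework just described follows: a probability ratio between a factual ("all forcings") and a counterfactual ("natural only") ensemble of a climate index, with a bootstrap percentile interval. All data and the threshold are synthetic.

```python
import numpy as np

def prob_ratio(factual, counterfactual, threshold, n_boot=2000, seed=0):
    """Probability ratio PR = P(event | all forcings) /
    P(event | natural forcings only), with a bootstrap 95% interval."""
    rng = np.random.default_rng(seed)
    p1 = (factual > threshold).mean()
    p0 = (counterfactual > threshold).mean()   # assumed > 0 here
    boots = []
    for _ in range(n_boot):
        f = rng.choice(factual, size=factual.size, replace=True)
        c = rng.choice(counterfactual, size=counterfactual.size, replace=True)
        q0 = (c > threshold).mean()
        if q0 > 0:                             # skip degenerate resamples
            boots.append((f > threshold).mean() / q0)
    lo, hi = np.percentile(boots, [2.5, 97.5])
    return p1 / p0, (lo, hi)

# synthetic index: the "forced" world is shifted warm by one unit
rng = np.random.default_rng(6)
with_ghg = rng.normal(1.0, 1.0, 500)
natural = rng.normal(0.0, 1.0, 500)
print(prob_ratio(with_ghg, natural, threshold=2.0))
```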

We invite presentations that perform attribution assessments of recent extreme weather events in the context of anthropogenic influence or take a systematic approach to analyzing anthropogenic influence on the likelihood of extreme events. We particularly encourage presentations that aim to advance the field of event attribution, such as innovative statistical techniques, the inclusion of compound events and/or key physical mechanisms, consideration of selection bias from analyzing only events that have occurred, assessment of model fitness for purpose, and multi-method approaches. Presentations that attribute changes in specific extreme events to anthropogenic activity other than greenhouse gas emissions are also welcome.


  10. Changes in extremes including temperature, hydrologic, and multi-variate compound events
Conveners: Seung-Ki Min, Jana Sillmann
 
 

Climate extremes and their changes are of particular relevance to society and ecosystems due to their potentially severe impacts. Correspondingly, the demand for consistent and robust projections of future changes in climate extremes has rapidly increased over the past decade. Changes in climate extremes like heat waves, meteorological droughts or convective rain storms can be expressed and analyzed in terms of single variables, such as temperature and precipitation. Furthermore, the combination of multiple variables and the interaction of different physical processes can lead to severe impacts, a situation referred to as a "compound event". Understanding observed changes even in univariate extreme events is challenging, given their rarity and the larger uncertainties involved. Compound events constitute an even bigger statistical challenge, with higher dimensionality and sparser sampling. On the other hand, dependencies between climate variables can produce larger departures from natural variability than individual variables, and thus can provide an opportunity to better detect changes in climate and to better understand variability in extremes and the associated mechanisms. Advanced statistical methods are required to account for multivariate extreme values and climate variability in observational datasets and climate model simulations.
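A minimal numerical illustration (synthetic Gaussian data) of why dependence matters for compound events: the joint exceedance probability of two correlated variables, say heat and drought indices, can differ strongly from the product of the marginal probabilities that an independence assumption would imply.

```python
import numpy as np

# Synthetic correlated "heat" and "dryness" indices
rng = np.random.default_rng(7)
rho = 0.6
cov = [[1.0, rho], [rho, 1.0]]
heat, dry = rng.multivariate_normal([0, 0], cov, 100_000).T

# compare joint exceedance of the 95th percentiles with the
# probability implied by assuming independence
u, v = np.quantile(heat, 0.95), np.quantile(dry, 0.95)
p_joint = np.mean((heat > u) & (dry > v))
p_indep = np.mean(heat > u) * np.mean(dry > v)
print(p_joint, p_indep, p_joint / p_indep)   # joint risk inflated by dependence
```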

This session solicits studies analyzing changes in univariate and multivariate extreme events, using statistical methods to tackle challenging questions such as the bias correction of climate model data as input for impact models, the evaluation of dynamical processes in climate models with respect to their performance for climate extremes, and the analysis and detection of changes in climate extremes and compound events under future climate change.


  11. Extreme value analysis for climate applications
Conveners: Richard Smith, Xuebin Zhang
 
 

Interest in climate extremes remains high, both from the point of view of how climate change affects extremes and of characterizing extreme events themselves. The basic methods of univariate extreme value theory (e.g. the Generalized Extreme Value distribution, the Generalized Pareto distribution, and numerous extensions or variants) are by now well established in the climate literature and have been incorporated into a number of software packages. However, methods beyond univariate extreme value theory are still rarely used in the climate context. Examples include multivariate extreme value theory (including the dichotomy between asymptotic dependence and asymptotic independence of two variables, and methods appropriate for higher dimensions) and models for spatial extremes, including max-stable and max-infinitely divisible processes. There are also problems involving combinations of multiple variables that do not necessarily require the individual variables all to be extreme (e.g. the combined role of temperature and humidity in heatwave deaths) and that may require new developments in statistical theory.
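For reference, a minimal sketch of the well-established univariate workflow (synthetic data): fit a Generalized Extreme Value distribution to annual maxima and read off a return level. Note that scipy's genextreme uses the shape convention c = -xi relative to the usual climatological GEV parameterisation.

```python
import numpy as np
from scipy import stats

# synthetic 60-year series of annual maxima drawn from a known GEV
rng = np.random.default_rng(8)
annual_max = stats.genextreme.rvs(c=-0.1, loc=30.0, scale=2.0,
                                  size=60, random_state=rng)

# maximum-likelihood fit and the 100-year return level, i.e. the value
# exceeded with probability 1/100 in any given year
c, loc, scale = stats.genextreme.fit(annual_max)
rl_100 = stats.genextreme.isf(1.0 / 100, c, loc, scale)
print(f"100-year return level: {rl_100:.1f}")
```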

Our primary objective in this session is to identify interesting problems concerning climate extremes that are not well handled by currently well established methods, and solutions that require the development of more advanced methods, including but not limited to multivariate extreme value distributions, max-stable processes, and related concepts. We also welcome expository talks that will introduce these concepts to climate scientists.


  12. From global change to regional impacts, downscaling and bias correction
Conveners: Alex Cannon, Mathieu Vrac
 
 

Climate projections are often based on rather coarse-resolution global and regional climate models. Many users interested in climate projections (such as impact modelers), however, act on regional to local scales and often require high-resolution model output. This demand is especially evident for assessing the occurrence of climate and weather extremes. One way to bridge this scale gap is statistical downscaling, either via so-called perfect-prognosis approaches or via model output statistics, including statistical bias correction of global and regional climate model output.
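As one widely used bias correction baseline, here is a minimal sketch (synthetic data) of empirical quantile mapping: each model value is mapped through the historical model distribution onto the corresponding observed quantile.

```python
import numpy as np

def quantile_map(model_hist, obs_hist, model_fut):
    """Empirical quantile mapping: correct model output by mapping each
    value through the historical model quantile function onto the
    observed one.  Values outside the calibration range fall back on
    np.interp's constant extrapolation, a known limitation of the
    basic method."""
    quantiles = np.linspace(0, 1, 101)
    mq = np.quantile(model_hist, quantiles)    # model quantile function
    oq = np.quantile(obs_hist, quantiles)      # observed quantile function
    return np.interp(model_fut, mq, oq)

# synthetic example: the model is 2 units too warm and too variable
rng = np.random.default_rng(9)
obs = rng.normal(10.0, 3.0, 1000)
mod = rng.normal(12.0, 4.0, 1000)
fut = rng.normal(13.0, 4.0, 1000)              # future simulation
corrected = quantile_map(mod, obs, fut)
print(corrected.mean(), corrected.std())       # shifted toward the observed climate
```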

This session seeks to present and discuss recent methodological and conceptual developments in statistical downscaling and bias correction. We especially welcome contributions addressing spatial-temporal and multi-variable variability (in particular of extreme events); non-stationary methods; the development of statistical models for sub-daily variability such as convective events; the integration of process understanding into downscaling and bias correction methods; the selection of predictors to capture climate change; the performance and added value of downscaling methods on seasonal to centennial scales (including the ability to extrapolate beyond observed values); the development of process-based validation diagnostics for statistical downscaling; the assessment of advantages and inherent limitations of different approaches; and the application of these methods in various impact studies.


  © Météo-France - Communication et Documentation Toulouse