Added value of seasonal hindcasts for UK hydrological drought outlook
Wilson C. H. Chan
Nigel W. Arnell
Geoff Darch
Katie Facer-Childs
Theodore G. Shepherd
Maliko Tanguy
Abstract. The UK has experienced recurring periods of hydrological drought in the past, including the latest drought declared in summer 2022. Seasonal hindcasts, consisting of a large sample of plausible weather sequences, can be used to add value to existing approaches to water resources planning. In this study, the drivers of winter rainfall in the Greater Anglia region are investigated using the ECMWF SEAS5 hindcast dataset, which includes 2850 plausible winters across 25 ensemble members and 3 lead times. Four winter clusters were defined from the hindcast winters based on possible combinations of various atmospheric circulation indices (such as the North Atlantic Oscillation, the East Atlantic Pattern and the El Niño-Southern Oscillation). Using the 2022 drought as a case study, we demonstrate how storylines of the event could be created in autumn 2022 to provide an outlook of drought conditions and to explore plausible worst cases over winter 2022/23 and beyond. The winter clusters span a range of temperature and rainfall responses in the Anglian region and represent circulation storylines that could have happened over winter 2022/23. Although winter 2022/23 has now passed, we aim to demonstrate the added value of this approach for providing outlooks during an ongoing event, with a brief retrospective of how winter 2022/23 transpired. Storylines created from the hindcast winters were simulated using the GR6J catchment hydrological model and the groundwater level model AquiMod at selected catchments and boreholes in the Anglian region. Results show that drier than average winters, characterised by predominantly NAO-/EA- and NAO+/EA- circulation patterns, would result in the continuation of the drought, with a high likelihood of below normal to low river flows across all selected catchments and boreholes by spring and summer 2023. Catchments in Norfolk are particularly vulnerable to a dry summer in 2023, as river flows are not estimated to recover to normal levels even under wet La Niña winters characterised predominantly by NAO-/EA+ and NAO+/EA+ circulation patterns, owing to insufficient rainfall to overcome previous dry conditions and the slow-responding nature of groundwater-dominated catchments. Storylines constructed in this way provide outlooks of ongoing events and supplement traditional weather forecasts to explore a wider range of plausible outcomes.
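The abstract describes grouping the hindcast winters into four clusters by combinations of circulation indices. As a rough illustration only, the sketch below assigns synthetic winters to four NAO/EA clusters from the signs of their indices; the column names and data are assumptions made for this example, and the paper's actual clustering of the SEAS5 winters may use a different procedure.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(1)

# One row per hindcast winter: 25 members x 3 lead times x 38 start years = 2850.
# The index values here are synthetic stand-ins, not values computed from SEAS5.
winters = pd.DataFrame({
    "nao": rng.standard_normal(2850),
    "ea": rng.standard_normal(2850),
})

def nao_ea_cluster(row):
    """Label a winter by the signs of its NAO and EA indices."""
    nao = "NAO+" if row["nao"] >= 0 else "NAO-"
    ea = "EA+" if row["ea"] >= 0 else "EA-"
    return f"{nao}/{ea}"

winters["cluster"] = winters.apply(nao_ea_cluster, axis=1)
print(winters["cluster"].value_counts())
```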
Status: final response (author comments only)
RC1: 'Comment on nhess-2023-74', Anonymous Referee #1, 03 Jul 2023
Summary
The paper demonstrates the use of seasonal hindcasts for creating outlooks of river flows and groundwater levels during ongoing droughts, conditioned on atmospheric circulation patterns. These outlooks can supplement traditional weather forecasts for exploring a wider range of plausible outcomes. The demonstration is made on the 2022 drought. Among other things, the results of the case study indicate that drier than average winters would result in the continuation of the drought with a high likelihood of below normal to low river flows by spring and summer.
General evaluation
Overall, I find that the paper is interesting, meaningful and very well-prepared with some room for improvement, which mainly concerns additional discussions that could be made. Specifically, I recommend minor revisions according to the comments provided here below.
Specific minor comments
- Lines 63−64 state that “the Anglian region is particularly vulnerable to prolonged dry weather and droughts as it is the driest region in the UK”. Maybe discussions could be added on whether and how the added value of the practical framework proposed is expected to deviate across regions and across climatic conditions.
- Lines 305−318 discuss the value of the proposed framework by comparing its output for the case examined with the observed winter 2022/23. In my view, these discussions are important for the paper and maybe they could also be supported by one or more figures. For instance, Figure S7 could be moved to the main manuscript.
- Right after lines 305−318, extensive discussions (and maybe a figure as well) could be added to highlight the benefits of applying the new framework along with the use of traditional weather forecasts, instead of using such forecasts only, in the case examined.
Citation: https://doi.org/10.5194/nhess-2023-74-RC1
AC1: 'Reply on RC1', Wilson Chan, 15 Sep 2023
RC2: 'Comment on nhess-2023-74', Anonymous Referee #2, 23 Jul 2023
Review of "Added value of seasonal hindcasts for UK hydrological drought" by Chan et al. This paper describes the use of storylines to assess worst-case scenarios of continuation of drought conditions into the next year, using 2022/2023 as an example. Meteorological hindcasts are from the SEAS5 system together with the GR6J hydrological model in the Greater Anglian region. The storylines use clustering of the meteorological forcings using large-scale indices. The study highlights a useful tool to assess the risk of persisting drought over the winter period, and I recommend that it is published following a minor revision. However, I think the paper would be strengthened by a more rigorous discussion on the skill and reliability of seasonal forecasts in Europe.
Major comments
Uncertainty in seasonal forecasts. Seasonal forecasts are inherently uncertain, especially across Europe. The skill of the forecasts is not great at lead times longer than a month, and beyond that it is very questionable how much use the forecasts have. I think this needs to be discussed more in the paper.
Verification. The authors could have used, for example, the ERA5 reanalysis for verification of the seasonal forecasts. This could also have shed some light on the reliability of the forecasts, and through that have assessed the reliability and usefulness of the methodology.
Minor comments
- L31. You mention the UK as the area here, but the study area is rather eastern England.
- The abstract is a bit too long and the methods are described in a rather convoluted way. I would suggest shortening this a bit.
- L42. SEAS5 provides forecasts up to 215 days, which is just over 7 months.
- L76-78. The argument that there are 2850 winters in the hindcast dataset is a bit exaggerated since the ensemble members from each initialisation are not independent runs. Also, the model runs tend to approach the model climate over time. I think this can be further elaborated.
- L99. The indices were calculated from MSLP. I would put that in the first sentence for clarity.
- Figure 2. From the figure it looks like the EA has a clear correlation with the precipitation anomaly, whereas NINO3.4 and NAO have very weak correlations with precipitation (see the sketch after this list).
- L184. The standard nowadays is to use KGE rather than NSE.
- L199. Why do you state here that "Although winter 2022/23 has now passed..."? The approach is valid regardless of whether it is applied to previous cases or in real time.
- L214. What is slightly lower? How much of normal precipitation, expressed as a percentage?
- L216. Yes, soil moisture would be depleted during a dry summer, but lower reservoirs would not be affected as much. Which drought conditions are you discussing here?
- Figure 5. The colour scale of the percentiles is not great, since normal conditions have a similar colour scheme to the below normal ones (yellow-orange-red). I would suggest using a greenish colour to denote normal conditions.
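A quick way to quantify the relationships noted in the Figure 2 comment above is to correlate each circulation index directly with the winter precipitation anomaly across the hindcast winters. The sketch below uses synthetic data and assumed column names purely for illustration; it is not the authors' processing of the SEAS5 fields.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(2)

# Synthetic stand-in for the hindcast winters: circulation indices and an
# Anglian winter precipitation anomaly (column names are assumptions).
n = 2850
df = pd.DataFrame({
    "nao": rng.standard_normal(n),
    "ea": rng.standard_normal(n),
    "nino34": rng.standard_normal(n),
})
# Toy relationship so the example has something to detect.
df["precip_anom"] = 0.6 * df["ea"] + 0.2 * df["nao"] + rng.standard_normal(n)

# Pearson correlation of each index with the precipitation anomaly.
print(df[["nao", "ea", "nino34"]].corrwith(df["precip_anom"]))
```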
Citation: https://doi.org/10.5194/nhess-2023-74-RC2
AC2: 'Reply on RC2', Wilson Chan, 15 Sep 2023
RC3: 'Comment on nhess-2023-74', Anonymous Referee #3, 26 Jul 2023
This study assesses ECMWF SYS5 for use in drought outlooks over the UK. The paper is beautifully written, the figures are nicely presented and I could clearly follow at every point what the authors did. I really regret to say, then, that I thought the paper was ultimately misguided and that I do not recommend the paper to be published, for the following reasons:
1) The paper makes use of methods from climate studies (notably assessments of climatological distributions from the UNSEEN project) that are inadequate for ensemble forecast verification. As I note in the specific comments below, the accuracy and skill of forecasts can be assessed directly using well-established forecast verification methods, as can the appropriateness of ensemble spread, which consider the correlation of forecasts with observations. I recommend the authors familiarise themselves with the fundamentals of forecast verification (see, e.g., Jolliffe & Stephenson 2011) before reconfiguring their paper.
2) To me, the use of story lines is fundamentally at odds with the aims of ensemble forecasting. The conception of story lines is appropriate in climate projections, where ensembles of different GCMs are not formally statistically exchangeable, and thus should not be used to express, for example, quantitative confidence intervals. Story lines in climate change projections can be thought of as hypotheses. In ensemble forecasting, however, ensemble members should be formally exchangeable (i.e., each member is equally probable), and we can directly test the appropriateness of the ensemble following (1). This means ensembles can be used to assign probabilities to events. Developing story lines based on climate drivers essentially does away with this probabilistic information in preference to a narrative-driven prediction method. The purpose of ensemble weather and climate forecasting can be thought of as an attempt to get away from narrative-style predictions: basically, the uncertainty in weather is irreducible and cannot be distilled to one or two 'story lines'. Because weather is chaotic, when ensembles are constructed correctly, they should be on average more accurate than any single-value forecast.
3) Following on from (2), it's not appropriate to assess probabilistic forecasts - especially of extremes - on a single event as is done in this paper. Probabilistic forecasts must be assessed on a population of events, and the population must be unbiased. Selecting such a population on the basis of when an extreme event (e.g., a drought) is observed isn't correct: it produces a biased population. The reasons for this are both intuitive and highly technical - e.g., it's not possible to assess false alarms when an extreme event is always observed in your population; for more technical reasons see Lerch et al. (2017) and Bellier et al. (2017).
4) Given (1) and (2), it's not appropriate to assess the usefulness of ECMWF SYS5 for drought prediction solely on its ability to replicate a climatology; nor to use it to characterise climate drivers of precipitation. If the authors wanted to reconfigure their paper, reusing some of their techniques to identify climate drivers, the authors might consider:
i) Comparing whether observed teleconnections (as seen in, e.g., reanalyses) are reproduced in SYS5, and whether this changes with lead time
ii) Whether drivers of teleconnections are well forecast by SYS5 (following (3), when the forecasting system predicts such an event, rather than if one is observed)
iii) Using (i) and (ii) to address questions on the conditional skill of SYS5 forecasts. For example, it could be used to answer questions such as: (a) how well does SYS5 predict key drivers of drought? (b) how well does it do at reproducing observed teleconnections? and (c) how do (a) and (b) relate to forecast skill?
These are merely suggestions of course; I leave it to the authors as to what they may wish to pursue. In any case, if the authors did follow these recommendations, I think it would be a fundamentally different paper to what is presented, and hence my recommendation to reject (rather than revise the paper). It's always difficult to reject a paper like this, in which the authors show a good command of statistical and climate analyses, not to mention clear scientific writing and presentation, but which is in other ways seriously flawed (at least in my view). I really hope the authors don't find my review too discouraging - I wish them well reconfiguring their work and analyses to better align with the precepts of forecast verification.
Specific comments
P3 L78-79 "Each ensemble member is perturbed with different initial atmospheric and ocean initial conditions". On first reading this I thought this wording implied that the authors are (re)perturbing ensemble members, which I would think is unlikely given the computational demands of SYS5 and the authors' earlier declaration that they are using the retrospective forecast dataset. I assume the authors mean something like: "SYS5 ensemble members are generated by perturbing initial conditions", so if I'm right I suggest the authors go with something like this. In addition, it's my understanding ECMWF perturbs model physics in SYS5, which the authors might also want to mention.
P4 Section 2.1. I found this method of forecast verification basically inappropriate, as follows:
1) Using the UNSEEN framework isn't really appropriate here: that paper tried to put a flood event in climatological context, therefore assembling a large ensemble to describe the climatology makes sense. But we are dealing with forecasts here, which are expected to be correlated with observations. Assessing a simple model climatology is not enough to demonstrate the value of forecasts.
2) Even given (1), the idea that the model performs well at simulating a climatology isn't well demonstrated by Figure 1. For example, the figure shows that variance of winter rainfall is understated by SYS5. Further, variance can change with lead time, as information from initial conditions wanes and the model reverts to its internal climatology. Finally, it is quite possible to demonstrate bias etc. across multiple sites and lead times, rather than restricting it to a single site and pooling lead times, which occludes valuable information about the utility of the forecasts.
3) As noted in (1), forecasts are expected to be correlated with observations. This means that the utility of forecasts is usually measured by:
1) Forecast skill (i.e., forecast errors with respect to a climatological forecast, computed with appropriate error scores such as the continuous ranked probability score), conditioned on both location and lead time.
2) The reliability of ensemble spread, using appropriate measures such as probability integral transforms, attributes diagrams or spread-error diagrams.
In addition, useful information about forecasts includes:
a) The sharpness of the forecasts
b) The ability to replicate the observed climatology, e.g., with bias, variance, etc. It is only this last criterion that is assessed in the paper.
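To make points (1) and (2) above concrete, the sketch below computes an ensemble CRPS skill score against a leave-one-out climatological reference and a rank histogram as a basic check of ensemble spread. The synthetic data, sample sizes and variable names are illustrative assumptions and do not reflect the SEAS5 hindcast layout or the authors' workflow.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-ins: 38 verifying winters, a 25-member ensemble for each.
n_winters, n_members = 38, 25
obs = rng.normal(160.0, 40.0, size=n_winters)                              # "observed" winter rainfall (mm)
fcst = obs[:, None] + rng.normal(0.0, 50.0, size=(n_winters, n_members))   # correlated ensemble forecasts

def crps_ensemble(y, ens):
    """Ensemble CRPS estimator: E|X - y| - 0.5 * E|X - X'|."""
    ens = np.asarray(ens, dtype=float)
    return np.mean(np.abs(ens - y)) - 0.5 * np.mean(np.abs(ens[:, None] - ens[None, :]))

# Skill relative to a leave-one-out climatological "forecast".
crps_fc = np.mean([crps_ensemble(y, f) for y, f in zip(obs, fcst)])
crps_clim = np.mean([crps_ensemble(y, np.delete(obs, i)) for i, y in enumerate(obs)])
crpss = 1.0 - crps_fc / crps_clim   # > 0: more skilful than climatology

# Rank histogram: roughly flat counts indicate reliable ensemble spread.
ranks = (fcst < obs[:, None]).sum(axis=1)                                  # rank of obs among members (0..25)
counts, _ = np.histogram(ranks, bins=np.arange(n_members + 2) - 0.5)

print(f"CRPSS vs climatology: {crpss:.2f}")
print("Rank histogram counts:", counts)
```

In practice such scores would be computed separately for each catchment and lead time, as the comment suggests.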
References
Bellier J, Zin I, Bontron G. 2017. Sample Stratification in Verification of Ensemble Forecasts of Continuous Scalar Variables: Potential Benefits and Pitfalls. Monthly Weather Review 145: 3529-3544. DOI: 10.1175/mwr-d-16-0487.1.
Lerch S, Thorarinsdottir TL, Ravazzolo F, Gneiting T. 2017. Forecaster's Dilemma: Extreme Events and Forecast Evaluation. Statist. Sci. 32: 106-127. DOI: 10.1214/16-sts588.
Jolliffe IT, Stephenson DB (eds). 2011. Forecast Verification: A Practitioner's Guide. DOI: 10.1002/9781119960003.
Citation: https://doi.org/10.5194/nhess-2023-74-RC3
AC3: 'Reply on RC3', Wilson Chan, 15 Sep 2023