This work is distributed under the Creative Commons Attribution 4.0 License.
Demonstrating the use of UNSEEN climate data for hydrological applications: case studies for extreme floods and droughts in England
Abstract. Meteorological and hydrological hazards present challenges to people and ecosystems worldwide, but the limited length of observational data means that the possible extreme range is not fully understood. Here, a large ensemble of climate model data is combined with a simple grid-based hydrological model, to assess unprecedented but plausible hydrological extremes in the current climate across England. Two case studies are selected—dry (Summer 2022) and wet (Autumn 2023)—with the hydrological model initialised from known conditions then run forward for several months using the large climate ensemble. The modelling chain provides a large set of plausible events including extremes outside the range from use of observed data, with the lowest flows around 28 % lower on average for the Summer 2022 drought study and the highest flows around 42 % higher on average for the Autumn 2023 flood study. The temporal evolution and spatial dependence of extremes is investigated, including the potential time-scale of recovery of flows to normal and the chance of persistent extremes. Being able to plan for such events could help improve the resilience of water supply systems to drought, and improve flood risk management and incident response.
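To make the modelling chain described in the abstract concrete, the sketch below shows the general idea: a simple monthly water-balance model is initialised from known antecedent conditions and run forward under a large ensemble of climate forcings, and the spread of simulated extremes is then examined. This is a minimal illustrative sketch with synthetic data; the model structure, parameters, and numbers are assumptions, not the authors' actual water-balance model or configuration.

```python
import numpy as np

def monthly_water_balance(precip, pet, storage0, capacity=150.0, runoff_coef=0.5):
    """Toy monthly water-balance model.

    precip, pet : 1-D arrays of monthly precipitation and potential evaporation (mm)
    storage0    : initial soil-moisture storage (mm), i.e. the antecedent condition
    Returns an array of monthly flows (mm).
    """
    storage = storage0
    flows = np.empty(len(precip))
    for t, (p, e) in enumerate(zip(precip, pet)):
        storage = max(storage + p - e, 0.0)    # update storage, disallow negative values
        excess = max(storage - capacity, 0.0)  # water above capacity spills as fast runoff
        storage -= excess
        flows[t] = runoff_coef * p * storage / capacity + excess
    return flows

# A large ensemble of plausible monthly forcings (standing in for UNSEEN-style
# climate-model members), all run from the same antecedent storage.
rng = np.random.default_rng(0)
n_members, n_months = 1000, 6
precip_ens = rng.gamma(shape=2.0, scale=30.0, size=(n_members, n_months))
pet_ens = rng.normal(loc=50.0, scale=10.0, size=(n_members, n_months)).clip(min=0.0)

antecedent_storage = 40.0  # dry initial condition, as for a drought case study
flow_ens = np.array([monthly_water_balance(p, e, antecedent_storage)
                     for p, e in zip(precip_ens, pet_ens)])

# The per-member minimum (or maximum) flow gives the distribution of plausible
# extremes, which can be compared with an observation-driven baseline ensemble.
print("Lowest simulated monthly flow across members:", flow_ens.min())
```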
- Preprint (1420 KB)
- Metadata XML
- Supplement (140 KB)
- BibTeX
- EndNote
Status: closed
- RC1: 'Comment on nhess-2024-51', Anonymous Referee #1, 10 May 2024
Review of "Demonstrating the use of UNSEEN climate data for hydrological applications: case studies for extreme floods and droughts in England" by Kay et al.
The study provides an application of the UNSEEN climate data sets to eight regions in England, assessing the potential effects for one recent flood and one recent drought event using a modelling chain that includes a simple monthly water balance model informed by a long historical run of the G2G model, and that includes fidelity tests.
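For readers unfamiliar with the fidelity tests the reviewer refers to, UNSEEN-style fidelity testing typically checks whether the observed record's statistics could plausibly have been drawn from the model ensemble. The sketch below shows one common form of such a check, subsampling the ensemble to the length of the observed record; it uses assumed synthetic data and is not necessarily the exact test used in the paper.

```python
import numpy as np
from scipy import stats

def fidelity_test(ensemble, observed, n_boot=2000, alpha=0.05, seed=0):
    """For each summary statistic, check whether the observed value lies within
    the sampling interval obtained by repeatedly subsampling the ensemble to the
    length of the observed record."""
    rng = np.random.default_rng(seed)
    stat_funcs = {"mean": np.mean, "std": np.std,
                  "skew": stats.skew, "kurtosis": stats.kurtosis}
    results = {}
    for name, func in stat_funcs.items():
        boot = np.array([func(rng.choice(ensemble, size=len(observed), replace=False))
                         for _ in range(n_boot)])
        lo, hi = np.quantile(boot, [alpha / 2, 1 - alpha / 2])
        results[name] = bool(lo <= func(observed) <= hi)  # True means the test passes
    return results

# Synthetic example: a large pooled model ensemble versus a short observed record.
rng = np.random.default_rng(1)
model_ensemble = rng.gamma(shape=2.0, scale=30.0, size=5000)  # e.g. pooled monthly rainfall
observed_record = rng.gamma(shape=2.0, scale=30.0, size=41)   # e.g. 41 observed years
print(fidelity_test(model_ensemble, observed_record))
```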
In my opinion, this is a valuable contribution to the discussion on how to estimate plausible but as-yet-unseen extreme events.
The structure of the manuscript is logical and easy to follow and it is very clearly written overall.
I have only minor comments and would otherwise recommend publication:
- while a monthly temporal scale is sufficient for droughts and drought recovery in most cases, for floods a daily or often sub-daily scale is of interest. The authors mention in their conclusion that this is the case, but I would recommend picking it up earlier in the manuscript, ideally already in the methods, and perhaps discussing it again in the limitations
- in the description of the Summer 2022 drought and Autumn 2023 flood (Sects. 2.3.1 and 2.3.2), how much was each region affected by these events? Were all regions affected similarly, or some in particular?
- the figure colors cannot be distinguished if printed in black and white; please consider adjusting the hue
Line by line comments (mostly editorial):
page 3
- L11: remove brackets
- L12: place the unit directly after rates
- L13: remove brackets
page 5
- L6: remove brackets; are these variables just examples or is the list exhaustive?
page 7
- L20: remove brackets
Citation: https://doi.org/10.5194/nhess-2024-51-RC1
- AC1: 'Reply on RC1', Alison Kay, 18 Jun 2024
- RC2: 'Comment on nhess-2024-51', Ben Maybee, 28 May 2024
This paper provides a valuable demonstration of the application of coupled rainfall-flood model ensembles to extreme events through two UK studies. As the authors note in their Conclusions, this is a physically-informed alternative to primarily statistical existing methodologies, and in my opinion this is especially important. Overall it is an excellent, well written article and I would recommend publication once the Minor comments below are addressed.
Some details of the methodology can be difficult to understand on a first read of the Methods section, but this is hard to avoid with so many datasets and different ensemble setups, and the authors have clearly endeavoured to lead the reader through. In a few places just a little extra elaboration on underpinning datasets or analogue methods (e.g. UKHO) could be considered.
I note only minor corrections that are primarily editorial in character:
- Page 3, lines 16 – 24: the bias-correction factors are not initially motivated. The downscaling felt obvious enough, but I was curious as to the specific need for this part. Only a short motivation would be needed.
- Page 3, line 28/Fig. 1b: it would be useful to provide some physical interpretation of the SAAR ratios, thus rationalising their use in the downscaling. For example, from Fig. 1b they appear in part to track orography (which one would expect), but also to deviate significantly. Please expand a bit on what these ratios physically mean.
- Page 5, line 16: are these the same 17 regions as in Fig. 1, of which the regions used in the study are a subset?
- Page 5, line 17: significance of correlation?
- Page 5, lines 29 – 31: please expand on how WBM Obs generates an ensemble. I presume the members are the different years of HadUK-Grid, 41 members in all, each initialised from the case studies’ antecedent conditions? As a reader not familiar with the HadUK-Grid dataset or the Ensemble Streamflow method, the current exposition isn’t quite enough to piece together how the Obs ensemble is built (which is important for interpreting results; see the illustrative sketch after this comment).
- Page 9, Table 2: given the comments made on the February fidelity failures for single statistics, it's striking to not have a comment on the August SEEngland failure of multiple statistics! Why was this?
- Failure of multiple tests is acknowledged on page 12, lines 28–29, but not explained.
- Figs 2/3: no units on the graphs; please correct. It's a shame that arguably the most important information on these plots is compressed to the RHS – is several years' preceding data necessary? I appreciate it adds context to the extreme flows.
- Figs 4/5: worth explicitly noting that the order of members in the figure legend corresponds to the month in which that member is most extreme. It is not immediately clear which month each member is relevant to, which is perhaps more important than the initialisation batch?
- Page 12, Figs 4/5: not necessarily a correction, but I note that I find it difficult to see the spatial connections between member extremes. The authors have spelled this out in the text, but the current visualisation does not make it awfully clear.
- Figs 6a/7a: the difference between UNSEEN and Obs is difficult to see (I appreciate that is slightly the point in Recovery). However, a log scale may be useful to draw out the differences that are there, highlighting regions where forecasts and Obs differ notably?
- Page 16, line 5: please improve the clarity of the wording here, i.e. to emphasise that you are now referring to WBM Obs extremes which are not present in the UNSEEN ensemble (at least, I believe that is the point).
- Page 18, line 4: worth redefining the AE/PE acronyms. They have not been used since the Methods, and as a meteorologist rather than a hydrologist their meaning is not immediately obvious.
Citation: https://doi.org/10.5194/nhess-2024-51-RC2
- AC2: 'Reply on RC2', Alison Kay, 18 Jun 2024
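Regarding RC2's question above on how the WBM Obs ensemble is generated, the sketch below illustrates the Ensemble Streamflow Prediction (ESP)-style construction the reviewer presumes: each historical year of observed forcing (e.g. one of 41 HadUK-Grid years) supplies one member, and every member starts from the same case-study antecedent conditions. The toy model, numbers, and variable names are assumptions for illustration only, not the paper's actual setup.

```python
import numpy as np

def toy_wbm(precip, storage0, capacity=150.0, coef=0.5):
    """Toy monthly water-balance model returning monthly flows (mm)."""
    storage, flows = storage0, []
    for p in precip:
        storage = min(max(storage + p - 40.0, 0.0), capacity)  # 40 mm nominal monthly PE
        flows.append(coef * p * storage / capacity)
    return np.array(flows)

# Synthetic stand-in for an observed forcing record: 41 historical years of
# monthly precipitation (rows = years, columns = months).
rng = np.random.default_rng(2)
obs_precip = rng.gamma(shape=2.0, scale=30.0, size=(41, 12))

antecedent_storage = 40.0    # fixed antecedent condition taken from the case study
start_month, horizon = 7, 6  # e.g. run forward from July for six months

# One ESP member per historical year: the same start state, different observed forcing.
obs_ensemble = np.array([toy_wbm(year[start_month - 1:start_month - 1 + horizon],
                                 antecedent_storage)
                         for year in obs_precip])
print("Obs-ensemble shape (members, months):", obs_ensemble.shape)
```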
Viewed
| HTML | PDF | XML | Total | Supplement | BibTeX | EndNote |
|---|---|---|---|---|---|---|
| 398 | 85 | 25 | 508 | 34 | 17 | 16 |