This work is distributed under the Creative Commons Attribution 4.0 License.
How well are hazards associated with derechos reproduced in regional climate simulations?
Abstract. An 11-member ensemble of convection-permitting regional simulations of the fast-moving and destructive derecho of June 29 – 30, 2012 that impacted the northeastern urban corridor of the US is presented. This event generated 1100 reports of damaging winds and significant wind gusts over an extensive area of up to 500,000 km², caused several fatalities, and resulted in widespread loss of electrical power. Extreme events such as this are increasingly being used within pseudo-global warming experiments that seek to examine the sensitivity of historical, societally important events to global climate non-stationarity and how they may evolve as a result of a changing thermodynamic and dynamic context. As such, it is important to examine the fidelity with which such events are described in hindcast experiments. The regional simulations presented herein are performed using the Weather Research and Forecasting (WRF) model. The resulting ensemble is used to explore simulation fidelity relative to observations for wind gust magnitudes, spatial scales of convection (as manifested in high composite reflectivity), and both rainfall and hail production as a function of model configuration (microphysics parameterization, lateral boundary conditions (LBC), start date, and use of nudging). We also examine the degree to which each ensemble member differs with respect to key mesoscale drivers of convective systems (e.g. convective available potential energy and vertical wind shear) and critical manifestations of deep convection (e.g. vertical velocities and cold pool generation), and how those properties relate to correct characterization of the associated atmospheric hazards (wind gusts and hail). Here, we show that the use of a double-moment, 7-class scheme with number concentrations for all species (including hail and graupel) results in the greatest fidelity of model-simulated wind gusts and convective structure against the observations of this event. We further show very high sensitivity to the LBC employed and, specifically, that simulation fidelity is higher for simulations nested within ERA-Interim than ERA5.
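For reference, the convective available potential energy (CAPE) mentioned above is conventionally defined as follows (this is the standard textbook form, given here only as background; it is not a quantity specified by the abstract itself):

```latex
\mathrm{CAPE} = \int_{z_{\mathrm{LFC}}}^{z_{\mathrm{EL}}} g\,
\frac{T_{v,\mathrm{parcel}} - T_{v,\mathrm{env}}}{T_{v,\mathrm{env}}}\, dz
```

where \(T_v\) is virtual temperature, \(z_{\mathrm{LFC}}\) the level of free convection, and \(z_{\mathrm{EL}}\) the equilibrium level.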
Status: closed
RC1: 'Comment on nhess-2021-373', Anonymous Referee #1, 23 Jan 2022
Review of “How well are hazards associated with derechos reproduced in regional climate simulations?” by Shepherd et al.
This is a model evaluation study of WRF simulations downscaled to 1.3 km grid spacing with changes of cloud microphysics schemes, lateral boundary conditions (LBC), start date, and nudging. The focus is a derecho produced by a mesoscale convective system (MCS). Since derechos cause significant infrastructure damage and economic loss, it is interesting to see whether models can capture such extreme events and how the simulations are sensitive to different model setups and physical parameterizations. So, I advocate such studies. However, after looking at the results, I had to doubt whether the simulations were carried out correctly or produced on a stable supercomputer/cluster. The model results shifting from a convective system simulated with one microphysics scheme to the disappearance of the system with another microphysics scheme is something I have never experienced as a senior cloud modeler. In particular, switching graupel to hail in the Morrison scheme also caused the disappearance of the MCS, which is not likely to occur since the change from graupel to hail renders minor changes relative to the entire scheme (mainly in fall speed and density). The hail option is recommended by the model developer for continental deep convective cloud cases, but here it does not simulate the MCS at all. I have tested both options in several studies before and this never happened (the simulated convective cloud systems were generally very similar in morphology). In addition, there are many literature studies with different microphysics schemes for a variety of cases that do not show such a result, yet the authors stated that this is an expected result.
Also, none of the simulations can simulate both the derecho and front stages of the observed system; had the study focused on why the simulations fail in this way, it would still be useful. Furthermore, the sensitivity to the two lateral boundary datasets (ERA5 and ERA-Interim) is also opposite to previous studies (many literature studies have shown that ERA5 improves upon ERA-Interim). With all of this considered, it is very difficult for me to trust these model simulations, and thus I recommend rejection of the manuscript at this time.
Below are some specific comments, including the appropriate way to calculate the maximum hail size for comparison with observations. I hope these comments will be useful for the authors to improve the study.
Abstract,
“We also examine the degree to which each ensemble member differs with respect to key mesoscale drivers of convective systems (e.g. convective available potential energy and vertical wind shear) and critical manifestations of deep convection; e.g. vertical velocities, cold pool generation, and how those properties relate to correct characterization of the associated atmospheric hazards (wind gusts and hail).”
- This sentence is near the end of the abstract, yet it is still about the scientific approach. I suggest rephrasing it to describe your key findings, which is more appropriate for a scientific paper.
Introduction,
- “deep convection disproportionally contributes….”, disproportionally does not deliver a good meaning here. Suggest rewording.
- “…and three events caused more than 60% of a utilities’ customers power outage; a derecho, an ice storm and a hurricane (Shield, 2021)”, not three events, should be three types of events.
- Section 1.2: this section is for the case description. There is too much text describing the societal and economic impacts, but it lacks a description of the large-scale and mesoscale meteorological environments in which the event developed, which is the key information for this study. Also, there should be observational analyses of the storm and wind properties of this event, which should be discussed to provide a better background for the event.
Section 2,
- Section 2.1: (a) the description of the model simulations needs some clarification. I am confused by the descriptions at lines 160-165. First you describe that 3 nested domains were used, but then state that "A single domain configuration and inner nest grid spacing is used in all members of the ensemble…". Are you using two types of domain settings (3 nested domains and a single 1.3 km domain)? If so, please clearly describe which domain setup is used for each simulation listed in Table 2. (b) Nudging simulations are not even mentioned here, but they are discussed extensively in the Results section.
- Section 2.2: There are better precipitation data than rain rates retrieved using a Z-R relationship, which in general carries large uncertainty; for example, rain gauge data and the NOAA Stage IV product, which combines radar and rain gauge measurements (a simple Z-R inversion is sketched after these comments for reference).
- Section 2.3: "Grid cells in d03 are classified as containing 'significant hail' in the WRF simulations if there is > 1 mm of hail and/or graupel accumulation, and in RADAR observations for MESH > 5 mm." First, how can the simulations and observations be compared when different definitions of significant severe hail (SSH) are used? Second, on what basis do you define SSH as "> 1 mm of hail and/or graupel accumulation" in the simulations? Accumulation over what time period? In the literature there are several methods to calculate the maximum hail size from the model-predicted hail/graupel size distribution, such as those of Milbrandt and Yau (A Multimoment Bulk Microphysics Parameterization. Part III: Control Simulation of a Hailstorm. J. Atmos. Sci., 63, 3114-3136, 2006) and Snook et al. (Prediction and Ensemble Forecast Verification of Hail in the Supercell Storms of 20 May 2013. Wea. Forecasting, 31, 811-825, 2016); one such approach is sketched below. These methods make the model-observation comparison more physically consistent. Incidentally, "significant hail" should be "significant severe hail" per the conventional terminology in the literature. Another comment on this section is that the description of the metrics for evaluating the models takes too much text and can be tightened up.
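A minimal sketch of the Z-R inversion referred to above, assuming the conventional power-law form Z = aR^b. The coefficients a = 300, b = 1.4 are the standard WSR-88D convective values and are illustrative only; they are not necessarily those used in the manuscript.

```python
import numpy as np

def rain_rate_from_dbz(dbz, a=300.0, b=1.4):
    """Estimate rain rate R [mm/h] from reflectivity [dBZ] via Z = a * R^b."""
    z_linear = 10.0 ** (np.asarray(dbz) / 10.0)  # convert dBZ to linear Z [mm^6 m^-3]
    return (z_linear / a) ** (1.0 / b)           # solve Z = a * R^b for R

# Light, moderate, and intense convective reflectivities (illustrative values)
print(rain_rate_from_dbz([20.0, 40.0, 55.0]))
```

And a minimal sketch of the maximum-hail-size idea behind the methods cited above, assuming an inverse-exponential hail size distribution and an illustrative threshold number concentration; the specific parameter values are hypothetical and not taken from those papers.

```python
import numpy as np

def max_hail_diameter(n0, lam, n_crit=1e-4):
    """Largest 'expected' hail diameter [m] for N(D) = n0 * exp(-lam * D).

    n0 [m^-4] is the distribution intercept, lam [m^-1] the slope, and n_crit
    [m^-3] an illustrative reference concentration: D_max is the size exceeded
    by only n_crit stones per cubic metre.
    """
    # Number of stones larger than D: N(>D) = (n0 / lam) * exp(-lam * D).
    # Setting N(>D_max) = n_crit and solving for D_max:
    d_max = -np.log(n_crit * lam / n0) / lam
    return max(d_max, 0.0)

# Hypothetical distribution parameters; prints D_max in millimetres.
print(1e3 * max_hail_diameter(n0=4e5, lam=400.0))
```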
Section 3,
- Figure 4: the simulations with changes of microphysics only (Goddard, Morrison, Morrison+hail, Thompson, NSSL, Milbrandt-626) totally fail to simulate the convective system, except that Morrison captured the linear system. Switching to another microphysics scheme in WRF usually does not make such a large storm system disappear entirely (I have never experienced or seen this in the literature). The coupling of those microphysics schemes with WRF should have no problem, so it is strange this happened. What is especially suspicious is that when switching the graupel option to hail in the Morrison scheme, the large linear MCS disappeared. This should not be possible in a few days of forecast simulation, since slight differences in microphysics should not have such a huge upscale effect on mesoscale systems over that time.
Furthermore, based on Figures 4-7, none of the simulations can simulate both the derecho and front stages of the observed system, so understanding why the simulations fail in this way should be the first priority of the study.
- Figure 8, why not plot the radar measurements?
- Figures 9-11: because the observed MCS was not captured in most of the simulations, one can see that the simulated wind speeds are also far off. I do not see a point in intercomparing convective winds, cold pools, or other storm-related properties when the mesoscale system does not even exist in the model simulations, or the basic mesoscale storm is totally wrong as shown in Figures 4-7. Therefore, I did not review Section 3.2 "Linking fidelity to metrics of CAPE, downbursts, and cold pool generation".
- Since the sensitivity to the two lateral boundary datasets (ERA5 and ERA-Interim) is opposite to previous studies (many studies have shown that ERA5 improves upon ERA-Interim), the authors need to evaluate both datasets against observations to show that this is an exceptional case in which ERA5 performs worse than ERA-Interim (a simple bias/RMSE check of the kind I have in mind is sketched below). Otherwise, this will make readers doubt whether the simulations were done correctly.
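A minimal sketch of such a reanalysis-versus-observation check, using hypothetical 2-m temperature values at a few station locations; the arrays and station values are illustrative only, not data from the manuscript or from either reanalysis.

```python
import numpy as np

def bias_rmse(reanalysis, observed):
    """Return (bias, RMSE) of collocated reanalysis values against observations."""
    diff = np.asarray(reanalysis) - np.asarray(observed)
    return diff.mean(), np.sqrt((diff ** 2).mean())

obs         = np.array([301.2, 302.5, 300.8, 303.1])  # K, hypothetical station values
era5        = np.array([300.6, 302.0, 300.1, 302.4])  # K, hypothetical collocated values
era_interim = np.array([301.0, 302.7, 300.5, 303.3])  # K, hypothetical collocated values

for name, data in [("ERA5", era5), ("ERA-Interim", era_interim)]:
    b, r = bias_rmse(data, obs)
    print(f"{name}: bias = {b:+.2f} K, RMSE = {r:.2f} K")
```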
Citation: https://doi.org/10.5194/nhess-2021-373-RC1
AC1: 'Reply on RC1', TJ Shepherd, 26 Feb 2022
The comment was uploaded in the form of a supplement: https://nhess.copernicus.org/preprints/nhess-2021-373/nhess-2021-373-AC1-supplement.pdf
RC2: 'Comment on nhess-2021-373', Anonymous Referee #2, 19 Apr 2022
Review of nhess-2021-373: How well are hazards associated with derechos reproduced in regional climate simulations?
The authors used WRF as a convection-permitting regional climate model to produce 11 simulations of a severe derecho that affected the northeastern U.S. in 2012. The derecho had a major impact in terms of property damage and power outages over a large, populated region. This derecho was poorly forecast, yet we need to understand how climate change will impact such extreme mesoscale convective systems. The authors examined the role of microphysical parameterization, nudging, and two different reanalysis products on 3 km simulations of several days leading up to and including the derecho. The authors compared model output to surface and radar observations of precipitation, wind gusts, and hail, as well as variables describing the convective environment, such as vertical velocities and cold pool formation. The explanation of the methods of model assessment was particularly thoughtful.
Overall, the manuscript is well constructed with clear objectives, detailed methodology, and significant findings. I recommend that the manuscript be accepted following minor revisions.
Major comments
- While most of the simulations poorly represented the derecho, this is not surprising given that this event was not well predicted. While I would not ask the authors to address this in the present manuscript, it would be intriguing to duplicate this work for a significant mesoscale convective system that was well predicted.
- While the use of the different microphysical parameterizations in the model was well designed, it is unclear what is learned by the comparison of ERA5 and ERA-Interim. It is not clear to me that you can generalize that ERA-Interim is inherently better at producing boundary conditions for such simulations (l. 526-527), or whether the small differences in the pre-storm temperature and moisture fields in ERA-Interim (l. 413-415) fortuitously produced more realistic simulations.
- I share the concern with the first reviewer that the abstract was not sufficiently specific, but I find the proposed new abstract in the authors’ response to be a significant improvement that addresses this concern.
- I share the first reviewer’s concern about the need for additional context on the convective environment for this storm in the background. The additions proposed by the authors appear to address this concern.
Minor
- 44: Is "atmospheric phenomena" another way of saying "weather"? Or is it intended to capture more?
- 54: focuses
- 51: “function model configuration” – appears a word is missing
- 56: Is “advected” the right word? Perhaps “propagated”?
- 66: run-on sentence
- 88-89: it might be helpful to explain “scale-aware convective parameterizations”. How are they “scale-aware”?
- 93: “degree/manner in what the model parameterization interact” is unclear. What is this trying to say?
- 306: Perhaps I’m missing something obvious, but why would s(w) be used as intensity for vertical motion rather than just w?
- 322-323: This sentence ("Rank correlation coefficients…") isn't clear. How does the rank correlation show which model property most greatly influences skill? The correlations show how well the model and observations agree, but the word "influences" suggests that you can determine a causal mechanism (see the rank-correlation sketch after these comments).
- 501: The authors use the pseudo-global warming framework as a justification but don't provide references (perhaps I missed them) for how this framework has been used to examine mesoscale convective systems, nor provide examples of how such a framework might be used.
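A minimal sketch of what a rank correlation does and does not tell you, using hypothetical ensemble values (the arrays below are illustrative, not results from the manuscript): Spearman's rho measures monotonic co-variation across ensemble members, so a large value indicates association with skill, not a causal influence on it.

```python
import numpy as np
from scipy.stats import spearmanr

# Hypothetical per-member values: simulated peak gust and an evaluation skill score.
simulated_peak_gust = np.array([28.0, 31.5, 25.2, 34.1, 29.8])  # m/s, illustrative
skill_score         = np.array([0.42, 0.55, 0.31, 0.61, 0.47])  # illustrative

rho, p_value = spearmanr(simulated_peak_gust, skill_score)
print(f"Spearman rho = {rho:.2f}, p = {p_value:.3f}")
```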
Citation: https://doi.org/10.5194/nhess-2021-373-RC2
AC2: 'Reply on RC2', TJ Shepherd, 27 May 2022
The comment was uploaded in the form of a supplement: https://nhess.copernicus.org/preprints/nhess-2021-373/nhess-2021-373-AC2-supplement.pdf