This work is distributed under the Creative Commons Attribution 4.0 License.
Investigation of an extreme rainfall event during 8–12 December 2018 over central Viet Nam – Part 2: An evaluation of predictability using a time-lagged cloud-resolving ensemble system
Abstract. This is the second part of a two-part study investigating an extreme rainfall event that occurred from 8 to 12 December 2018 over central Viet Nam (referred to as the D18 event). This part evaluates the predictability of the D18 event using a time-lagged cloud-resolving ensemble and quantitative precipitation forecast (QPF) system based on the Cloud-Resolving Storm Simulator (CReSS). To this end, 29 time-lagged, high-resolution (2.5 km) members with lead times of up to 8 days were run: the first member was initialised at 12:00 UTC on 3 December 2018, the last at 12:00 UTC on 10 December 2018, and a new member was initialised every 6 h in between. The results reveal that CReSS predicts the rainfall fields well at short range (lead times under 3 days) for 10 December, the rainiest day. In particular, CReSS shows high skill in heavy-rainfall QPFs for the 24-h rainfall of 10 December, with similarity skill score (SSS) values above 0.5 for both the last five and the last nine members. This good performance reflects the model's skilful prediction of other meteorological variables, such as the surface wind field. However, the prediction skill decreases with increasing lead time (beyond 3 days), and QPFs for rainfall thresholds greater than 100 mm remain challenging at lead times longer than 6 days. In addition, an ensemble sensitivity analysis (ESA) of the 24-h rainfall with respect to the initial conditions shows that the rainfall is highly sensitive to the initial conditions, not only at lower levels but also at upper levels, and that this ensemble-based sensitivity decreases with increasing lead time. The analysis of thermodynamic and moisture sensitivities shows that ESA facilitates a better understanding of the sensitivity of a precipitation forecast to its initial conditions, suggesting that ESA could be meaningfully applied to constrain initial conditions in future work.
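For readers unfamiliar with ensemble sensitivity analysis, the quantity evaluated in such studies is, in the standard formulation (Ancell and Hakim, 2007), the linear-regression slope of a scalar forecast metric onto each initial-condition variable across the ensemble members. The following minimal Python sketch illustrates that computation only; the variable names, array shapes, and toy data are assumptions for illustration, not the authors' implementation.

```python
import numpy as np

def ensemble_sensitivity(J, x):
    """Regress a scalar forecast metric J onto an initial-condition field x
    across ensemble members: S_i = cov(J, x_i) / var(x_i), the standard
    ensemble-sensitivity estimate (Ancell and Hakim, 2007).

    J : array (n_members,)           metric, e.g. area-mean 24-h rainfall
    x : array (n_members, n_points)  initial-condition variable per member
    """
    Jp = J - J.mean()                      # metric perturbations
    xp = x - x.mean(axis=0)                # field perturbations
    cov = (xp * Jp[:, None]).mean(axis=0)  # cov(J, x) at each grid point
    var = xp.var(axis=0)
    return np.where(var > 0, cov / np.where(var > 0, var, 1.0), np.nan)

# Toy example: 29 members, 100 grid points; point 10 drives the metric
rng = np.random.default_rng(0)
x = rng.normal(size=(29, 100))
J = 2.0 * x[:, 10] + rng.normal(scale=0.1, size=29)
S = ensemble_sensitivity(J, x)
print(round(float(S[10]), 2))  # regression slope near 2.0 at the sensitive point
```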
Status: final response (author comments only)
RC1: 'Comment on nhess-2023-192', Anonymous Referee #1, 23 Jan 2024
Comments on “Investigation of an extreme rainfall event during 8-12 December 2018 over central Viet Nam – Part 2: An evaluation of predictability using a time-lagged cloud-resolving ensemble system” by Wang et al.
In general, I think this is a good follow-up study to evaluate the usefulness of CReSS ensembles in forecasting an extreme rainfall event over Viet Nam. However, I believe significant editorial improvements (text and figures) are necessary to enhance the manuscript's readability. For example, various datasets are described in the data section (Section 2.1), but many of them are not used in the subsequent sections (e.g. TIGGE, WRF). Some of the colour bars and their labels are quite small. Overall, I think the current manuscript needs major revision.
Specific comments:
Abstract: The authors should state the full form first before using abbreviations e.g. CReSS, QPF, SSS, and ESA.
Line 36: Perhaps it would add some clarity to “The observational data …” by adding GPM in the text.
Lines 40-42: Missing reference(s)?
Lines 97-98: Missing reference(s)?
Lines 113-119: I am not sure whether these details are relevant to the current study.
Lines 123-139: Given that the aim of this study is to demonstrate that the CReSS ensemble can produce good forecasts of extreme precipitation events, it may be necessary to include a brief description of other models that can perform simulations at cloud-resolving resolution, e.g. MM5 by Son and Tan (2009) and WRF by Toan et al. (2018) and Nhu et al. (2017). Information such as resolution and relevant parameterisation schemes (if any) would be relevant in this case. If the resolution and relevant parameterisation schemes of those models are the same as in the current study, the authors might want to emphasise this point.
Lines 139-147: The authors might want to highlight the fact that these global NWP models do not have “cloud-resolving resolution”.
Line 173: Not sure why UKMO is mentioned when their forecast outputs were not used. Also, "ECMWF of the European Union" is not accurate, as the UK is a member state of ECMWF but is not in the European Union.
Lines 175-176: The authors should state the usage of this dataset at the beginning of the paragraph first rather than in the middle of the paragraph.
Section 2.1.2: It might be worthwhile to highlight the fact that NCEP GFS is a deterministic forecast.
Section 2.1.3: The authors should highlight the fact that these are the in-situ observation data as GPM data is also a type of “observation” data.
Section 2.1: The authors might want to restructure this section so that datasets serving a similar purpose are grouped together. A possible structure could be: "Section 2.1.1" In-situ observation data and "Section 2.1.2" GPM, used for model validation (ground truth); "Section 2.1.3" TIGGE and "Section 2.1.4" WRF, used to demonstrate the added value of the CReSS ensemble; "Section 2.1.5" NCEP GFS, used to drive CReSS. This structure would fit nicely with "Section 2.2 Model description …"
Figure 1, Section 2.1: ERA5 was used in Figure 1 but was not mentioned in Section 2.1.
Lines 228-230: "The first members ran at 12:00 UTC on 3 December 2018, and the last member ran at 12:00 UTC on 10 December 2018…" → "The first member was initialised at 12:00 UTC on 3 December 2018, and the last member was initialised at 12:00 UTC on 10 December 2018. A new member was initialised every 6 h within the period 12:00 UTC 3 Dec 2018 to 12:00 UTC 10 Dec 2018."
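As a quick arithmetic check of this suggested wording (a minimal sketch, not connected to the authors' workflow), initialisations every 6 h from 12:00 UTC 3 December to 12:00 UTC 10 December 2018 do yield the 29 members stated in the abstract:

```python
from datetime import datetime, timedelta

# Enumerate initialisation times every 6 h between the first and last member.
first = datetime(2018, 12, 3, 12)   # 12:00 UTC 3 Dec 2018
last = datetime(2018, 12, 10, 12)   # 12:00 UTC 10 Dec 2018

members = []
t = first
while t <= last:
    members.append(t)
    t += timedelta(hours=6)

print(len(members))  # 29, matching the ensemble size in the abstract
```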
Section 2.1.2, Lines 232-234, and Table 1: I am a bit confused. Did the authors use NCEP FNL (Lines 232-234, Table 1; also, from the first part of this two-part study [Wang and Nguyen 2022]) or NCEP GFS (as stated in Section 2.1.2) to drive CReSS?
Line 278: "If small spread …" → "For example, small spread indicates…"
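For context (a trivial sketch with synthetic data, not taken from the manuscript), the ensemble spread at each grid point is typically the standard deviation across members, with small values indicating that the lagged forecasts agree closely:

```python
import numpy as np

# members: (n_members, ny, nx) stack of forecast fields (synthetic stand-in)
members = np.random.default_rng(2).normal(size=(29, 50, 50))

# Point-wise ensemble spread: small values mean the members agree closely.
spread = members.std(axis=0)
print(float(spread.mean()))
```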
Figure 3, Lines 312-313, Figure 5: I believe the OBS the authors are referring to is neither GPM nor the in-situ observations but ERA5, as it bears more similarity to Figure 1f than to Figure 1e. The authors also mention stations, but stations are not indicated in Figure 3. Perhaps the authors meant to show another figure?
Figure 3: Perhaps I missed it, but it is not clear to me how good members (green) and bad members (red) are determined. For example, why is "00:00 UTC 9" a good member, whereas "18:00 UTC 8", "06:00 UTC 9", and "12:00 UTC 9" are not classified as good members even though their spatial structures are very similar to that of "00:00 UTC 9"?
Lines 318, 324: "… executed …" → "… initialised …"
Line 341: Is it GFS or FNL?
Lines 353-367; Figure 5: (1) Perhaps it might be clearer if the authors renamed the subgroups using the period of forecast initialisations, e.g. "First 4 members" → "12 UTC 3–06 UTC 4 Dec". (2) Similar grouping could be done by defining the periods with a "moving window", e.g. "Subgroup 1: 12 UTC 3–06 UTC 4 Dec", "Subgroup 2: 18 UTC 3–12 UTC 4 Dec", etc. The "moving window" approach could help the authors better locate the optimal initialisation periods. (3) If the "moving window" approach is used, some measure of the deviation of the 24-h rainfall field between OBS and the subgroups would be useful in quantifying the optimal initialisation periods.
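One possible realisation of this "moving window" suggestion is sketched below in Python with synthetic stand-in fields; the function names, the window length of four members, and RMSE as the deviation measure are illustrative assumptions, not a prescription:

```python
import numpy as np

def moving_window_means(fields, window=4):
    """Ensemble-mean 24-h rainfall for every window of `window` consecutive
    initialisations in a time-ordered member stack (n_members, ny, nx)."""
    return [fields[i:i + window].mean(axis=0)
            for i in range(fields.shape[0] - window + 1)]

def rmse(a, b):
    """Root-mean-square deviation between two rainfall fields."""
    return float(np.sqrt(np.mean((a - b) ** 2)))

# Synthetic stand-ins for the 29 member forecasts and the observed field
rng = np.random.default_rng(1)
members = rng.gamma(2.0, 20.0, size=(29, 50, 50))  # mm per 24 h
obs = rng.gamma(2.0, 20.0, size=(50, 50))

scores = [rmse(m, obs) for m in moving_window_means(members)]
best = int(np.argmin(scores))  # window index with the smallest deviation
print(f"best window starts at member {best}, RMSE = {scores[best]:.1f}")
```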
Line 365: The rainfall of second 4 members is the highest “in” these… (A word is missing)
Line 366: … due to “a single good forecast initialised at 1800 UTC on 4 Dec”.
Lines 384-388: The definition of the subgroups seems arbitrary. It is not clear to me why these subgroups were chosen for in-depth analysis. Furthermore, would it not be expected that forecasts initialised closer to the target period have better forecast skill, as certain features highly related to the extreme precipitation event are included in the initial conditions? In this sense, an interesting question would be: why can the forecast initialised at 18:00 UTC 4 Dec (partially) capture the spatial extent of the extreme precipitation event of interest, and why was this information lost for the next 72 hours?
Figures 10, 11, 12: The authors should add labels (u, v, qv) to the plots. Labels are also missing in Figures 10a and 10c.
Figure 10: Does it relate to the surface wind, which is typically defined as the 10-m wind, or to the 100-m wind (as stated in Lines 473-474)?
Section 3.2: The authors should indicate which panels the readers should be looking at, e.g. Lines 476-479 should refer to Figure 10a-c, etc.
Lines 502-509: I am confused by the figures and the text. Which levels are actually shown, and which are the authors referring to? I guess the authors are comparing Figures 11 and 10? If this is the case, Line 504 should include something like "… 100 meters (Figure 10)."
Citation: https://doi.org/10.5194/nhess-2023-192-RC1
AC2: 'Reply on RC1', Duc Nguyen, 06 Jun 2024
The comment was uploaded in the form of a supplement: https://nhess.copernicus.org/preprints/nhess-2023-192/nhess-2023-192-AC2-supplement.pdf
RC2: 'Comment on nhess-2023-192', Anonymous Referee #2, 19 Mar 2024
The comment was uploaded in the form of a supplement: https://nhess.copernicus.org/preprints/nhess-2023-192/nhess-2023-192-RC2-supplement.pdf
AC1: 'Reply on RC2', Duc Nguyen, 06 Jun 2024
The comment was uploaded in the form of a supplement: https://nhess.copernicus.org/preprints/nhess-2023-192/nhess-2023-192-AC1-supplement.pdf