Simulated rainfall extremes over southern Africa over the 20th and 21st centuries
Abstract. In Southern Africa, precipitation is a crucial variable linked to agriculture and water supply. In addition, extreme precipitation causes devastating flooding, and heavy rainfall events are a significant threat to the population in this region. We analyse here the spatial patterns of extreme precipitation and its projected changes in the future. We also investigate whether the Agulhas Current, a major regional oceanic current system, influences those events. For this purpose, we analyse simulations with the regional atmospheric model CCLM covering the last decades and the 21st century. The simulations are driven by atmospheric reanalysis and by two global simulations. The regional simulations display the strongest precipitation over Madagascar, the Mozambique channel, and the adjacent mainland. Extreme rainfall events are most intense over the mountainous regions of Madagascar and Drakensberg and the African Great Lakes. In general, extremes are stronger in the Summer Rainfall Zone than in the Winter Rainfall Zone.
Extremes are projected to become more intense over the South African coast in the future. For the KwaZulu-Natal Province, the heaviest rainfall event in the future is twice as strong as both the strongest extreme simulated in the historical period and the disastrous extreme event observed in April 2022. The impact of the Agulhas Current System on strong rainfall events over the South African coast is not clearly apparent in the simulations.
Status: closed
-
RC1: 'Comment on nhess-2023-147', Anonymous Referee #1, 28 Aug 2023
The manuscript by Tim et al. analyzes simulations of a regional climate model (CCLM) to assess the evolution of extreme rainfall. I don't have a very favorable opinion of this manuscript, which seems to just present the results of a single model and doesn't propose a regional synthesis linked to previous work. The format needs to be reviewed, with many paragraphs consisting of a single sentence, which shows a lack of organization in the arguments presented. The two main problems are 1/ the total lack of comparison with rainfall observations (the climate simulations are only compared with reanalyses), and 2/ the results seem contradictory compared to previous studies, which are based on a much larger number of simulations. Thus, I question the relevance of the whole study. The work that would be required to improve the manuscript seems very extensive and beyond the scope of a major revision.
Page 1, line 24, perhaps relevant here to add more references that deal with changes in extreme rainfall over the same region:
Abiodun, B.J., Abba Omar, S., Lennard, C. and Jack, C. (2016), Using regional climate models to simulate extreme rainfall events in the Western Cape, South Africa. Int. J. Climatol., 36: 689-705. https://doi.org/10.1002/joc.4376
Engelbrecht, C.J., Engelbrecht, F.A. and Dyson, L.L. (2013), High-resolution model-projected changes in mid-tropospheric closed-lows and extreme rainfall events over southern Africa. Int. J. Climatol., 33: 173-187. https://doi.org/10.1002/joc.3420
Mason, S.J., Waylen, P.R., Mimmack, G.M. et al. Changes in Extreme Rainfall Events in South Africa. Climatic Change 41, 249–257 (1999). https://doi.org/10.1023/A:1005450924499
Page 2 line 35, I feel like this section could be a bit improved, to better describe the synoptic influences on extreme rainfall in this area
Manhique, A.J., Reason, C.J.C., Silinto, B. et al. Extreme rainfall and floods in southern Africa in January 2013 and associated circulation patterns. Nat Hazards 77, 679–691 (2015). https://doi.org/10.1007/s11069-015-1616-y
Reading the rather short introduction, I’m left with a question about the novelty of this work. The introduction is quick and does not stress enough what is known/what is unknown about changes in extreme rainfall in Southern Africa. An overview should be provided about 1/ studies analyzing long term trends in extreme rainfall and 2/ the climate change scenarios already available in this region. At the end of the introduction, it should be clearly explained what is the novelty of the present study.
Section 2.1 data, I don’t understand the rationale of using the JRA-55 reanalysis for the hindcast simulations and the ERA5 reanalysis “for validation purpose”. Is the ERA5 reanalysis considered as observations here? The hindcast runs should be compared to observed rainfall.
Page 3, line 65, percentiles are computed for each grid cell, but for the Cape and Natal regions the grid cells are first averaged. It does not make sense to me. This methodological choice should be explained.
Page 3, line 85 “Thus, the spatial distribution of precipitation extremes seems to be generally realistically simulated by CCLM”. So you compare a reanalysis (JRA55) with another reanalysis (ERA5) and you think if they agree it means simulated precipitation is realistic. Simulations should be evaluated by observed rainfall, or perhaps with merged products like CHIRPS that uses raingauges data and satellite data. There is absolutely no mention in the manuscript that ERA5 could be a reliable source of information for extreme rainfall in that area.
Page 4, line 95: are these values realistic compared to actual observations?
Figure 3: please add in the caption the time periods when the trends have been computed.
Page 6, line 119: “The highest grid-cell-scale percentile is found in the future” this is not clear.
Figure 4: “Threshold of the 99th percentile of precipitation..” this is not clear. Do you mean “value” instead of “threshold” ?
Page 8, line 145: I question the relevance of providing the largest event over such a large domain (also mentioned before in the manuscript). The value of 1 pixel for 1 day over long-term simulations has absolutely no meaning in terms of future trends; it would be more relevant to provide an average of changes in percentiles, or the magnitude of the trend, for the whole domain rather than only providing one single value.
Page 8, line 162: “number of extremes per year”; how are extremes defined? Values above the 99th percentile?
Page 9, line 183: please list here the other potential drivers
Sections 3.4 and 3.5 are hard to follow. Why there is a focus on these regions is not explained. Another methodology is presented here (2-day sums etc.) that should rather be in the methods section. Figure 6 is not called in the text. It refers to the “coastline”, so I guess the Cape region. If this is correct, the results are wrong, since it can be clearly seen in the left panel of Figure 6 that the y-axis is higher for the future (0-600) than for the historical period (0-400), so precipitation extremes are increasing, but the opposite can be read in Section 3.4.
Page 16, line 265, please provide a more recent citation than Trenberth 2003 about this statement on the evolution of extreme rainfall
Page 17, line 275: please explain what could be the reason why the CMIP5 models project an increase and your study a decrease. Pohl et al. 2015 use 15 models whereas in the present work only one is used. Therefore there is much more confidence in the robustness of the results of Pohl et al. 2017.
Citation: https://doi.org/10.5194/nhess-2023-147-RC1
AC1: 'Reply on RC1', Eduardo Zorita, 15 Nov 2023
We would like to thank both reviewers for their time and their comments on the manuscript.
Before addressing their comments in detail, we want to explain two general points that were not highlighted clearly enough in the initial submission and that the reviewers have apparently overlooked.
Perhaps the most important criticisms expressed by the reviewers pertain to a lack of validation of the regional climate model used to simulate trends in extreme events in South Africa and to the unclear novelty of our study. These two points are not totally fair, although we acknowledge that they should have been more adequately explained in the initial submission.
We would like to draw the editor's and the reviewers' attention to the fact that this manuscript is a follow-up of a previous study analysing trends in precipitation in this region and its connection to the Agulhas Current (Tim et al., 2023, The impact of the Agulhas Current system on precipitation in southern Africa in regional climate simulations covering the recent past and future, https://doi.org/10.5194/wcd-4-381-2023). This was stated in the submission of the present manuscript (page 2, line 27): ‘In a previous paper (Tim et al., 2023), we analysed the mean precipitation and its trends over the late 20th and 21st centuries simulated by the same regional atmospheric model. We compared it to several observational data sets.’
In that manuscript, we did compare this simulation with observational records of precipitation and several reanalysis data sets in terms of the annual cycle of precipitation and the historical trends of seasonal precipitation in the two rainfall zones. Therefore, this simulation had been previously validated against observations. The validation presented in the previous manuscript was indeed focused on seasonal means and not on extreme events. But validation in terms of trends of extreme events is rather difficult and probably not possible, as daily precipitation records are short and not homogeneous in time. In this respect, reanalyses do provide valuable information, complete in time and space, in contrast to daily station precipitation records.
The two reviewers point to previous studies that have compared regional climate simulations with specific observed precipitation extremes. We also include this type of validation in the present manuscript, albeit focusing on a recent extreme event that did not occur in South Africa but along the Indian Ocean coast. Therefore, it is also not totally fair to state that this study does not compare specific simulated and observed extreme events. We do so, but not the extreme events the reviewers hoped to see.
Regarding the second criticism, namely that this study only includes one regional simulation, whereas previous studies have included an ensemble of simulations with different climate models, we would like to point out that the spatial resolution of our model is much finer than the spatial resolution of previous studies. The simulations included in the CORDEX experiment are typically 0.5 degrees, roughly equivalent to 40 kilometres. In contrast, our model simulation has a spatial resolution of 13 kilometres and, therefore, is at the limit of models that do not specifically resolve local convection. This is the first study that has used a long (multidecadal) high-resolution regional simulation for this area.
Nevertheless, in the initially submitted version of the manuscript, we did discuss the results of the CORDEX experiment, and we summarised the results achieved with that model ensemble. It is known that a reliable simulation of precipitation extremes requires a very high spatial resolution.
Another point is that running the regional climate model at such a spatial resolution consumes considerable computational resources. For instance, this simulation required a whole year of real time on a high-performance computer. At this stage, it is very expensive to compute an ensemble of simulations at this resolution. As stated before, to our knowledge, this is the only model that runs at this high resolution for this area.
We acknowledge that our study can be improved and take the reviewers' suggestions very seriously. In the following, we address their particular points in more detail.
Reviewer #1
‘ Section 2.1 data, I don’t understand the rationale of using the JRA-55 reanalysis for the hindcast simulations and the ERA5 reanalysis “for validation purpose”. Is the ERA5 reanalysis considered as observations here? The hindcast runs should be compared to observed rainfall.’
The regional simulation, driven by the JRA-55 reanalysis, was compared to the ERA5 reanalysis to avoid a complete circularity. In an ideal world, the reviewer is correct that the simulations would have been compared to trends of extremes from station data. However, the observational data sets of daily rainfall are short and certainly have their own homogeneity problems. They can be considered more reliable in the 21st century, but this part of the world was not sufficiently monitored at a daily scale in the 20th century. Trends of precipitation extremes estimated from just 20 years or so of data cannot be reliably computed. This is the rationale for using reanalysis data.
There are additional problems when comparing station-scale extremes and model extremes related to the spatial resolution. Gridded data sets, such as simulations and reanalyses, represent averages over a grid cell, and even in the best case of a very fine resolution, the probability distributions of gridded data sets are much more Gaussian (less skewed) than those of local station data. The high quantiles of gridded data sets are, for this reason, automatically lower. The needed correction depends on the spatial correlation of precipitation, which is very uncertain. Therefore, in terms of quantiles, the comparison between gridded data sets and station data sets is more complex and likely not completely meaningful. On the other hand, long-term, multidecadal trends could be assessed, but again, long-term daily precipitation records have their own problems. For instance, the TRMM daily data set derived from satellite remote sensing covers just 20 years, and its originators explicitly warned against estimating long-term trends from it, as the homogeneity is not assured. The CHIRPS data set only includes monthly mean data and is therefore not useful to validate daily extremes.
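For illustration, the following minimal Python sketch (entirely synthetic data; the number of points per cell and the distribution parameters are arbitrary assumptions, not derived from the model or any observations) reproduces the effect described above: the high quantiles of a grid-cell average are systematically lower than those of a point series within the cell.

```python
# Illustrative sketch only (synthetic data, not CCLM output or station records):
# why area-averaged gridded precipitation has lower high quantiles than point data.
import numpy as np

rng = np.random.default_rng(0)
n_days, n_points = 20 * 365, 9          # ~20 years, 9 hypothetical points in one cell

# Skewed daily "rainfall" at each point: a shared cell-scale component plus
# independent local noise, both gamma-distributed (arbitrary parameters).
cell_component = rng.gamma(shape=0.5, scale=4.0, size=(n_days, 1))
local_noise = rng.gamma(shape=0.5, scale=4.0, size=(n_days, n_points))
points = 0.5 * cell_component + 0.5 * local_noise    # mm/day at each point

cell_mean = points.mean(axis=1)                      # what a grid cell "sees"

print(f"99th percentile, single point:   {np.percentile(points[:, 0], 99):.1f} mm/day")
print(f"99th percentile, grid-cell mean: {np.percentile(cell_mean, 99):.1f} mm/day")
# Averaging makes the distribution less skewed, so its high quantiles are
# systematically lower than those of an individual point.
```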
‘Page 3, line 65, percentiles are computed for each grid cell, but for the Cape and Natal regions the grid cells are first averaged. It does not make sense to me. This methodological choice should be explained.’
These are two very small regions in the context of Southern Africa. In the simulation, they contain just a few model grid cells. The averages were conducted to achieve a more robust and accurate estimation of the trends in extreme precipitation, as these are two focus regions of the study. Certainly, the trends can also be computed at the grid-cell level, and we can also include those results in a revised version.
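To make this methodological choice explicit, here is a minimal sketch (synthetic data; the number of grid cells and all distribution parameters are hypothetical) contrasting the two possible orders of operation: averaging the grid cells of the region first and then computing the 99th percentile, versus computing per-cell percentiles and averaging them afterwards.

```python
# Minimal sketch of the two orders of operation for a small region (e.g. a few
# grid cells representing the Cape or Natal boxes). Synthetic data only.
import numpy as np

rng = np.random.default_rng(1)
n_days, n_cells = 40 * 365, 6                                       # hypothetical region
precip = rng.gamma(shape=0.4, scale=5.0, size=(n_days, n_cells))    # mm/day

# (a) Average the grid cells first, then compute the regional 99th percentile
# (the approach defended above: one more robust regional series).
p99_of_area_mean = np.percentile(precip.mean(axis=1), 99)

# (b) Compute the 99th percentile in each grid cell, then average the percentiles.
p99_per_cell = np.percentile(precip, 99, axis=0)
mean_of_p99 = p99_per_cell.mean()

print(f"(a) 99th percentile of the area mean : {p99_of_area_mean:.1f} mm/day")
print(f"(b) mean of the per-cell percentiles : {mean_of_p99:.1f} mm/day")
```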
‘Page 3, line 85 “Thus, the spatial distribution of precipitation extremes seems to be generally realistically simulated by CCLM”. So you compare a reanalysis (JRA55) with another reanalysis (ERA5), and you think if they agree, it means simulated precipitation is realistic. Simulations should be evaluated by observed rainfall, or perhaps with merged products like CHIRPS that uses raingauges data and satellite data. There is no mention in the manuscript that ERA5 could be a reliable source of information for extreme rainfall in that area.’
As per the previous point, the CHIRPS data set provides monthly precipitation data since 1981. The validation of the simulated precipitation in terms of monthly mean data has already been conducted in our previous manuscript, so it would not be adequate to repeat it here. Regarding validating the simulated precipitation extremes, the observational station record may be adequate when focusing on specific extremes, as in the studies indicated by the reviewer, but the long-term trends derived from station data may be highly problematic.
‘Page 4, line 95: are these values realistic compared to actual observations?’
It is certainly a good suggestion to compare the simulated extreme values with actual observed station data, but this comparison needs to be regarded with caution from the outset. The gridded data represent an average over approximately 150 square kilometres, whereas station data are point observations. The extremes in the simulation cannot represent a point extreme; thus, the model data will always appear too low in the comparison. However, the order of magnitude can be indicative, and we may include these values in a revised version.
‘Figure 3: please add in the caption the periods when the trends have been computed.’
Point taken
‘Page 6, line 119: “The highest grid-cell-scale percentile is found in the future” this is not clear.’
Point taken
‘Figure 4: “Threshold of the 99th percentile of precipitation..” this is not clear. Do you mean “value” instead of “threshold”?’
Yes, we mean value. Point taken
‘Page 8, line 145: I question the relevance of providing the largest event over such a large domain (also mentioned before in the manuscript). The value of 1 pixel for 1 day over long-term simulations has absolutely no meaning in terms of future trends; it would be more relevant to provide an average of changes in percentiles, or the magnitude of the trend, for the whole domain rather than only providing one single value.’
The magnitude of the trends is indicated in Figure 3, so the magnitude of the absolute maximum of precipitation was meant as an additional piece of information. We can exclude it in a revised version, but it may interest some readers. For instance, for the extreme in the Natal region, the scenario simulation displays an extreme twice as strong as in the historical simulation. We agree with the reviewer that this maximum cannot be attributed to greenhouse gas forcing, but impact studies may find it useful to know that much higher extremes are possible according to this simulation.
The need to state both the long-term trends and the magnitude of the extremes is highlighted by another reviewer comment below.
‘Page 8, line 162: “number of extremes per year”; how are extremes defined? Values above the 99th percentile?’
Yes, extremes are defined as values above the 99th percentile; this is stated in the first sentence of Section 3.3.
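For concreteness, a minimal sketch of this counting as we understand it (synthetic daily data; a single 99th-percentile threshold computed over the whole period shown, which may differ in detail from the reference period used in the manuscript):

```python
# Count, for each year, the number of days exceeding the 99th percentile of
# daily precipitation. Synthetic data; threshold taken over the full period.
import numpy as np

rng = np.random.default_rng(2)
years = np.arange(1971, 2001)
daily = rng.gamma(shape=0.4, scale=5.0, size=(years.size, 365))   # mm/day

threshold = np.percentile(daily, 99)                  # 99th-percentile threshold
extremes_per_year = (daily > threshold).sum(axis=1)   # exceedance days per year

for year, count in zip(years[:5], extremes_per_year[:5]):
    print(year, count)
```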
‘Page 9, line 183: please list here the other potential drivers’
We refer here to the direct impact of atmospheric circulation. We would add it in a revised version.
‘Sections 3.4 and 3.5 are hard to follow. Why there is a focus on these regions is not explained. Another methodology is presented here (2-day sums etc.) that should rather be in the methods section. Figure 6 is not called in the text. It refers to the “coastline”, so I guess the Cape region. If this is correct, the results are wrong, since it can be clearly seen in the left panel of Figure 6 that the y-axis is higher for the future (0-600) than for the historical period (0-400), so precipitation extremes are increasing, but the opposite can be read in Section 3.4.’
The rationale for focusing on the Cape Town and Natal regions is twofold. The Cape Town region contains a densely populated city in the Winter Rainfall Zone, in which the drivers of precipitation are known to differ from those in most of inland Southern Africa and are more related to the passage of extratropical lows. The Natal region lies within the tropics, and precipitation extremes there are related to tropical storms. The reason for focusing on this region is the recent flooding suffered in April 2022. The reason for focusing on two-day sums is that this event extended over two consecutive extreme days. Therefore, it is relevant to see whether the climate simulation produces extreme precipitation with this characteristic.
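A minimal sketch of the two-day accumulation diagnostic referred to here (synthetic regional daily series; all names and parameters are illustrative only, not the manuscript's code):

```python
# Two-day rolling sums of a regional daily precipitation series and their
# annual maxima, mimicking an event that extends over two consecutive days.
import numpy as np

rng = np.random.default_rng(3)
n_years = 30
daily = rng.gamma(shape=0.4, scale=6.0, size=(n_years, 365))      # mm/day

two_day_sums = daily[:, :-1] + daily[:, 1:]    # sums of consecutive-day pairs
annual_max_2day = two_day_sums.max(axis=1)     # strongest 2-day event per year

print("Annual maxima of 2-day precipitation, first 5 years:",
      np.round(annual_max_2day[:5], 1))
```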
We agree that Figure 6 needs to be referred to in the text; it should have been cited on page 10, immediately after the figure is displayed.
Section 3.4 discusses the long-term trends in the historical and scenario simulations. This point is also related to a previous point of the reviewer: whereas the long-term trends of the annual maxima are close to zero, there are events in the scenario simulation that are much larger than in the historical simulation, and therefore the plot requires a wider y-axis. This is why quoting the values of the respective absolute maxima, and not only the long-term trends, is informative.
‘Page 16, line 265, please provide a more recent citation than Trenberth 2003 about this statement on the evolution of extreme rainfall’
Point taken
‘Page 17, line 275: please explain what could be the reason why the CMIP5 models project an increase and your study a decrease. Pohl et al. 2015 use 15 models whereas in the present work only one is used. Therefore there is much more confidence in the robustness of the results of Pohl et al. 2017.’
It is not correct to state that the study of Pohl et al. (2017) indicates an increase of extremes across all CMIP5 models and all areas. This is only the case for the ensemble model mean, but the model mean hides a large model spread. For instance, the panels below panels a-d of Figure 2 in Pohl et al. show that the number of models agreeing with the ensemble mean varies quite strongly over the whole area, with many areas displayed in greenish or bluish colours, i.e. 50% of the models or fewer. Regarding the time series for the regions of Malawi and Tanganyika, the series of extremes are flat for many models, though some do show a clear increase. This shows that there are likely many reasons for the inter-model discrepancies.
In addition, another difference between the CMIP5 models and our simulation is, as explained before, the relatively coarse spatial resolution of the CMIP5 models.
We would expand this discussion in a revised version, but we believe that there is no real disagreement between our simulation and the CMIP5 models, but rather a large model spread.
Citation: https://doi.org/10.5194/nhess-2023-147-AC1
-
RC2: 'Comment on nhess-2023-147', Anonymous Referee #2, 07 Sep 2023
Review of “Simulated rainfall extremes over southern Africa over the 20th and 21st centuries” submitted to Natural Hazards and Earth System Sciences (Manuscript ID: nhess-2023-147). This paper proposes to analyze the spatial patterns of extreme precipitation and its projected changes in the future in a regional climate model (RCM), and how it could be linked to the Agulhas Current. This is an interesting topic, especially if the paper could help understand how future changes in extreme rainfall are affected by larger-scale circulation features, such as the Agulhas Current. However, this aspect is not much developed in the present study.
The paper mainly focuses on analyzing regional trends in extreme rainfall using a single realization from a single RCM at 16 km. However, considering how different the results could be in different RCMs and/or in the same RCM driven with different GCMs, I wonder how much trust we can attribute to this type of analysis. It would be good if the authors could highlight a bit more the novelty of their approach, compared to previous work using CORDEX. How does this RCM compare with CORDEX models in the region?
Similarly, in the present manuscript, one RCM is compared to ERA5. However, ERA5 also has biases; it would be good to see how the RCM compares with observed station data and/or satellite-derived datasets (CHIRPS, PERSIANN, TRMM).
Finally, I’m also concerned with the absence of statistical testing, which makes the entire discussion/results a bit subjective. The authors compare different statistics in different datasets and over different time periods, but the discussion/results are never supported by statistical tests. For instance, in Figure 1 (but the same applies to most figures), can we not have a plot of the difference between ERA5 and the RCM? Can we not include a t-test to evaluate the significance of the difference between the 99th percentiles? Can we not include a Mann-Kendall test to assess the trend significance?
Citation: https://doi.org/10.5194/nhess-2023-147-RC2
AC2: 'Reply on RC2', Eduardo Zorita, 15 Nov 2023
We would like to thank both reviewers for their time and their comments on the manuscript.
Before addressing their comments in detail, we want to explain two general points that were not highlighted clearly enough in the initial submission and that the reviewers have apparently overlooked.
Perhaps the most important criticisms expressed by the reviewers pertain to a lack of validation of the regional climate model used to simulate trends in extreme events in South Africa and to the unclear novelty of our study. These two points are not totally fair, although we acknowledge that they should have been more adequately explained in the initial submission.
We would like to draw the editor's and the reviewers' attention to the fact that this manuscript is a follow-up of a previous study analysing trends in precipitation in this region and its connection to the Agulhas Current (Tim et al., 2023, The impact of the Agulhas Current system on precipitation in southern Africa in regional climate simulations covering the recent past and future, https://doi.org/10.5194/wcd-4-381-2023). This was stated in the submission of the present manuscript (page 2, line 27): ‘In a previous paper (Tim et al., 2023), we analysed the mean precipitation and its trends over the late 20th and 21st centuries simulated by the same regional atmospheric model. We compared it to several observational data sets.’
In that manuscript, we did compare this simulation with observational records of precipitation and several reanalysis data sets in terms of the annual cycle of precipitation and the historical trends of seasonal precipitation in the two rainfall zones. Therefore, this simulation had been previously validated against observations. The validation presented in the previous manuscript was indeed focused on seasonal means and not on extreme events. But validation in terms of trends of extreme events is rather difficult and probably not possible, as daily precipitation records are short and not homogeneous in time. In this respect, reanalyses do provide valuable information, complete in time and space, in contrast to daily station precipitation records.
The two reviewers point to previous studies that have compared regional climate simulations with specific observed precipitation extremes. We also include this type of validation in the present manuscript, albeit focusing on a recent extreme event that did not occur in South Africa but along the Indian Ocean coast. Therefore, it is also not totally fair to state that this study does not compare specific simulated and observed extreme events. We do so, but not the extreme events the reviewers hoped to see.
Regarding the second criticism, namely that this study only includes one regional simulation, whereas previous studies have included an ensemble of simulations with different climate models, we would like to point out that the spatial resolution of our model is much finer than the spatial resolution of previous studies. The simulations included in the CORDEX experiment are typically 0.5 degrees, roughly equivalent to 40 kilometres. In contrast, our model simulation has a spatial resolution of 13 kilometres and, therefore, is at the limit of models that do not specifically resolve local convection. This is the first study that has used a long (multidecadal) high-resolution regional simulation for this area.
Nevertheless, in the initially submitted version of the manuscript, we did discuss the results of the CORDEX experiment, and we summarised the results achieved with that model ensemble. It is known that a reliable simulation of precipitation extremes requires a very high spatial resolution.
Another point is that running the regional climate model at such a spatial resolution consumes considerable computational resources. For instance, this simulation required a whole year of real time on a high-performance computer. At this stage, it is very expensive to compute an ensemble of simulations at this resolution. As stated before, to our knowledge, this is the only model that runs at this high resolution for this area.
We acknowledge that our study can be improved and take the reviewers' suggestions very seriously. In the following, we address their particular points in more detail.
Reviewer #2
‘The paper mainly focuses on analyzing regional trends in extreme rainfall using a single realization from a single RCM at 16 km. However, considering how different the results could be in different RCMs and/or in the same RCM driven with different GCMs, I wonder how much trust we can attribute to this type of analysis. It would be good if the authors could highlight a bit more the novelty of their approach compared to previous work using CORDEX. How does this RCM compare with CORDEX models in the region?’
As explained in the introduction to this response, one novelty is the high spatial resolution (16 km) compared to the CORDEX model ensemble (0.44 degrees, approx. 40 km). The CORDEX ensemble displays a relatively large spread in the simulated trends of precipitation extremes: as analysed in the study by Pohl et al. (2017), large areas in Southern Africa display levels of model agreement of 50% or less of the 15 models included. When looking into two specific areas, Malawi and Tanganyika, some models agree with our study in that the simulated trends in extremes are flat, while other models show a long-term increase. We would discuss in more detail the results obtained by Pohl et al. compared to ours, also in terms of the resolution of the CORDEX models.
‘Similarly, in the present manuscript, one RCM is compared to ERA5. However, ERA5 also has biases; it would be good to see how the RCM compares with observed station data and/or satellite-derived datasets (CHIRPS, PERSIANN, TRMM).’
As also explained in the introduction to this response, a validation of the seasonal means of precipitation and their long-term trends was already presented in a previous paper (Tim et al., 2023). The currently available data sets do not allow for a reliable estimation of historical extremes of daily precipitation - the data sets mentioned by the reviewer include only monthly means, with the exception of TRMM. However, TRMM covers a short period (1997-2015), and its description explicitly warns against using it as a climatological product, as the homogeneity of the records was not a focus in their compilation.
Our study looks more specifically at one type of extreme precipitation in the Natal region, caused by tropical cyclones. We explore whether these types of extremes may become more frequent or more intense, concluding that the long-term trends are also flat. However, the scenario simulation does simulate stronger extremes, so the long-term trends are, in this regard, not fully informative.
‘Finally, I’m also concerned with the absence of statistical testing, which makes the entire discussion/results a bit subjective. The authors compare different statistics in different datasets and over different time periods, but the discussion/results are never supported by statistical tests. For instance, in Figure 1 (but the same applies to most figures), can we not have a plot of the difference between ERA5 and the RCM? Can we not include a t-test to evaluate the significance of the difference between the 99th percentiles? Can we not include a Mann-Kendall test to assess the trend significance?’
We agree that this information can be included in a revised version. However, statistical significance computed at the grid-cell level can be highly misleading, as a map of individually non-significant trends can still be collectively highly significant if many independent grid cells display a trend of the same sign. Therefore, the overall significance of a map of trends depends on the level of spatial autocorrelation of precipitation at different timescales, and this is not straightforward to compute. The overestimation of the role of statistical significance has been highlighted in publications in high-profile journals (Amrhein et al., 2019, Nature 567, 306, ‘Retire statistical significance’; Wasserstein & Lazar, 2016, The ASA Statement on p-Values: Context, Process, and Purpose, The American Statistician, 70:2, 129-133, doi:10.1080/00031305.2016.1154108).
Citation: https://doi.org/10.5194/nhess-2023-147-AC2
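For reference, a minimal sketch of the kind of per-grid-cell trend test suggested by the reviewer (a Mann-Kendall test, here without tie correction), together with the field-significance caveat discussed in the reply above; the data are synthetic and all parameters are illustrative assumptions:

```python
# Per-grid-cell Mann-Kendall trend test on synthetic annual series, illustrating
# that many cells can fail a local 5% test even when the whole field shares a
# weak trend of the same sign (which is why field significance matters).
import math
import numpy as np

def mann_kendall_p(x):
    """Two-sided p-value of the Mann-Kendall trend test (no tie correction)."""
    n = len(x)
    s = sum(np.sign(x[j] - x[i]) for i in range(n - 1) for j in range(i + 1, n))
    var_s = n * (n - 1) * (2 * n + 5) / 18.0
    if s > 0:
        z = (s - 1) / math.sqrt(var_s)
    elif s < 0:
        z = (s + 1) / math.sqrt(var_s)
    else:
        z = 0.0
    return math.erfc(abs(z) / math.sqrt(2.0))   # = 2 * (1 - Phi(|z|))

rng = np.random.default_rng(4)
n_years, n_cells = 50, 100
weak_trend = 0.01 * np.arange(n_years)          # same-signed weak trend everywhere
series = weak_trend[:, None] + rng.normal(size=(n_years, n_cells))

p_values = np.array([mann_kendall_p(series[:, k]) for k in range(n_cells)])
print(f"Cells locally significant at 5%: {(p_values < 0.05).sum()} of {n_cells}")
# Judging the map as a whole additionally requires the spatial autocorrelation
# of the field (e.g. field-significance or false-discovery-rate procedures).
```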