the Creative Commons Attribution 4.0 License.
Comparison of different rheological approaches and flow direction algorithms in a physically based debris flow model for data scarce regions
Abstract. A debris flow simulation model was proposed for data-scarce regions. The model couples a one-dimensional explicit solution for a monophasic sediment-water mixture with flow direction algorithms for debris flow routing. We investigate the effects of different flow direction algorithms (D8, D∞, and Freeman’s Multiple Flow Direction (MFD)) and multiple rheology approaches (Newtonian, Bingham, Herschel-Bulkley, and dilatant) for the one-dimensional flow on the debris flow simulations. The model was tested by simulating debris flows triggered by an extreme rainfall event in the Mascarada river basin in southern Brazil. We conducted two separate sets of simulations: one focused on the effects of flow direction algorithms, considering multiple DEM resolutions, and another comparing rheology approaches. A third simulation was conducted for multiple debris flows concurrently, utilizing optimal parameters derived from the results of the two simulation sets. D8 proved to be unsuitable for debris flow routing, whereas MFD performed better for the high-resolution DEM (1 m pixel size) and D∞ for coarser resolutions (2.5, 5, and 10 m). In terms of affected area, the difference between the rheology approaches was less impactful than the difference between flow direction algorithms. The lack of velocity estimates and deposition depths for the simulated debris flows hindered a detailed comparison of which rheology produced the more accurate result. Nevertheless, we found MFD and the dilatant fluid to perform slightly better and utilized the optimal parameters to simulate three other debris flows, reaching true positive ratios from 58 % to 83 %.
Status: final response (author comments only)
RC1: 'Comment on nhess-2023-119', Anonymous Referee #1, 29 Nov 2023
This paper presents new tools to address debris flow hazards by developing and testing new models to evaluate debris flow inundation zones. The models developed use different approaches based on Newtonian and non-Newtonian rheologies. The manuscript is well-written and organized. The results are of interest to the scientific community, especially to those who need to evaluate the hazard posed by these phenomena but are limited by the scarcity of data. After reading the manuscript, I have the following suggestions and questions:
General comments
Aim.
-To write the aim of the paper in a more concise way. It is not clear if the aim is the model developed, to compare the effects of the simulations performed or to evaluate their differences.
Methodology.
-Specify all the variables in the formulas used to validate the model. The significance of VP in equation 20 is missing.
Model inputs and data.
-Include a more comprehensive description of the parameters used to test the models, more specifically kinematic viscosity, consistency factor, and mixture density. Were they based on previously published information or theoretically derived? In the case of mixture density, I suggest consulting the following papers:
Yang, T., Li, Y., Zhang, Q. et al. Calculating debris flow density based on grain-size distribution. Landslides 16, 515–522 (2019). https://doi.org/10.1007/s10346-018-01130-2.
Ji, F., Dai, Z., and Li, R.: A multivariate statistical method for susceptibility analysis of debris flow in southwestern China, Nat. Hazards Earth Syst. Sci., 20, 1321–1334, https://doi.org/10.5194/nhess-20-1321-2020, 2020.
Thouret, J. C., Antoine, S., Magill, C., & Ollier, C. (2020). Lahars and debris flows: Characteristics and impacts. Earth-Science Reviews, 201, 103003.
-Line 244-245. Initiation volumes are mentioned, please add the data used and how they were estimated.
-Lines 216 – 217. Explain the influence of the flow partition value (1.5) in the spreading and distal reach of the simulations performed.
Results.
-Explain if excluding simulations that did not meet the stopping criteria could affect the performance evaluation of a particular FDA.
-In Figures 4 and 6 it seems like, regardless of the FDA, all the simulations have better performance at higher kinematic viscosities or consistency/mixture density. Is this attributable to the constitutive equations of the FDA?
Discussion.
-The mixture density used to calibrate the models, 2.65 g/cm3, is higher than that reported by other authors. Please discuss if this factor could have changed the results obtained from the models.
-Model performance was evaluated by the parameter Hs in subsections 4.1 and 4.2. On the other hand, in sections 4.3 and 5, other parameters were included, for example, TPR and FPR. It would be interesting to discuss all this data in the previous sections.
-What would be the minimum value of Hs for the simulation results to be considered a good approximation?
References.
Check that all references are cited in the text and in alphabetical order. For example, Giráldez et al. (2016) is missing; Kobiyama et al. (2019) is not in alphabetical order.
Specific comments are included in the PDF file.
I hope my comments will be useful in improving the manuscript.
AC1: 'Reply on RC1', Leonardo Rodolfo Paul, 15 Feb 2024
We appreciate your feedback; it will be helpful for this manuscript. We took note of each comment and responded to them as follows:
-To write the aim of the paper in a more concise way. It is not clear if the aim is the model developed, to compare the effects of the simulations performed or to evaluate their differences.
Authors: We rewrote the sentence containing the aims of the paper. The aim is to compare the effects of both rheology and flow direction methods on the simulations. The model is a means for this analysis, so it is not the main objective.
-Specify all the variables in the formulas used to validate the model. The significance of VP in equation 20 is missing.
Authors: VP was a typo: it is supposed to be TP. Thank you for pointing it out.
-Include a more comprehensive description of the parameters used to test the models, more specifically kinematic viscosity, consistency factor, and mixture density. Were they based on previously published information or theoretically derived? In the case of mixture density, I suggest consulting the following papers:
Yang, T., Li, Y., Zhang, Q. et al. Calculating debris flow density based on grain-size distribution. Landslides 16, 515–522 (2019). https://doi.org/10.1007/s10346-018-01130-2.
Ji, F., Dai, Z., and Li, R.: A multivariate statistical method for susceptibility analysis of debris flow in southwestern China, Nat. Hazards Earth Syst. Sci., 20, 1321–1334, https://doi.org/10.5194/nhess-20-1321-2020, 2020.
Thouret, J. C., Antoine, S., Magill, C., & Ollier, C. (2020). Lahars and debris flows: Characteristics and impacts. Earth-Science Reviews, 201, 103003.
Authors: We added information to make this section clearer and consulted the suggested papers. Since we did not have access to the deposits immediately after the debris flow events, finer particles, such as clay and fine sand, had possibly already been washed out. Therefore, we did not measure those parameters and instead utilized them as calibration variables, consulting the literature to obtain reasonable ranges for each parameter and testing them out.
-Line 244-245. Initiation volumes are mentioned, please add the data used and how they were estimated.
Authors: The debris flows are shallow-landslide induced. Thus, we utilized information from a technical report (SEMA and GPDEN/UFRGS, 2017)¹ about the average depth of failure. We calculated the landslide initiation area based on orthophotos and multiplied it by the average depth of failure. We added a citation for this report.
¹ The technical report can be found at: https://www.ufrgs.br/gpden/wordpress/wp-content/uploads/2020/07/DRH-GPDEN-2017-Diagnostico-preliminar-de-Rolante.pdf
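As a rough sketch of this estimate (the numbers below are hypothetical, not taken from the report), the initiation volume is simply the mapped scar area times the average failure depth:

```python
# Hypothetical values: initiation volume estimated as the mapped
# scar area (from orthophotos) times the average depth of failure
# (from a technical report).
area_m2 = 1200.0   # hypothetical mapped initiation area [m^2]
depth_m = 1.5      # hypothetical average depth of failure [m]

volume_m3 = area_m2 * depth_m
print(volume_m3)   # 1800.0
```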
-Lines 216 – 217. Explain the influence of the flow partition value (1.5) in the spreading and distal reach of the simulations performed.
Authors: We did not do an in-depth analysis of the effects of the flow partition. However, high values concentrate the flow in a single direction. Thus, higher flow partition exponents lead to higher flow depths between time steps and higher velocities, and possibly enhance runout distance, since they make the algorithm behave almost like a single flow direction method. We added a brief discussion clarifying the effects of this parameter.
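For context, Freeman-style multiple flow direction algorithms partition outflow among downslope neighbours in proportion to slope raised to the partition exponent. A minimal Python sketch (hypothetical slope values; the exact weighting in the authors' model may differ) illustrates how a larger exponent concentrates the flow toward the steepest neighbour:

```python
def mfd_fractions(slopes, p):
    """Partition flow among downslope neighbours, Freeman-style:
    f_i = s_i**p / sum_j(s_j**p), where only positive (downslope)
    slopes receive flow. A larger exponent p concentrates flow in
    the steepest direction, approaching single flow direction."""
    pos = [max(s, 0.0) for s in slopes]
    total = sum(s ** p for s in pos)
    if total == 0.0:
        return [0.0] * len(slopes)  # no downslope neighbour: no outflow
    return [s ** p / total for s in pos]

slopes = [0.30, 0.10, 0.05, -0.02]  # hypothetical neighbour gradients
for p in (1.0, 1.5, 4.0):
    print(p, [round(f, 3) for f in mfd_fractions(slopes, p)])
```

Running this shows the fraction routed to the steepest neighbour growing with the exponent, which is consistent with the concentration effect described above.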
-Explain if excluding simulations that did not meet the stopping criteria could affect the performance evaluation of a particular FDA.
Authors: In cases where the stopping criterion is not met, the debris flow travels further, reaching unreasonably long runout distances, sometimes even leaving the simulation domain. Had we included those cases in the evaluation, the respective FDA would have shown a poorer performance. However, at the time the simulations were run, interrupting the model would not produce an output, so we could not check the performance of those simulations.
-In Figures 4 and 6 it seems like, regardless of the FDA, all the simulations have better performance at higher kinematic viscosities or consistency/mixture density. Is this attributable to the constitutive equations of the FDA?
Authors: We believe that the debris flows in this area involved a high amount of friction due to the large quantity of blocks and boulders (both grain-to-grain and grain-to-bed friction). Since the rheological models are simplified, a high value of kinematic viscosity compensates for the lack of terms representing friction losses. Thus, to lower flow velocities and allow for deposition in the observed area, high values of kinematic viscosity resulted in better performances for all FDAs. However, we can observe in the figure that MFD has an inflection: as viscosity increases, Hs starts to decrease. This is a consequence of the flow spreading combined with high resistance to flow, which causes the debris flow to stop before reaching the observed deposition area. We added more information discussing the lack of bed-fluid friction loss terms in the velocity calculation and their possible effects on our simulations.
-The mixture density used to calibrate the models, 2.65 g/cm3, is higher than that reported by other authors. Please discuss if this factor could have changed the results obtained from the models.
Authors: We utilized the mixture bulk density to obtain the ranges of kinematic viscosity from authors who measured only dynamic viscosity. If we had utilized a more realistic, lower mixture density, the range of dynamic viscosity would be offset (a higher magnitude for both upper and lower limits). This would change the simulations that took values from the upper and lower limits of the range, but not the simulations whose viscosities overlap between the previous and updated ranges. Basically, the mixture density was not an input; it was just utilized as a proxy to obtain the calibration ranges. We added more information clarifying the purpose of the mixture density in our study.
-Model performance was evaluated by the parameter Hs in subsections 4.1 and 4.2. On the other hand, in sections 4.3 and 5, other parameters were included, for example, TPR and FPR. It would be interesting to discuss all this data in the previous sections.
Authors: We added information about the other performance metrics.
-What would be the minimum value of Hs for the simulation results to be considered a good approximation?
Authors: The interpretation can change from author to author. Hs ranges from -1 to 1. In general terms, an Hs of zero means that the model output is as good as a random guess. Values from 0.4 to 0.6 are considered moderate agreement, 0.6 < Hs < 0.8 substantial agreement, and Hs > 0.8 almost perfect agreement. We will add an interpretation table as supplementary material.
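For a 2x2 contingency table, the Heidke score can be computed directly from the TP/FP/FN/TN counts. A small Python sketch using the standard formula (not taken from the authors' code):

```python
def heidke_score(tp, fp, fn, tn):
    """Heidke skill score for a 2x2 contingency table (equivalent
    to Cohen's kappa in this case); ranges from -1 to 1, where
    0 means the model is as good as a random guess."""
    num = 2.0 * (tp * tn - fp * fn)
    den = (tp + fn) * (fn + tn) + (tp + fp) * (fp + tn)
    return num / den

print(heidke_score(50, 0, 0, 250))   # perfect prediction -> 1.0
print(heidke_score(10, 10, 10, 10))  # no skill -> 0.0
```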
Check that all references are cited in the text and in alphabetical order. For example, Giráldez et al. (2016) is missing; Kobiyama et al. (2019) is not in alphabetical order.
Authors: Thank you for noticing those errors. We added the missing reference and rearranged the order.
Citation: https://doi.org/10.5194/nhess-2023-119-AC1
RC2: 'Comment on nhess-2023-119', Anonymous Referee #2, 15 Apr 2024
This manuscript proposes a debris-flow model that can be used to simulate debris-flow processes in data-sparse regions. The authors explore how different rheological models and flow direction algorithms impact model performance against four debris flows in Brazil. I think that this manuscript addresses an interesting question, but I believe that major revisions are needed to provide a better understanding of exactly how this model works and what information it provides.
General Comments
It is currently unclear throughout the course of the manuscript what inputs are needed to run a simulation of the model, how those inputs are determined, and what the model outputs (i.e., sediment deposited by a debris flow or debris-flow inundation extent) given that information. This needs to be properly addressed.
The captions for all figures need to be improved so that readers have a better understanding of what they are seeing. Currently, the captions provide little or no information regarding the flow direction algorithm and rheological models used, and no information regarding which debris-flow sites are displayed.
The authors should make clear how this model improves our ability to understand debris-flow processes in data-sparse regions. How does this model improve upon existing models that use flow direction algorithms to determine debris-flow runout, for example?
I am concerned about the quality of the mapped debris flow data and the topography data used to develop this model. While I understand this data can be difficult to obtain in many areas, I am worried that it limits the authors’ ability to truly assess the performance of the model.
Specific Comments
Methods
I recommend restructuring and improving this section of the manuscript, as many details regarding how the model works are currently unclear. First, a more robust introduction to the model is needed. Currently, there are only three sentences regarding the model itself (Lines 71-76). This needs to be greatly expanded to show readers how the model actually runs a simulation. What are the inputs to the model? What does the model output? For example, does it output the extent of inundation as is suggested in later figures (e.g., Figure 5)? Or does it output the areas where sediment is deposited, as is suggested later in the text (e.g., Line 358). The sentence regarding how the simulation ends is currently confusing without more context. I think multiple paragraphs of text need to be added to walk readers through how a simulation of the model is run. Additionally, I think a flow chart that visualizes a model simulation from start to finish would be more useful than the current flow chart that walks users through the study (Figure 1).
Why do the authors choose Heidke’s score (Line 156) for model evaluation? They note that the large amount of True Negative points could hinder model evaluation, and propose the solution of setting the number of TN points to five times the number of total positives, but would a metric that does not include TN be more applicable in this scenario? Something like the Threat Score or the metric introduced in Heiser et al. (2017)? https://doi.org/10.1007/s10596-016-9609-9
As I note below, I think Section 3.2 regarding model inputs needs to be incorporated into this section. It is currently unclear 1) what inputs are needed to run a simulation of the model, and 2) how those inputs are determined. One example of the current shortcomings in this aspect is debris-flow volume. From reading the manuscript, I can discern that volume is a required input for model simulation (e.g., Line 244), but this is not stated anywhere, including Table 1, which includes model inputs. Furthermore, there is no information regarding how this model input is determined. Was it measured in the field? This is a broader issue for all model inputs. E.g., How were n and m calculated?
Study Area
The naming convention for the four debris flows that were used for model development (F1 – F4) should be introduced in this section. I think it would also be helpful to have a figure that shows the mapped extent of inundation for each of these four. Currently, this naming convention is not introduced until the end of the Results section (Line 277), making it difficult to follow as a reader.
Building off of my previous point, how were the F1 – F4 debris flows selected? The authors first state that they selected debris flows that did not reach the main channel (Line 191), and thus had defined deposition zones, but then reveal that the F4 debris flow did, in fact, reach the main channel and does not have a discernible deposition zone (Line 208). Why was the F4 debris flow selected for analysis if this was the case? Also, why were debris flows F2 and F3 run as one simulation while F1 and F4 were run individually?
Finally, how were these debris flows mapped? The authors state that they were identified using an existing inventory (Line 208), but some additional sentences revealing the methods used in that paper would be helpful. It is currently unclear whether the debris flows were mapped using field techniques, as suggested in Line 182, or using satellite imagery, as suggested in Line 346.
To improve the structure and readability of the manuscript, I would recommend moving Section 3.1 to before the Methods section, and to incorporate Section 3.2 into the Methods section.
Results
It is currently unclear what many tables and figures in the results are showing. From what I can discern, I would guess that Figures 4, 6, and 7, for instance, are showing model results when calibrated against debris flow F1? But I’m not sure. This is why it is critical to outline in the Study Area and Methods sections which debris-flow you calibrated against and why. I would also guess that Figure 5 is showing debris flow F1, but I am again not sure. As an aside, the captions for all figures throughout the manuscript need to be improved.
Section 4.3, Model Validation, presents new information that wasn’t outlined in the methods. There needs to be information in the Methods section that outlines which debris flow you calibrated the model against, how you determined what those best-fit parameters were, and which watersheds you validated those parameters against. Also, did you only validate the model for MFD and dilatant rheological approach, as suggested by the caption of Figure 8? If so, why?
In Line 273, the authors state that the end of the simulation shows the debris-flow depth. Is this the sediment depth deposited by the debris flow, or the peak flow depth of the debris flow measured over the course of a simulation? This is a very important distinction that is not made throughout the manuscript.
The caption of Table 2 needs to be improved to let readers know what is being shown. Is this using the performance against all four debris flows using the parameters calibrated for F1? What flow routing algorithm was used? What rheological model? This is all currently unclear.
Discussion
Line 304 – The stopping criterion that is mentioned multiple times throughout the paper is still unclear to me. Please add more text to the Methods section that better outlines this.
I am concerned by the fact that the authors think that there is enough error in the delineated debris-flow scar or in the DEM (Line 349) to really impact the performance of the model against debris flows F2 and F3. I understand that this data can be difficult to obtain in data-sparse regions, but if the error is large enough in one or both of these areas, then I question the validity of using this case for model validation. The authors also note that a potential DEM artifact in the topography for F4 (Line 360). Again, I find this concerning.
Line 362 – What is MDT? As far as I can tell, this is not defined anywhere in the manuscript.
Citation: https://doi.org/10.5194/nhess-2023-119-RC2
AC2: 'Reply on RC2', Leonardo Rodolfo Paul, 27 May 2024
Authors: Thank you for your comments and suggestions. They will help to improve our manuscript. We took note of each specific comment and replied to them as follows:
Reviewer: I recommend restructuring and improving this section of the manuscript, as many details regarding how the model works are currently unclear. First, a more robust introduction to the model is needed. Currently, there are only three sentences regarding the model itself (Lines 71-76). This needs to be greatly expanded to show readers how the model actually runs a simulation. What are the inputs to the model? What does the model output? For example, does it output the extent of inundation as is suggested in later figures (e.g., Figure 5)? Or does it output the areas where sediment is deposited, as is suggested later in the text (e.g., Line 358). The sentence regarding how the simulation ends is currently confusing without more context. I think multiple paragraphs of text need to be added to walk readers through how a simulation of the model is run. Additionally, I think a flow chart that visualizes a model simulation from start to finish would be more useful than the current flow chart that walks users through the study (Figure 1).
(Commentary moved by the authors from Study Area section)
To improve the structure and readability of the manuscript, I would recommend moving Section 3.1 to before the Methods section, and to incorporate Section 3.2 into the Methods section.
Authors: We agree and will expand the section about the model. As suggested, we will add a flowchart describing how the model works. Also, we have a list of the input parameters from line 200 to 225, but we will add information about the inputs in the revised section describing the model. The final output is a raster with valid values for the runout pixels and where the pixel value represents the final flow height. The user can set an iteration interval in which the model takes snapshots along the simulation and outputs flow heights for these specific timesteps.
Reviewer: Why do the authors choose Heidke’s score (Line 156) for model evaluation? They note that the large amount of True Negative points could hinder model evaluation, and propose the solution of setting the number of TN points to five times the number of total positives, but would a metric that does not include TN be more applicable in this scenario? Something like the Threat Score or the metric introduced in Heiser et al. (2017)? https://doi.org/10.1007/s10596-016-9609-9
Authors: We agree that other metrics could help to assess the performance of the simulations, such as the suggested Threat Score. Heidke’s score (or Cohen’s Kappa) is widely utilized for debris flow simulation, even in recent publications (e.g., https://doi.org/10.5194/nhess-22-3183-2022), hence our choice. We limited the number of TN points because of the size of the simulation area (an entire basin, for instance, would otherwise yield inflated scores), to improve the identification of poor performances. For the same reason, we also calculated the true positive ratio, false discovery ratio, and false negative ratio, which do not account for TN. The Threat Score seems suitable since it disregards TN, and we could add it to the analysis.
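For reference, the TN-free metrics mentioned here take only a few lines of Python (standard definitions; the Threat Score is also known as the critical success index):

```python
def tpr(tp, fn):
    """True positive ratio (recall): hits over all observed positives."""
    return tp / (tp + fn)

def fdr(tp, fp):
    """False discovery ratio: false alarms over all predicted positives."""
    return fp / (tp + fp)

def threat_score(tp, fp, fn):
    """Threat Score (critical success index): ignores TN entirely,
    so it is insensitive to the size of the simulation domain."""
    return tp / (tp + fp + fn)

print(tpr(30, 10), fdr(30, 10), threat_score(30, 10, 10))
```

None of these depend on the count of true negatives, which makes them robust to the choice of simulation domain extent.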
Reviewer: As I note below, I think Section 3.2 regarding model inputs needs to be incorporated into this section. It is currently unclear 1) what inputs are needed to run a simulation of the model, and 2) how those inputs are determined. One example of the current shortcomings in this aspect is debris-flow volume. From reading the manuscript, I can discern that volume is a required input for model simulation (e.g., Line 244), but this is not stated anywhere, including Table 1, which includes model inputs. Furthermore, there is no information regarding how this model input is determined. Was it measured in the field? This is a broader issue for all model inputs. E.g., How were n and m calculated?
Authors: We will further describe the required inputs and how they were estimated. n and m are exponential coefficients of the rheological equations and were empirically set. Since we could not determine these values through experiments, we utilized them as calibration parameters, which is why we have a wide range for these coefficients. To clarify, tests were performed with n > 2, and for high viscosities the stopping criterion was met within few iterations (the debris flow barely moved downslope). Conversely, for values of m that were too small, the simulation carried on until the debris flow reached the DTM boundary and flowed out, and the stopping criterion was never met. Based on those tests, we narrowed the values of m we would adopt in this study.
Study Area
Reviewer: The naming convention for the four debris flows that were used for model development (F1 – F4) should be introduced in this section. I think it would also be helpful to have a figure that shows the mapped extent of inundation for each of these four. Currently, this naming convention is not introduced until the end of the Results section (Line 277), making it difficult to follow as a reader.
Authors: We will introduce the naming convention as suggested.
Reviewer: Building off of my previous point, how were the F1 – F4 debris flows selected? The authors first state that they selected debris flows that did not reach the main channel (Line 191), and thus had defined deposition zones, but then reveal that the F4 debris flow did, in fact, reach the main channel and does not have a discernible deposition zone (Line 208). Why was the F4 debris flow selected for analysis if this was the case? Also, why were debris flows F2 and F3 run as one simulation while F1 and F4 were run individually?
Authors: We ran those simulations individually to enhance computational efficiency. Since F2 and F3 were spatially close, they were simulated together; however, all the debris flows could have been simulated together. We tried to cover the debris flows that did not reach the channel, but they were too few. Near debris flows F1, F2, and F3 there was another debris flow that did reach the channel; thus, to add more individuals to the validation, we included F4. Since F4 was close to the other debris flows, we had more confidence that its soil properties would be similar to those of F1, F2, and F3.
Reviewer: Finally, how were these debris flows mapped? The authors state that they were identified using an existing inventory (Line 208), but some additional sentences revealing the methods used in that paper would be helpful. It is currently unclear whether the debris flows were mapped using field techniques, as suggested in Line 182, or using satellite imagery, as suggested in Line 346.
Authors: We are going to add more information regarding the debris flow inventory. The debris flows were mapped using satellite imagery, but information regarding the depth of landslide failure (which would act as sediment source to the debris flow) was observed on field. We will describe further the debris flow mapping and clarify the sources of each information.
Reviewer: It is currently unclear what many tables and figures in the results are showing. From what I can discern, I would guess that Figures 4, 6, and 7, for instance, are showing model results when calibrated against debris flow F1? But I’m not sure. This is why it is critical to outline in the Study Area and Methods sections which debris-flow you calibrated against and why. I would also guess that Figure 5 is showing debris flow F1, but I am again not sure. As an aside, the captions for all figures throughout the manuscript need to be improved.
Authors: Yes, these images are for debris flow F1. We will introduce the naming convention in the methodology and add them to Figure captions.
Reviewer: Section 4.3, Model Validation, presents new information that wasn’t outlined in the methods. There needs to be information in the Methods section that outlines which debris flow you calibrated the model against, how you determined what those best-fit parameters were, and which watersheds you validated those parameters against. Also, did you only validate the model for MFD and dilatant rheological approach, as suggested by the caption of Figure 8? If so, why?
Authors: We intended to work with the original DTM resolution for the rheological comparison, so we selected the flow direction algorithm with the highest performance, which was the MFD for 1 m resolution. Then, we tested the rheological equations, and the best-fit parametrization was chosen. This was our calibration, performed for F1. The calibrated parameters were then applied to the other debris flows to check their validity.
Reviewer: In Line 273, the authors state that the end of the simulation shows the debris-flow depth. Is this the sediment depth deposited by the debris flow, or the peak flow depth of the debris flow measured over the course of a simulation? This is a very important distinction that is not made throughout the manuscript.
Authors: The output we are showing in the paper is the mixture depth of the debris flow at the end of the simulation. It would be an equivalent to deposition. The model also outputs peak velocities if the user chooses to.
Reviewer: The caption of Table 2 needs to be improved to let readers know what is being shown. Is this using the performance against all four debris flows using the parameters calibrated for F1? What flow routing algorithm was used? What rheological model? This is all currently unclear.
Authors: We are going to improve Table 2 caption and add the suggested information.
Reviewer: Line 304 – The stopping criterion that is mentioned multiple times throughout the paper is still unclear to me. Please add more text to the Methods section that better outlines this.
Authors: We agree there is not enough information and will further explain the stopping criterion. The stopping criterion is based on the maximum flow-height difference along the simulation grid: it is a threshold value that determines when the simulation ends. The model stops the simulation when the maximum difference in flow height between calculation time steps is below this threshold. It is a proxy for low velocities, since a small volume exchange between cells indicates a slow-moving debris flow.
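A minimal sketch of such a stopping rule in Python (the flat-list grid representation and threshold value are hypothetical, for illustration only):

```python
def should_stop(h_prev, h_curr, threshold=1e-3):
    """Stop the simulation when the largest flow-height change
    between consecutive time steps, over the whole grid, falls
    below a threshold (a proxy for a slow-moving debris flow)."""
    max_diff = max(abs(a - b) for a, b in zip(h_prev, h_curr))
    return max_diff < threshold

# Tiny height change everywhere -> flow has effectively stopped
print(should_stop([1.0, 2.0], [1.0005, 2.0002]))  # True
# Large change in one cell -> keep iterating
print(should_stop([1.0, 2.0], [1.5, 2.0]))        # False
```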
Reviewer: I am concerned by the fact that the authors think that there is enough error in the delineated debris-flow scar or in the DEM (Line 349) to really impact the performance of the model against debris flows F2 and F3. I understand that this data can be difficult to obtain in data-sparse regions, but if the error is large enough in one or both of these areas, then I question the validity of using this case for model validation. The authors also note that a potential DEM artifact in the topography for F4 (Line 360). Again, I find this concerning.
Authors: The delineation of debris-flow scars was based on pre- and post-event imagery (Carzodo et al., 2021). Since the remaining vegetation (a very dense Atlantic Forest) could occlude sections of the scar, we also have some uncertainty associated with these data. We revised and refined the delineation of the debris flow scars selected for simulation to ensure maximum accuracy with our methodology. However, we acknowledge that the debris flows possibly had a longer runout than was observable. Still, we can assume that the magnitudes of the velocities and volumes were decreasing near the edge of the scar, since the vegetation downslope was unscathed.
Since the simulation for the debris flow ‘F4’ was considerably different from the expected flow path, we commented on possible uncertainty sources, but we are not invalidating the data. We will expand the discussion towards these uncertainties. Regarding the quality of the DTM, it was previously utilized in other studies, such as in sediment connectivity analysis. The DTM was validated based on field evidence on the following papers:
i) Abatti et al. (2022) https://onlinelibrary.wiley.com/doi/epdf/10.1002/esp.5507
ii) Zanandrea et al. (2020): https://www.sciencedirect.com/science/article/abs/pii/S0169555X19304532
iii) Zanandrea et al. (2021): https://www.sciencedirect.com/science/article/abs/pii/S0341816221002393
Reviewer: Line 362 – What is MDT? As far as I can tell, this is not defined anywhere in the manuscript.
Authors: It was supposed to be DTM; it is a typo. Thank you for pointing it out.
Citation: https://doi.org/10.5194/nhess-2023-119-AC2
Model code and software
SIRDEFLOW Leonardo Rodolfo Paul https://doi.org/10.5281/zenodo.8136370