Slope Unit Maker (SUMak): an efficient and parameter-free algorithm for delineating slope units to improve landslide susceptibility modeling
Jacob B. Woodard
Benjamin B. Mirus
Nathan J. Wood
Kate E. Allstadt
Benjamin A. Leshchinsky
Matthew M. Crawford
Abstract. Slope units are terrain partitions bounded by drainage and divide lines. They provide several advantages over gridded units in landslide-susceptibility modeling, such as better capturing terrain geometry, improved incorporation of geospatial landslide-occurrence data in different formats (e.g., point and polygon), and better accommodating the varying data accuracy and precision in landslide inventories. However, the use of slope units in regional (> 100 km²) landslide susceptibility studies remains limited due, in part, to prohibitive computational costs and/or poor reproducibility with current delineation methods. We introduce a computationally efficient algorithm for the parameter-free delineation of slope units. The algorithm uses geomorphic scaling laws to define the appropriate scaling of the slope units representative of hillslope processes, avoiding the costly parameter optimization procedures of other slope unit delineation methods. We then demonstrate how slope units enable more robust regional-scale landslide susceptibility maps.
Status: closed
RC1: 'Comment on nhess-2023-70', Anonymous Referee #1, 02 Jun 2023
Review comment on the manuscript “Slope Unit Maker (SUMak): An efficient and parameter-free algorithm for delineating slope units to improve landslide susceptibility modeling” submitted to NHESS by Woodard et al.
GENERAL COMMENTS
Thank you for the invitation to review this manuscript. I have read it with great interest, and I appreciate the effort to develop a parameter-free tool for slope unit (SU) delineation.
In their manuscript, the authors present a newly developed tool for the automatic and parameter-free delineation of SU for landslide susceptibility mapping. In a first step, they compare SU created by their tool to SU created with the commonly used r.slopeunits algorithm by Alvioli et al., and in a second step, they compare the performance of landslide susceptibility models trained using different pixel-based and their SU-based landslide discretization methods for three case study sites to demonstrate the superiority of the SU-based approach over pixel-based methods.
I strongly believe in the advantages of SU as mapping units. I have used the r.slopeunits tool myself and found that the parametrization can be tricky and is not always transferable to other study areas. Thus, I believe that a parameter-free tool is an important step towards more objective and generalizable approaches in landslide susceptibility modelling.
When it comes to the comparison of SU vs. pixel-based approaches for landslide susceptibility modelling, it should be noted that there are already numerous publications on this topic.
Moreover, I am not totally convinced by the discussion of the SU result presented in Fig. 1 and by the attempt to demonstrate the superiority of the SU-based approach. This is partly because some information is lacking that would enable readers to interpret the results clearly. The discussion also appears a bit brief and shallow, considering the authors compared so many different approaches in three study areas. I would expect a deeper and more critical discussion that focuses not only on performance metrics but also on model set-up and spatial performance for all case studies, including the Puerto Rico one. Please find some specific comments and questions below.
Also, some maps are, in my opinion, not ideally composed.
Apart from that, the English language is flawless, the paper is well structured, and the references are complete. Multiple references in the text should be sorted alphabetically though.
I think that after the additional information on the methodology has been provided and the discussion improved, the paper could be published.
SPECIFIC COMMENTS
Abstract
I would suggest mentioning in the abstract (and also in the main text in Section 2.1) the software the proposed algorithm runs on. This is relevant information for readers.
1 Introduction
Lines 120-127: Could you mention here which methods were used for landslide susceptibility modelling?
2 Methods
Lines 130-143: Since the new method is presented as “easy-to-use”, I would expect a little more information and instructions on where and how to run it for readers who are not so familiar with GRASS or R for geospatial analyses, or at least a reference to the repository where more detailed instructions can be found.
Lines 144-147: Which parameters were used for the r.slopeunits algorithm?
Lines 163-169: What types of landslides were included in the inventories?
Lines 167-168 and lines 179-180: This is a bit unclear to me. What I understand is that the landslide inventories were mixed, with some landslides represented as points and others as polygons, and that the points were mapped at the centroids of the landslides. How many points and polygons, respectively, did each of the landslide inventories contain? How did you deal with landslides that were originally mapped as (centroid?) points in the sampling strategies that put points at the scarp or randomly within the landslide body?
Lines 200-201 and lines 202-209: It would be very helpful for the interpretation of the modelling results if you could provide some statistics. How many samples did each dataset contain? How many SU were delineated in each study area? What was the original positive to negative ratio, especially for the SU?
Lines 210-229: Which software was used for the susceptibility modelling? Did you conduct any data preparation, such as scaling? Why didn’t you use lithology as an input parameter?
Section 2.2: How were the final landslide susceptibility maps generated? Were the trained models applied to all pixels in the study area in the pixel-based approaches? And for the SU-based approach, did you apply the trained model on a pixel-basis or SU-basis?
3 Results
Fig. 1: The different scales of the two excerpts are confusing. What are the colors in map c? If they are SU, they are unrecognizable. A plain hillshade or DEM could work better.
Lines 259-260 and Fig. 1: To me the SUMak SU look much more heterogeneous than the r.slopeunits ones. Some SU are larger, and then there are some areas containing many small ones. Could you explain this in more detail? Is the result really so similar to the r.slopeunits one? Here it would also help to know which parameters were used for the latter; see my previous comment.
Fig. 2 a, b and Fig. 3 a: at these scales it is impossible to recognize the SU. I would suggest enlarging the maps or omitting them. Then again, to be able to interpret the performance of the landslide susceptibility maps, it would be helpful to see maps with the distribution of positive and negative SU.
Citation: https://doi.org/10.5194/nhess-2023-70-RC1
AC1: 'Reply on RC1', Jacob Woodard, 07 Aug 2023
We thank the reviewer for their constructive feedback on this manuscript. In the attached document we have responded to each comment made by the reviewer. The reviewer’s comments are in bold, and our responses are in roman text. Changes we have made to the original text are in italics.
RC2: 'Comment on nhess-2023-70', Anonymous Referee #2, 28 Jun 2023
The manuscript nhess-2023-70 proposed for publication in NHESS describes a new algorithm for slope unit delineation. Slope units are a well-known terrain subdivision type in the landslide research community; the topic of the manuscript is well within the aims & scope of the Journal. The authors make a good case about the use of slope units in landslide susceptibility mapping. They describe the advantages of using slope units in conjunction with statistical methods, discussing the use of heterogeneous data, landslide inventories of varying quality, data inaccuracies, and drawbacks of using grid cells.
The paper is intentionally split into a rather technical part, about the outcomes of the software introduced here, and an application part, about the use of slope units for landslide susceptibility mapping. I have two main general comments about these parts, and a few specific comments that I will list afterwards.
The main issue in the technical part, in my opinion, is that it focuses largely on comparing the outcome and, above all, the speed of the new software with the existing r.slopeunits software by Alvioli et al. I believe the way this part is presented is a bit misleading, because it makes assumptions and comparisons that may not be entirely justified. My understanding is that the main difference between the two pieces of work is that the previous one requires parametric inputs, while the one presented here gives only one possible result, with no additional parameters. (The new software is not described in detail, so it is difficult to be more specific about the input requirements.) Because of this difference, the authors stress on several occasions that the existing software has much larger computational demands and "prohibitive" processing times.

In an explicit comparison of the outputs of the two software packages in Sicily (a part of Italy for which slope unit maps were published by the r.slopeunits team), the authors of the new software estimated the processing and optimization time from the total running time quoted in the original paper for the whole of Italy, scaling it down proportionally to the size of the area. I felt that was an unfair comparison and, being a user of r.slopeunits myself, it was easy to run the software a couple of times with different values of the input parameters (on the same 25 m EU-DEM). The two runs required 100 minutes and 140 minutes with typical values of the parameters I usually input, using a maximum of about 2 GB of RAM on a single computing core (a rather outdated CPU, to tell the truth). To compare with the computing time quoted by the authors of the new software (7 hours), who used 16 computing cores: running about 48 instances of r.slopeunits would require three sequential batches, i.e., about 120x3 = 360 minutes. Thus in six hours one could have done about 48 runs and picked the "best" one, within the criteria developed by the r.slopeunits team (see the sketch below). I would say that the difference is mostly because the processing time quoted in the previous paper reflects a rather complicated optimization algorithm, involving a huge number of runs and a peculiar arrangement of optimization spatial domains.

Then I would go on and say that this is not a fair comparison either, because the outputs of the two pieces of software seem very different, judging from Figure 1 in the manuscript: one can clearly see that the slope units obtained with the new software differ from the existing ones, and to be honest I can hardly say that they provide comparable accuracy in segmenting the sub-areas. One can see areas with very similar morphological settings that are split into tiny details by SUMak, where the previous delineation looks more "reasonable"; there are probably areas with the opposite situation, even if they are more difficult to spot. So: different output, different computing times, and rather different flexibility. The possibility of obtaining different outputs from different inputs looks like an advantage, in my opinion, because it allows tuning the map to one's needs.
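To make the arithmetic transparent, here is the back-of-envelope estimate as a minimal Python sketch; the inputs are simply my two measurements above, so this is an illustration, not a benchmark:

```python
# Back-of-envelope check of the r.slopeunits timing estimate above
# (illustrative only; inputs are the two single-core run times I measured).
run_times_min = [100, 140]                         # minutes per run, one core
avg_min = sum(run_times_min) / len(run_times_min)  # ~120 min per run

n_runs, n_cores = 48, 16                           # parameter sets vs. cores
batches = -(-n_runs // n_cores)                    # ceiling division -> 3 batches
total_min = batches * avg_min                      # 3 x 120 = 360 min

print(f"{total_min:.0f} min = {total_min / 60:.1f} h")  # 360 min = 6.0 h
```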
In conclusion, and in essence, my suggestion is either to substantially improve the comparison part (I will not suggest trying to compare with other methods, even if I believe other methods exist that were defined as parameter-free by their respective authors) or to reduce its relative importance in the manuscript, in favour of the practical application part.
The authors proposed two applications of the slope unit maps produced with the new software, in two different areas. The two applications are substantially different: in one case (actually two, in Oregon, USA), the authors use landslide data collected over decades, and in the other case (the island of Puerto Rico) the landslides were all caused by a hurricane, thus a specific event. I believe that many would question the use of an event-based landslide inventory to map landslide susceptibility without mentioning the necessary caveats. In fact, if we agree that landslide susceptibility is the spatial component of landslide hazard, it is difficult to fit an event-based map within this definition. While a "generic", or "historical", landslide inventory actually tells something about the different likelihood of landslides occurring at different locations in a study area, an event-based landslide inventory clearly does a very different job. This is very clear from the maps obtained in the two areas with statistical methods; but it is in contradiction with the statement "statistical models analyze the spatial distribution of known landslides in relation to local terrain condition", because no terrain condition alone can explain the landslide distribution in Figure 3 and the susceptibility map in Figure 5. The authors addressed all of this in one line (228), just mentioning that they included soil moisture data as an additional predictor. This singles out the map obtained from the event data as something different from a susceptibility map. In a few papers, for example, this was handled using shake maps to account for the trigger of co-seismic landslides; there, the ground-shaking parameters are interpreted somewhat as a dynamical input. Examples are Nowicki et al. (10.1016/j.enggeo.2014.02.002) and Tanyas et al. (10.1016/j.geomorph.2018.10.022). In conclusion, I believe this aspect has been overlooked by the authors of the proposed manuscript, and it should be discussed in some detail, as the different interpretation and purpose of the results is not obvious. Another, related point is the repeated reference to the absence of a time component in the input data; how would it change the outcome? That would require a totally different framework, in my opinion.
Minor or not-so-minor comments follow.
As anticipated, the abstract is very unbalanced towards the computational speed of the codes rather than the actual methods and conclusions of the proposed work. As similar comments apply in different parts of the text, I will not point to all of them. I would only add that one fair observation is that, regardless of the few or many hours needed to prepare a slope unit map, it is a one-time effort, and processing time may be less important than other aspects.
In Section 2.1 the authors refer to the "constant drop law"; I feel it would be nice to have a bit more on this law and its meaning. In addition to a quantitative description, it would be nice to explain why the law is relevant for slope unit delineation. Do hillslope processes uniquely determine slope unit boundaries, or are they more relevant for identifying landslides? Are slope units produced by this criterion suited for any kind of landslide, or for a subset of them? Did the authors check that the law actually holds in the specific areas investigated here and, more importantly, with the representation of the terrain provided by the digital elevation models adopted here? Moreover, another relevant point: is this criterion applicable to areas of any size? This is probably the motivation behind the different optimization procedure adopted by Alvioli et al. 2020 (and further refined in Alvioli et al. 2021 - 10.1080/17445647.2022.2052768).
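For reference, the law as commonly stated (Broscoe, 1959) is that the mean elevation drop along Strahler streams is approximately independent of stream order:

```latex
% Constant stream drop law (Broscoe, 1959): the mean drop of streams of
% Strahler order \omega is roughly the same for all orders,
\overline{\Delta H}_{\omega} \approx \overline{\Delta H}_{\omega'}
\quad \text{for all orders } \omega, \omega' ,
% where \Delta H is the elevation difference between the upstream and
% downstream ends of a stream segment of the given order.
```

In channel-network extraction this law is typically used to select the smallest support-area threshold for which first-order stream drops are statistically indistinguishable from those of higher orders (e.g., Tarboton et al., 1991); I presume SUMak relies on a similar criterion, which is why spelling it out, and verifying it on the DEMs used here, matters.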
About the conversion of landslide polygons to point-like features: it is not very clear to me why this is necessary or, actually, what the rationale of the different methods is, especially converting one polygon into multiple points. I can understand using the highest-elevation point as an indicator of the landslide initiation point, but what is the difference between using equally spaced multiple points and using all of the grid cells overlapping a landslide? Maybe I am missing something here.
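To make my question concrete, here is a hypothetical sketch of the three reductions as I understand them (my reconstruction, not the authors' code; a DEM file readable by rasterio and shapely polygon geometries are assumed):

```python
# Hypothetical reconstruction (not the authors' code) of three ways to reduce
# a landslide polygon to point samples.
import numpy as np
import rasterio
from rasterio.features import geometry_mask
from rasterio.transform import xy
from shapely.geometry import Point, mapping


def polygon_to_points(poly, dem_path, spacing=None):
    """Return (centroid, highest-elevation point, equally spaced grid points)."""
    centroid = poly.centroid  # strategy 1: the polygon centroid

    with rasterio.open(dem_path) as src:
        dem = src.read(1)
        # Boolean mask of DEM cells inside the polygon (geometry_mask marks
        # cells *outside* the shapes as True, hence the negation).
        inside = ~geometry_mask([mapping(poly)], out_shape=dem.shape,
                                transform=src.transform)
        # Strategy 2: highest cell inside the polygon, a common proxy for the
        # initiation (scarp) area.
        r, c = np.unravel_index(np.argmax(np.where(inside, dem, -np.inf)),
                                dem.shape)
        scarp = Point(*xy(src.transform, r, c))

    # Strategy 3: equally spaced points on a regular grid clipped to the polygon.
    grid_points = []
    if spacing is not None:
        minx, miny, maxx, maxy = poly.bounds
        for gx in np.arange(minx, maxx, spacing):
            for gy in np.arange(miny, maxy, spacing):
                p = Point(gx, gy)
                if poly.contains(p):
                    grid_points.append(p)
    return centroid, scarp, grid_points
```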
When introducing the XGBoost method: what is the meaning of the list of parameters (max_depth), and, most importantly, how does the optimization work, in short?
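For example, a typical cross-validated set-up with standard xgboost/scikit-learn tooling would look like the following minimal sketch; max_depth is the parameter named in the manuscript, while the remaining parameter ranges and the randomized search are my own illustrative assumptions, not necessarily the authors' procedure:

```python
# Minimal sketch of randomized, cross-validated XGBoost parameter tuning.
from scipy.stats import randint, uniform
from sklearn.model_selection import RandomizedSearchCV
from xgboost import XGBClassifier

param_space = {
    "max_depth":     randint(2, 10),      # tree depth bounds feature interactions
    "n_estimators":  randint(50, 500),    # number of boosted trees
    "learning_rate": uniform(0.01, 0.29), # shrinkage per boosting round, 0.01-0.30
    "subsample":     uniform(0.5, 0.5),   # row subsampling per tree, 0.5-1.0
}

search = RandomizedSearchCV(
    XGBClassifier(eval_metric="logloss"),
    param_distributions=param_space,
    n_iter=50, cv=5, scoring="roc_auc", n_jobs=-1,
)
# search.fit(X, y)  # X: predictors per mapping unit; y: landslide presence/absence
```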
I do not understand the sentence "the Brier score provides measure of the scale of the model fit and not just its ordering"; can the authors explain, in short?
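My best guess at the intended meaning is the standard definition: for N mapping units with predicted probabilities p_i and observed binary outcomes o_i, the Brier score is

```latex
\mathrm{BS} = \frac{1}{N} \sum_{i=1}^{N} \left( p_i - o_i \right)^{2},
\qquad o_i \in \{0, 1\},
```

so, unlike rank-based metrics such as the AUC, which are unchanged by any monotone rescaling of the predictions, the Brier score is sensitive to the calibration (the "scale") of the predicted probabilities. If that is what is meant, it should be stated explicitly.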
As already mentioned, the method used to scale down the computing time in lines 265-267 does not seem reasonable at all, for either a quantitative or a qualitative comparison.
I believe the overall view of the slope units in Figure 1(c) is rather poor, most probably due to the attempt to show the slope unit vector layer at that zoom scale. On similar grounds, it is virtually impossible to see anything sensible in Figure 2(a)-(b). The slope units would be much more visible in Figure 3(b) if they weren't colorized like the higher elevation values.
The discussion about the percentage of grid cells with large susceptibility values, which results in their being visually under-represented with respect to the number and size of landslides, is interesting. The authors ascribe that to the poor performance of methods based on grid cells and the better suitability of the slope unit approach. Could it be that the statistical methods themselves reveal their limits? The overall picture observed is seemingly typical of over-fitting, for methods with poor generalization performance.
Despite the interesting premises set by the introduction and discussion sections, the conclusions drawn by the authors do not seem to meet the expectations (at least, my expectations). I mean, the difference between slope-unit-based and pixel-based models has been investigated by several authors, with similar conclusions. Maybe a bit more could have been done in highlighting the role of the optimization algorithm, for the slope unit delineation part, and the implications of using a landslide inventory corresponding to an individual event instead of a "generic" inventory, for the susceptibility part.
Citation: https://doi.org/10.5194/nhess-2023-70-RC2
AC2: 'Reply on RC2', Jacob Woodard, 07 Aug 2023
We thank the reviewer for their constructive feedback on this manuscript. In the attached document we have responded to each comment made by the reviewer. The reviewer’s comments are in bold, and our responses are in roman text. Changes we have made to the original text are in italics.