Reply on RC2

1. Now that the author has realized that the spacing of the DSM will affect the results of terrain analysis, especially the primary terrain attributes (local slope gradient, roughness, curvature): relevant studies have existed in this field for decades. Therefore, why didn't the author use the root mean square slope (Hutchinson, 1996) to find the optimal resolution at the beginning? In this way, a lot of computational cost could be saved, and different land uses and topographies should be suited to different resolutions. Plus, a fine spatial resolution of DSMs is no longer an issue, as the author mentioned in the introduction.

One of the aims of our study is to test the use of high-resolution DSMs for the calculation of terrain roughness indices. For this purpose, we selected three spatial resolutions. We did not use the root mean square slope reported in , since the surface roughness indices that we analysed are directly influenced by the DEM resolution. In fact, a finer DEM resolution resulted in a better representation of surface features . Different studies indicated that an accurate evaluation of DEM resolution is necessary to correctly identify the features or processes of interest . Since the features of interest in our study have different sizes (trees, rocks, deadwood, disturbed forests, shrubs), we tested seven roughness algorithms, looking for the best combination of DEM resolution and moving window size to be used in natural hazard modelling to better differentiate the terrain classes that affect the simulation of hazard processes.
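To illustrate how such a window-based index depends on the combination of resolution and moving window size, the standard deviation of residual topography (one of the seven indices named above) can be sketched as follows. This is a minimal NumPy/SciPy sketch, not the implementation used in the study; the function name and the toy DSM are hypothetical.

```python
import numpy as np
from scipy import ndimage

def sd_residual_topography(dem, window_cells):
    """SD of residual topography: elevation minus the local mean surface,
    with the standard deviation evaluated over the same square moving window."""
    local_mean = ndimage.uniform_filter(dem, size=window_cells, mode="nearest")
    residual = dem - local_mean
    # Local standard deviation of the residual: sqrt(E[r^2] - E[r]^2).
    m1 = ndimage.uniform_filter(residual, size=window_cells, mode="nearest")
    m2 = ndimage.uniform_filter(residual**2, size=window_cells, mode="nearest")
    return np.sqrt(np.maximum(m2 - m1**2, 0.0))

# Toy 1 m DSM: flat ground with a single 5 m high, one-cell-wide barrier.
dem = np.zeros((50, 50))
dem[:, 25] = 5.0
# A 7 x 7 cell window at 1 m resolution corresponds to a 49 m^2 window area.
rough = sd_residual_topography(dem, window_cells=7)
```

Changing `window_cells` (or resampling `dem` to a coarser grid) changes how strongly a narrow feature such as the barrier stands out, which is the trade-off the study explores.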
In the revised version of the manuscript we incorporated the following paragraph (new parts are reported in bold).
Line 102 "number of neighbourhood cells. In this sense, most of the roughness indices reported in the literature consider the DEM as an isotropic surface. However, the concept of surface anisotropy is of fundamental importance for the investigation of geomorphological features and channelized or dispersed flows (Busse and Jelly, 2020; Insua-Arévalo et al., 2021; ). If the surface shows an anisotropic texture, the flow resistance is directly influenced by obstacles disposed along the flow direction. Since the investigated natural hazards show a predominant diffusion direction, identified as the combination of terrain slope and curvature, texture anisotropy has to be taken into account when simulating mass flows ." Here we continue with the reply to the comment raised by reviewer 1. We report the paragraph below.
"However, the investigated natural hazards have a predominant diffusion direction identified as the combination of terrain slope and curvature. Some studies implemented the surface roughness along a predefined direction (Michelini, 2016; Trevisani and Rocca, 2015). The direction along which roughness is computed is usually derived through a GIS algorithm (D8 or D-infinity) applied to the original or smoothed digital models. However, the direction derived through neighbourhood-cell analysis may not coincide with the direction of mass flow propagation. Such behaviour may be observed when the routed volumes are extreme, and therefore in some particular situations the propagation direction may be defined by the flow's inertia rather than by the topography (Guo et al., 2020). In other cases, the particular mountain topography may force mass flows to affect the opposite hillside of the valley through a run-up mechanism (Iverson et al., 2016). Furthermore, the flow direction of bank and channel-side features computed with GIS algorithms does not usually correspond to the mass flow direction. In this situation bank direction can be improved through a smoothing of the DTM in order to remove gullies and channels from the basal topography. This technique is easily applicable in the case of regular channels, but it becomes more complex when the channel morphology is irregular, since it could oversimplify the basal topography. For these reasons, in this study we propose a novel approach to calculate surface roughness along user-defined lines."

3. From figure 5, there seem to be two clusters of results, one group consisting of area ratio, SD of residual topography, terrain ruggedness index, and vector dispersion. This group has lost a lot of detail, especially in the lower-left corner of the image. I would first like to see explanations of the differences among the results of these seven algorithms.
In the lower-left corner we see avalanche barriers, which are man-made structures protecting against avalanche release. They are usually up to 5 m high and, seen from above (on the orthophoto), less than 1 m wide. We believe that this might be the reason for the wrong interpretation by some of the algorithms. We present these results in section 3.1 Roughness classification and algorithm evaluation, and we added the following lines to the first paragraph of section 4 Discussion (new phrases in bold):

3.1 Roughness classification and algorithm evaluation "Surface roughness calculated with the seven different algorithms and normalized using the same colour range (Fig. 5 and Fig. A1 in the Appendix, for the Braema and Franza study areas) revealed important differences in the ability to identify specific terrain and vegetation types. As visible for the overall best performing combination of resolution and moving window (1 m and 49 m²) in Fig. 5, all algorithms distinguished accurately between high vegetation (forest) and other vegetation types. Nevertheless, some of the algorithms (vector dispersion, SD of residual topography, terrain ruggedness index and area ratio) failed to detect the avalanche barriers correctly and falsely identified them as rather smooth. Also, small gullies were not clearly separated by some of the algorithms and were particularly poorly visible with the algorithms SD of profile curvature and SD of slope, whereas they were successfully identified with moderate roughness values by the other algorithms. Smooth surfaces were visualized with lower roughness values (darker blue in Fig. 5) by algorithms like vector ruggedness measure, SD of residual topography and vector dispersion [Fig. 5 (2, 4 and 7)]. Other algorithms [Fig. 5 (1, 3, 5 and 6)] assigned these smooth surfaces rather high roughness values (lighter blue to cyan blue in Fig. 5)."
4 Discussion  also found that area ratio showed higher values for the smooth slope of a scarp, highlighting a major disadvantage of this algorithm: smooth steep slopes can be classified as rough. The algorithms vector dispersion, SD of residual topography, terrain ruggedness index and area ratio could not detect the avalanche barriers in the Braema study site. This might be due to the small width (less than 1 m) of these objects in combination with the relatively large moving window area (49 m²). Such issues might play an important role in choosing the right algorithm for natural hazard mapping.
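The vector ruggedness measure named in the reply above can be sketched as follows: a minimal NumPy/SciPy version following the dispersion-of-normal-vectors idea of Sappington et al. (2007), not the study's own code, with hypothetical example surfaces.

```python
import numpy as np
from scipy import ndimage

def vector_ruggedness_measure(dem, cell_size, window_cells):
    """Vector ruggedness measure: dispersion of surface unit normal vectors
    within a square moving window. 0 = perfectly smooth, toward 1 = rugged."""
    dz_dy, dz_dx = np.gradient(dem, cell_size)
    slope = np.arctan(np.hypot(dz_dx, dz_dy))
    aspect = np.arctan2(dz_dy, dz_dx)
    # Decompose each cell's unit normal vector into x, y, z components.
    horiz = np.sin(slope)
    nx, ny, nz = horiz * np.sin(aspect), horiz * np.cos(aspect), np.cos(slope)
    n = window_cells ** 2
    # Sum the components within the window (uniform_filter gives the mean).
    sums = [ndimage.uniform_filter(c, size=window_cells, mode="nearest") * n
            for c in (nx, ny, nz)]
    resultant = np.sqrt(sum(s**2 for s in sums))
    return 1.0 - resultant / n

# Hypothetical surfaces on a 1 m grid: a flat plane vs. a random rough surface.
rng = np.random.default_rng(0)
flat = np.zeros((40, 40))
bumpy = rng.normal(0.0, 1.0, (40, 40))
vrm_flat = vector_ruggedness_measure(flat, 1.0, 7)
vrm_bumpy = vector_ruggedness_measure(bumpy, 1.0, 7)
```

On the flat plane all normals are parallel, so the resultant vector has full length and the measure is zero; dispersed normals shorten the resultant, which is why smooth steep slopes are not misclassified as rough by this index.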
4. I read section 4.3 Application several times, and it is really difficult to follow the author's logic. I still don't know how the author wants to apply the results. I can vaguely see that the author wants to apply them to the ecosystem, but how?
The study highlights the importance of surface roughness in the simulation of natural hazards and evaluates the best performing algorithms for land cover classification. The identified classes represent ground features influencing mass flows. The results of the study can be directly applied to improve the reliability of model outcomes. Furthermore, we emphasize the need to adequately represent land cover characterized by disturbed forests, as it is usually not correctly implemented.
We changed the title of section 4.4 to "Applications for natural hazard assessment" to state the focus of the possible applications. We further modified this section to make it more comprehensible. Below we report the section, with the parts we added to the revised version in bold.
"In our study we classified relevant land-cover types of mountain forests and treeline ecotones of the southern and central Alps. The classes represent land cover characterized by features that influence mass flow propagation in different ways. The derived roughness maps or classes could be used directly to improve the reliability of simulation models. Since we analysed two alpine areas, we can assume that our results are also relevant for similar ecosystems characterized by coniferous forests. However, comparable analyses and a verification of the classification would be necessary in order to further generalize our results. Similarly, this would also be required for the classification of other disturbed forest stands (e.g. after bark beetle outbreaks or wildfires), since different disturbances with different intensities create particular structures with expectedly diverse surface roughness (Franklin et al., 2002; Hansen et al., 2016; Waldron et al., 2013).

Moreover, the surface roughness classification and the selected roughness algorithm included the identification and analysis of a forest damaged by a windstorm (the Franza case study).
We selected the Franza study area in order to analyse a disturbed forest immediately after a windthrow. In such cases the protective function of the forest is altered, so practitioners need to assess, for natural hazard mapping, the protection capacity of the structures remaining on the ground. In the case of snow avalanches, analyses of field data as well as the very low number of avalanches observed after such disturbances indicate that lying logs contribute to increased terrain roughness and thus to the conservation of a considerable protective function against avalanches, at least for the first two decades after a disturbance event such as windthrow (Wohlgemuth et al., 2017). In the same way, early successional stages of post-disturbance development can also provide good protection in avalanche release zones. However, these structures are usually not classified as forest stands, since in most cases they do not match the minimum criteria defined by the authorities (i.e. density, mean height; Brändli and Speich, 2007; FAO, 2015; INFC, 2005), so they might not be included in the definition of potential avalanche release areas. Lying deadwood can also still provide a residual protective function against rockfall. Thanks to the higher impact probability compared to standing trees and the flexibility of the logs on the ground, disturbed forest areas can reduce rock velocity and absorb kinetic energy (Bourrier et al., 2012; Ringenbach et al., 2021), especially in the first phase, when decay processes have not yet reduced the wood strength (Amman, 2006). Therefore, in this study we included in the surface roughness analysis and classification those land cover types (disturbed forests, young forests and shrubs) that are usually not adequately evaluated for natural hazard modelling.
The analysis of surface roughness could therefore serve as a good proxy for assessing the temporal evolution of hazards in disturbed forests, but it has some limitations as well. By analysing surface roughness over time, we could also observe landscape transformations and changes in vegetation (natural or anthropogenic) that affect surface roughness and consequently natural hazard processes. In particular, by calculating surface roughness for different vegetation types, snow gliding could be easily modelled and predicted for different land-use scenarios. This could improve the identification of areas exposed to natural hazards and the further implementation of protective measures (Leitinger et al., 2008). In the case of an old disturbed forest, the roughness time series analysis might not distinguish between the roughness of the old lying logs, lower vegetation and tree regeneration. After years of decomposition, the lying logs become less supportive, decrease in height and may even be displaced (Bebi et al., 2015; Wohlgemuth et al., 2017). A comprehensive overview of the decay process over a longer period after a disturbance (more than 20 years) would be helpful to understand the effect of time and the remaining protection capacity after a disturbance such as windthrow. However, great variability across different environmental gradients may occur; therefore every case should be handled individually, especially if elements at risk exist. Thus, a combination of calculated surface roughness with field investigations may be necessary in areas (e.g. windthrown forests or large landslides) where an accurate evaluation of the ground features cannot be performed by a DEM survey only.
Surface roughness further influences the estimation of avalanche release areas and avalanche propagation. Even small-scale topographic roughness may have an influence on the runout distance of ground-releasing processes such as wet snow avalanches (Sovilla et al., 2012). This is also important for small avalanches with little release depth and a shallower snowpack (McClung, 2001), since very high snow depths may bury the surface roughness and therefore smoothen the surface (Veitinger et al., 2014). Using a DSM for terrain representation in models could improve the surface roughness estimation, as shown by the example of the vector ruggedness measure in our study. It had no pairs of overlapping distributions for any of the roughness categories, and it correctly assigned high roughness values to higher vegetation, avalanche barriers and other land cover categories, compared to the roughness calculated from a DTM, which generally underestimated the surface roughness (Brožová et al., 2020). The case study, applying numerical avalanche modelling on a DSM and a DTM, showed that surface roughness plays a decisive role for the avalanche runout distance and the flow path. However, in the case of high and dense forests, the surface roughness classification based on a DSM is limited. The surface roughness values calculated from the DSM reflect the tree crowns, which are classified as rough, but the crowns usually do not interact with an avalanche flow (except for powder snow avalanches). Therefore, within dense forests a DTM should be applied to calculate the surface roughness, and a DSM should only be applied to open areas, where roughness may still interact with the hazard process without being included in the forest classification. In this way, areas with increased roughness outside the defined forest may be detected and included in the hazard modelling.
In the case of avalanches, the RAMMS simulation tool (Christen et al., 2010) offers the possibility to add an area with increased friction parameters. A smart combination of DSM and DTM data may allow for a better estimation of the surface roughness faced by the gravitational mass movement."

5. The author mentioned that a relatively low-resolution DSM (1 m) can achieve better surface roughness (although I don't know how the author judged this). If that is so, has the author tested other lower-resolution data? For example, at a global scale of 15 m, 30 m, etc.
We did not test low-resolution data at the global scale, since coarse resolutions do not capture small-scale surface roughness (Vanderhoof and Burt, 2018). Such roughness might be extremely important for frequent avalanches, as it delineates the release areas (Veitinger et al., 2016).
In section 1 Introduction we also mention the importance of high-resolution data for distinguishing more detailed terrain: "Higher DEM resolutions (< 1 m) allow us to see more detailed terrain, but they are usually only available for smaller areas." The resolution of 1 m performed better overall than the other studied resolutions (0.1 and 0.5 m) for all studied algorithms. For our analysis it was important that the surface roughness algorithm could distinguish well between the selected roughness categories, and we used the paired Wilcoxon test to detect overlapping distributions of category pairs. We also determined the point of minimum overlap, which we proposed as a threshold for distinguishing between these roughness categories using the vector ruggedness measure algorithm.
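The two steps named above (a pairwise Wilcoxon-family test and the point of minimum overlap) can be sketched as follows. This is a minimal SciPy example on synthetic roughness values for two hypothetical categories, not data from the study; for the two independent synthetic samples we use the rank-sum variant (Mann-Whitney U) rather than the paired test reported in the reply, and the function and variable names are illustrative only.

```python
import numpy as np
from scipy import stats

# Synthetic roughness samples for two hypothetical categories
# (e.g. smooth meadow vs. forest); values are invented for illustration.
rng = np.random.default_rng(1)
meadow = rng.normal(0.02, 0.01, 500).clip(min=0.0)
forest = rng.normal(0.30, 0.08, 500).clip(min=0.0)

# Rank-based test: do the two roughness distributions differ?
stat, p_value = stats.mannwhitneyu(meadow, forest)

def minimum_overlap_threshold(a, b):
    """Scan candidate thresholds and return the one misclassifying the fewest
    samples; a sketch of the 'point of minimum overlap' idea."""
    candidates = np.unique(np.concatenate([a, b]))
    # Rule: values below the threshold belong to category a, others to b.
    errors = [np.sum(a >= t) + np.sum(b < t) for t in candidates]
    return candidates[int(np.argmin(errors))]

threshold = minimum_overlap_threshold(meadow, forest)
```

A significant test result together with a low-overlap threshold is what makes a roughness index usable for separating two terrain classes in practice.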