Articles | Volume 25, issue 10
https://doi.org/10.5194/nhess-25-4071-2025
© Author(s) 2025. This work is distributed under
the Creative Commons Attribution 4.0 License.

Unbalanced relationship between flood risk perception and flood preparedness from the perspective of response intention and socio-economic factors: a case study of Nanjing, China
Download
- Final revised paper (published on 22 Oct 2025)
- Supplement to the final revised paper
- Preprint (discussion started on 14 May 2024)
- Supplement to the preprint
Interactive discussion
Status: closed
Comment types: AC – author | RC – referee | CC – community | EC – editor | CEC – chief editor
- RC1: 'Comment on nhess-2024-23', Anonymous Referee #1, 05 Jun 2024
- RC2: 'Reply on RC1', Irfan Ahmad Rana, 28 Jul 2024
- AC2: 'Reply on RC2', Peng Wang, 28 Aug 2024
- AC1: 'Reply on RC1', Peng Wang, 28 Aug 2024
Peer review completion
AR: Author's response | RR: Referee report | ED: Editor decision | EF: Editorial file upload
ED: Reconsider after major revisions (further review by editor and referees) (23 Sep 2024) by Lindsay Beevers

AR by Peng Wang on behalf of the Authors (30 Oct 2024)
Author's response
Author's tracked changes
Manuscript
ED: Referee Nomination & Report Request started (13 Nov 2024) by Lindsay Beevers
RR by Anonymous Referee #2 (20 Nov 2024)

RR by Anonymous Referee #1 (23 Nov 2024)

ED: Reconsider after major revisions (further review by editor and referees) (21 Feb 2025) by Lindsay Beevers

AR by Peng Wang on behalf of the Authors (18 Apr 2025)
Author's response
Author's tracked changes
Manuscript
ED: Publish as is (26 Jun 2025) by Lindsay Beevers

AR by Peng Wang on behalf of the Authors (03 Jul 2025)
Manuscript
In the manuscript, Li and Wang have developed a survey to measure flood risk perceptions and flood preparedness. They have used statistical tests and regression analysis on the survey responses to determine the effect of different variables, e.g., gender and education level, on these dependent variables. These kinds of surveys and studies are highly relevant and useful for informing disaster risk education and risk mitigation policies.
I have some major concerns regarding the comprehensiveness of the authors' methodology descriptions, which makes it difficult to validate some of their conclusions. I have listed both my major and other concerns below.
1. The language in the paper could be improved for readability. Especially in the abstract and introduction, the use of past tense makes it harder to follow the definitions and literature review. To be clear, this comment on language is not a reflection on the quality of the scientific content of this paper, which was reviewed independently.
2. The methods used in the paper are only mentioned but not described. While it is fine to use standard methods like linear regression without explaining them, it would be helpful to this journal's audience to describe less common methods, such as stepwise regression, when they are used. Additionally, references for these methods are consistently missing. The additional comments below list some of the methods for which descriptions and references should be added.
3. Since the primary contribution of this manuscript is the analysis of survey responses, it would be helpful to understand how the 844 respondents were selected across the geographic region, and the criteria for selection. Please provide additional information, such as why 844 respondents were surveyed rather than more or fewer.
4. In addition to the above, please provide additional statistics to ascertain whether the survey respondents were reflective of the socio-economic distribution of the targeted geographic region.
5. The authors have mentioned that a detailed description of the survey questionnaire is available in the supplementary material. Given the high relevance of the survey development to the authors' conclusions, it would be helpful to include more detail on the survey's development, refinement, and questions in the main text.
6. For the purpose of drawing conclusions from the surveys, all 737 valid responses are considered ground truth and representative of the entire population of the targeted geographic region. To substantiate the conclusions, it would be beneficial to perform a sensitivity analysis on the results. One possible approach is providing confidence intervals on the results and regression coefficients. Another is bootstrapping, in which subsamples of responses are drawn repeatedly and the coefficients re-estimated each time. This would quantify the variability of the results and help determine whether the differences across the various factors lie within the error margins or not.
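To illustrate the bootstrap suggestion, a minimal sketch in Python is given below. The data are synthetic stand-ins for the survey responses, and the predictors and coefficient names are purely illustrative, not the authors' actual variables; the point is only the resample-and-refit pattern.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for the 737 valid responses: an intercept plus two
# illustrative predictors (e.g., a binary and an ordinal variable).
n = 737
X = np.column_stack([np.ones(n), rng.integers(0, 2, n), rng.integers(1, 5, n)])
y = X @ np.array([1.0, 0.3, 0.2]) + rng.normal(0, 1, n)

def ols_coefs(X, y):
    # Ordinary least squares fit.
    return np.linalg.lstsq(X, y, rcond=None)[0]

# Bootstrap: resample respondents with replacement, refit, collect coefficients.
boot = np.array([
    ols_coefs(X[idx], y[idx])
    for idx in (rng.integers(0, n, n) for _ in range(1000))
])

# 95 % percentile confidence interval for each coefficient.
lo, hi = np.percentile(boot, [2.5, 97.5], axis=0)
for name, l, h in zip(["intercept", "x1", "x2"], lo, hi):
    print(f"{name}: [{l:.3f}, {h:.3f}]")
```

If a group difference in a coefficient is smaller than the width of these intervals, it should not be interpreted as a real effect.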
7. Most of the text in Section 3 lists the numbers already present in the respective tables. The section could be made more succinct by only including the key observations from the tables.
8. Line 163 - Were responses also collected from online distribution of the survey and did those responses match the ones collected directly by interviewers?
9. Line 168 - It will be helpful to include whether any analysis was done to review interviewer bias in the responses.
10. Line 179 - Please include information about how the valid and invalid responses were determined.
11. Lines 184-187 - It would be helpful to provide more information about the implementation of the Mann-Whitney U and Kruskal-Wallis statistical tests, along with relevant references. The current brief description is not sufficient to understand why these tests were chosen, how they were implemented, or what their objectives were.
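For reference, both tests are available in SciPy; a minimal sketch on hypothetical Likert-scale scores (the group labels and sample sizes below are assumptions for the example, not the authors' data):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# Hypothetical 1-5 Likert risk-perception scores for two groups
# (e.g., male/female) and for three groups (e.g., education levels).
group_a = rng.integers(1, 6, 100)
group_b = rng.integers(1, 6, 120)

# Mann-Whitney U: rank-based comparison of two independent groups.
u_stat, u_p = stats.mannwhitneyu(group_a, group_b, alternative="two-sided")

edu1, edu2, edu3 = (rng.integers(1, 6, 80) for _ in range(3))
# Kruskal-Wallis H: rank-based comparison of three or more groups.
h_stat, h_p = stats.kruskal(edu1, edu2, edu3)

print(f"Mann-Whitney U p = {u_p:.3f}, Kruskal-Wallis p = {h_p:.3f}")
```

Stating which implementation was used (and its tie-handling and continuity-correction settings) would make the analysis reproducible.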
12. Line 191 - Briefly describe stepwise regression and provide relevant references.
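For illustration, one common flavour of stepwise regression is greedy forward selection; the sketch below uses synthetic data, hypothetical variable names, and an R²-gain stopping rule chosen for the example (published implementations typically use p-values or AIC), so none of it should be read as the authors' actual procedure:

```python
import numpy as np

def forward_stepwise(X, y, names, threshold=0.02):
    # Greedy forward selection: repeatedly add the predictor that most
    # improves R^2, stopping when the gain falls below `threshold`.
    selected, remaining, best_r2 = [], list(range(X.shape[1])), 0.0
    while remaining:
        gains = []
        for j in remaining:
            cols = selected + [j]
            Xc = np.column_stack([np.ones(len(y)), X[:, cols]])
            beta, *_ = np.linalg.lstsq(Xc, y, rcond=None)
            resid = y - Xc @ beta
            gains.append((1 - resid.var() / y.var(), j))
        r2, j = max(gains)
        if r2 - best_r2 < threshold:
            break
        selected.append(j)
        remaining.remove(j)
        best_r2 = r2
    return [names[j] for j in selected]

rng = np.random.default_rng(4)
X = rng.normal(size=(300, 4))
y = 2 * X[:, 0] + 0.5 * X[:, 2] + rng.normal(0, 1, 300)  # only two true effects
chosen = forward_stepwise(X, y, ["age", "gender", "education", "income"])
print(chosen)
```

A description at roughly this level of detail (selection direction, entry/exit criterion, stopping rule) is what the manuscript should provide.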
13. Line 192 - What is Model 5?
14. Line 198 - What are Cronbach’s α and KMO values? Please provide brief descriptions along with relevant references.
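For the authors' benefit, Cronbach's α (internal consistency of a multi-item scale) is simple to state explicitly; a minimal sketch on synthetic responses (the item structure below is an assumption for the example):

```python
import numpy as np

def cronbach_alpha(items):
    # items: (n_respondents, n_items) matrix of scale responses.
    # alpha = k/(k-1) * (1 - sum of item variances / variance of total score)
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_var = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_var / total_var)

rng = np.random.default_rng(2)
latent = rng.normal(0, 1, (200, 1))
# Five items sharing one latent factor -> high internal consistency.
items = latent + rng.normal(0, 0.5, (200, 5))
alpha = cronbach_alpha(items)
print(f"alpha = {alpha:.2f}")
```

A one-sentence definition like the comment above, plus a reference and the usual acceptability thresholds, would suffice in the text; the same applies to the KMO measure of sampling adequacy.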
15. Line 232 - What is mean rank?
16. Section 3.2 - Why was Mann-Whitney U test used for binary variables and the Kruskal-Wallis test used for multi-category variables? Can the rank values between the two tests be compared with each other?
17. Fig 3 - The colors in the correlation plot do not appear to match the color bar. For example, the diagonal should be solid red since correlation = 1.0, but it is white, indicating correlation = 0. Similarly, a value of 0.05 is shaded light red while a value of 0.76 is shaded white. As a result, Section 3.3 could not be reviewed for accuracy, and its conclusions could not be substantiated.
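The symptom is consistent with an unpinned color scale. If the figure was made with matplotlib (an assumption on my part), pinning the limits of a diverging colormap to [-1, 1] fixes it; a sketch on synthetic data:

```python
import numpy as np
import matplotlib
matplotlib.use("Agg")  # headless backend for illustration
import matplotlib.pyplot as plt

rng = np.random.default_rng(3)
data = rng.normal(size=(100, 4))
corr = np.corrcoef(data, rowvar=False)

fig, ax = plt.subplots()
# Pin the color scale to [-1, 1] so the diagonal (r = 1) renders at the
# extreme of the diverging colormap rather than at its white midpoint.
im = ax.imshow(corr, cmap="RdBu_r", vmin=-1, vmax=1)
fig.colorbar(im, ax=ax, label="Pearson r")
fig.savefig("corr.png")
```

Whatever tool was used, the corrected figure should show a solid-color diagonal matching r = 1 on the color bar.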
18. Section 3.4 - Briefly describe models 1, 2, and 3. From Table 12, it appears that each model used a different set of features.
19. Sections 3.4, 3.5 - The differences in regression coefficients between various groups (e.g., males and females) appear quite small. It would be helpful to see whether these differences are in fact indicative of reality or lie within the error bars (e.g., confidence intervals) implied by the number of survey responses.
20. Fig 4 - Why are certain coefficients missing from the figure, e.g., females and flood disaster education? Same for Fig 5.
21. Section 3.6 - Please describe how the influence path analysis was implemented, along with relevant references.
22. Section 3.6 - A new taxonomy is presented, e.g., M-1SD, without any explanation of its meaning. As a result, it was not clear how to interpret the figures and results.