My thanks to the authors for their efforts to take on board the recommendations made by me and the other reviewer. In particular, the description of the methods has improved considerably and some of the text has become easier to read.
Unfortunately, I still have a number of serious reservations, especially regarding the scientific significance of the paper, the attribution of participants to groups (weak, medium and strong) and the clarity of the writing.
1. Hypotheses. In order to contribute to the extant literature and have scientific significance, the study needed to have drawn on that literature for its hypotheses. I do not feel this has been done, or at least not done sufficiently.
• H1: I am not aware of any empirical evidence to suggest H1. Neither am I convinced by the common-sense justification provided by the authors. Hence, I do not see the value of expending so much effort (and reader time) on conducting a test that fails to support it.
• H2 is more interesting but poorly described and, like the other hypotheses, not grounded in the literature. Where is the research evidence to suggest that negative psychological impacts will reduce the motivation to implement precautions? I would have liked to see a presentation of the literature on this issue.
On p17 the authors themselves cast doubt on the scientific value of the way they tested this hypothesis, asserting that the sample size was insufficient for H2 and that established processes were not followed.
• H3 remains poorly described. The reader needs to know which psychological indicators are being tested rather than just being given examples. It is not clear what is meant by ‘are suitable’, and this cannot be tested with statistics. What does ‘distinctly connected’ mean?
There is no shame in producing a negative result, yet the authors seem to try to hide this. For example, ‘low explanatory power’ and ‘non-significant results’ suggest that the psychological indicators have no usefulness rather than ‘limited usefulness’.
The non-significance (rather than ‘low significance’) of the relationship between avoidance and fatalism amongst those experiencing strong flash floods seems to call into question the validity of combining avoidance and fatalism when looking at strong floods. Yes, this ‘may’ be due to the sample size, but where is the statistical evidence to support this suggestion?
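To support the sample-size explanation, the kind of statistical evidence I have in mind is a simple power calculation. A minimal sketch follows, using the Fisher z-approximation for a correlation test; the effect size r = 0.3 and subgroup size n = 40 are illustrative assumptions of mine, not figures taken from the manuscript:

```python
# Approximate power of a two-sided test of H0: rho = 0, via the
# Fisher z-transformation. The effect size and sample size used
# below are illustrative assumptions, not values from the paper.
from math import atanh, sqrt
from scipy.stats import norm

def correlation_power(r: float, n: int, alpha: float = 0.05) -> float:
    z_effect = atanh(r)           # Fisher z of the assumed true correlation
    se = 1 / sqrt(n - 3)          # standard error of Fisher z
    z_crit = norm.ppf(1 - alpha / 2)
    # P(reject H0 | true correlation is r), summing both rejection tails
    return norm.sf(z_crit - z_effect / se) + norm.cdf(-z_crit - z_effect / se)

print(f"power at r=0.3, n=40: {correlation_power(0.3, 40):.2f}")
```

If the achieved power for a plausible effect size is well below the conventional 0.8, the authors' sample-size explanation gains credibility; if it is not, that explanation should be dropped.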
The categorisation used within the study aim and the paper title remains problematic. Flash floods and river floods are not mutually exclusive categories. Rivers can themselves produce flash floods (e.g. when their catchment has a low absorptive capacity). I may be wrong, but I think that it would be more accurate to distinguish “flash” floods from “slower onset” floods and that the source of the flooding (pluvial or fluvial) is irrelevant to their question.
Large parts of the text remain hard to read and, in many instances, hard to understand even after repeated reading, e.g. lines 25-32 on p4 and lines 29-31 on p7. It is not uncommon for such opacity to indicate that the authors themselves do not fully understand what it is that they are trying to express, so the lack of clarity makes me particularly nervous. There are two main issues here: logical clarity (i.e. in the construction of sentences and paragraphs) and use of English.
Terminology. “Flood dynamics” needs to be defined before it is used.
• In an ideal world, all other factors would be kept constant in a test of the impact of flood type on psychological response. The confession that the two regions “differ almost completely” is therefore a peculiar one; it seems to invalidate the whole exercise.
• The two surveys are described as “very similar”. I would want some reassurance that none of the differences impact on the validity of the analysis.
• Was “information gathering” aggregated with other precautionary measures in the outcome variable used? Why? My reading of the overall text suggests that the intended/actual precautions variables refer to tangible measures and not to information seeking. This should perhaps be made clearer.
• The attribution of weak, strong and medium to the flash floods is key to the analysis, so it needs careful explanation. I find this a particularly vulnerable aspect of the research design and need reassurance and greater clarity; I suggest providing more detail here.
o The authors should make it explicit that weak, strong and medium relate to impacts on the areas and do not necessarily reflect levels of physical impact on particular homes/residents.
o Given that online literature and the press usually report only the most dramatic floods, I wonder how the authors were able to identify “weak” floods. It would not be safe to assume that any flood not mentioned in the press was “weak”, as it is unlikely that all strong/medium floods were actually reported on.
Given that the paper places such emphasis on the role of denial, it seems strange that the authors do not utilise ‘denial’ to explain lower feelings of threat amongst those experiencing strong floods, and that they treat respondents’ answers to survey questions as beliefs (p12).
On p14 (line 21), the authors make the cardinal error of reading causal direction into correlation.
To make section 3.3 more accessible to readers not familiar with Bayesian analysis, the authors might want to explain the meaning and significance of the term ‘posterior distribution’.
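A one-line definition along the following lines would suffice (this is the standard statement of Bayes' theorem, not something specific to the manuscript):

$$
p(\theta \mid \text{data}) \;=\; \frac{p(\text{data} \mid \theta)\, p(\theta)}{p(\text{data})} \;\propto\; p(\text{data} \mid \theta)\, p(\theta),
$$

i.e. the posterior distribution expresses what is believed about the parameters $\theta$ after the prior beliefs have been updated by the likelihood of the observed data.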
• “Individual threat perceptions differ from evidence based hazard estimations” p16. This seems obvious and is certainly not a new finding.
• The recommendation of ‘information campaigns’ seems naive given the sophistication of the psychology in this paper. Given the complex emotion regulation that characterises the response to flooding and flood risk (e.g. denial), it is hard to imagine that it will be enough to simply tell people that they might be flooded again. See my 2008 and 2018 papers in Health, Risk & Society and International Small Business Journal for some additional insights.
• I’m afraid I find little in the Conclusion section that adds to existing knowledge of this topic.