This work is distributed under the Creative Commons Attribution 4.0 License.
Advancing nearshore and onshore tsunami hazard approximation with machine learning surrogates
Abstract. Probabilistic tsunami hazard and risk assessment (PTHA and PTRA) are vital methodologies for quantifying tsunami risk and prompting measures to mitigate impacts. At large regional scales, their use and scope are currently limited by the computational cost of the numerically intensive simulations they rely on, which may be feasible only with advanced computational resources such as high-performance computing (HPC) and may still require reductions in resolution, in the number of scenarios modelled, or the use of simpler approximation schemes. To conduct PTHA and PTRA for large stretches of coastline, we therefore need concepts and algorithms both for reducing the number of events simulated and for approximating the required simulation results more efficiently. This case study, for a coastal region of Tohoku, Japan, uses a limited number of tsunami simulations of submarine earthquakes along the subduction interface to build a wave propagation and inundation database and fits these simulation results with a machine learning-based variational encoder-decoder (VED) model. The VED model is used as a surrogate to predict the tsunami waveform at the coast and the maximum inundation depths onshore at the different test sites. The performance of the surrogate models was assessed through a five-fold cross-validation across the simulated events. To understand its real-world performance and test its generalisability, we further benchmarked the model against five very different tsunami source models from the literature for historic events, to identify its current deficiencies.
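The cross-validation protocol described in the abstract can be sketched as follows. This is a minimal illustration, not the authors' code: the feature matrix, its dimensionality, and the random seeds are placeholder assumptions; only the event count (559) and the five-fold split come from the text.

```python
import numpy as np
from sklearn.model_selection import KFold

# Hypothetical stand-in for the 559 simulated earthquake scenarios:
# each row is one event's feature vector (e.g. source parameters).
rng = np.random.default_rng(0)
events = rng.normal(size=(559, 8))

kfold = KFold(n_splits=5, shuffle=True, random_state=42)
fold_sizes = []
for train_idx, test_idx in kfold.split(events):
    # In the study, a surrogate would be trained on train_idx
    # and evaluated on the held-out test_idx here.
    fold_sizes.append(len(test_idx))

# Every event is held out exactly once across the five folds.
assert sum(fold_sizes) == len(events)
```

Splitting at the level of whole events (rather than individual time steps or output points) is what makes the held-out score a fair estimate of performance on unseen scenarios.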
Status: final response (author comments only)
RC1: 'Comment on nhess-2024-72', Anonymous Referee #1, 31 May 2024
The paper presents an advance in tsunami hazard approximation, offering a machine learning (ML)-based solution to the computational challenges of traditional methods. However, for publication, major revisions are necessary to address the following concerns.
Details on Test Sets
The paper does not provide sufficient detail on the test sets used for model validation. The specific characteristics and parameters of the test sets should be clearly stated so that the model's generalizability can be assessed. Additionally, the rationale behind selecting these particular test sets is only briefly mentioned; a more detailed justification is necessary. Include a detailed description of the test sets and their characteristics, explain how they represent the diversity of potential tsunami scenarios, and justify their choice to enhance the study's credibility.
Model Training and Weights
The paper does not discuss whether different weights were assigned to various types of events during training. Assigning different weights could help ensure the model does not overfit to more frequent, less severe events, thus improving its predictive performance for rare, high-impact events. Elaborate on the training process, including whether different weights were assigned to events and how this impacted model performance. If not used, consider implementing and discussing the potential benefits of such weighting schemes.
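The weighting scheme the referee proposes could be implemented in many ways; one simple sketch is inverse-frequency weighting by magnitude bin, shown below. The magnitudes, bin edges, and weighting rule are illustrative assumptions, not the authors' method.

```python
import numpy as np

# Hypothetical magnitudes for a handful of training events.
magnitudes = np.array([7.0, 7.2, 7.1, 8.0, 8.9, 7.3, 9.1, 7.0, 8.5, 7.4])

# Bin events by magnitude and weight each event inversely to its
# bin's frequency, so rare high-magnitude events are not swamped
# by the many moderate ones during training.
bins = np.digitize(magnitudes, [7.5, 8.5])   # 3 bins: <7.5, 7.5-8.5, >=8.5
counts = np.bincount(bins, minlength=3)
weights = 1.0 / counts[bins]
weights /= weights.sum()                     # normalise to sum to 1

# Each magnitude bin now contributes equally to the loss in expectation;
# the weights would be passed as per-sample weights to the training loss.
```

Most training APIs accept such per-sample weights directly (e.g. a `sample_weight` argument or a weighted loss term), so the change is cheap to test against the unweighted baseline.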
Information Leakage
The paper lacks details on measures taken to prevent information leakage, which can lead to overly optimistic performance estimates if the test data inadvertently influences the training process. Clearly outline the steps taken to ensure strict separation between training and test data. Discuss any data augmentation techniques and how they are managed to avoid information leakage. This transparency will strengthen the reliability of the reported results. What do you think about the possibility of information leakage for the 2011 Tohoku test case?
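The separation the referee asks for is usually enforced by fitting every data-dependent preprocessing step on the training split only. A minimal sketch, with entirely synthetic data (the feature matrix and split ratio are assumptions for illustration):

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(1)
X = rng.normal(loc=5.0, scale=2.0, size=(200, 4))  # hypothetical features

X_train, X_test = train_test_split(X, test_size=0.25, random_state=0)

# Correct: normalisation statistics are estimated on the training
# split only, then applied unchanged to the held-out split.
scaler = StandardScaler().fit(X_train)
X_train_s = scaler.transform(X_train)
X_test_s = scaler.transform(X_test)

# Incorrect (leaks test-set statistics into preprocessing):
# scaler = StandardScaler().fit(X)
```

The same rule applies to any augmentation or dimensionality reduction: fit on the training folds, apply to the test fold, never the reverse.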
Conclusions
The results shown in Figure 15 suggest that the quality of the training sets is more important than their quantity. For test cases outside the design of experiments, there are clear discrepancies between prediction and observation, which is an intrinsic limitation of ML algorithms. Include a discussion of the design of experiments, including its limitations. This will provide a more comprehensive understanding of the model's strengths and of the areas needing improvement.
Minor comments:
L15 Tsunami -> Tsunamis
L16 USD 280 billion damage -> USD 280 billion in damage
L29 a past historical event -> historical events
L91 machine learning-based -> ML-based; You may replace machine learning with ML in other places of the paper.
L193 Inconsistency in labeling the test events. Make sure the labels are consistent including Test A&E in the 2011 Tohoku case, Figure 5 & 15.
L458 remove “it contains”
Citation: https://doi.org/10.5194/nhess-2024-72-RC1
- AC1: 'Reply on RC1', Naveen Ragu Ramalingam, 11 Jun 2024
RC2: 'Comment on nhess-2024-72', Anonymous Referee #2, 25 Jul 2024
The study uses a surrogate approach based on a variational encoder-decoder (VED) to predict the tsunami time series at different depths and maximum inundation depths at three coastal sites in Japan. The surrogate accuracy is validated against historical rupture scenarios. I add some comments below that I believe could strengthen the work presented.
Comments:
The design of experiments is not very clear in terms of the number of scenarios and input variables. The authors mention 559 events split into 383 and 176 depending on the nature of the rupture. More information is needed on how these numbers were selected, as well as on the parameter ranges that led to the variation in magnitudes (length, width, displacement). Furthermore, clarification is needed on whether any input variables besides the moment magnitude (e.g. the location of the event) are varied in the surrogate development.
I would suggest adding an outline of the times required 1) to build the two ML surrogates, 2) to make predictions, and 3) to run the deterministic model. Possibly presented as a matrix, this should showcase the benefits of using a surrogate approach.
In line 310 and elsewhere in the manuscript, please replace observations/observed with model/modelled or simulations/simulated, as it can be confused with physical observations of the event.
In Table 5 there seems to be a lot of variance regardless of the fold number. In some cases increasing the fold reduces the MSE, but in other cases it increases it. Is this variance random, or driven by certain conditions?
The legends in the figures should be more descriptive, especially the red and blue symbols, the lines, and the black dotted line in Figures 10 and 11. In these figures I would assume that these are simulated outputs rather than observed? The same applies to the dotted lines, uncertainty bounds, etc. in Figures 12 and 13.
In figure 12, the predictions of events 14, 139, 102, 87 and 96 match nearly identically to the simulations with very small uncertainty bounds. Are those events in close proximity to other events in the training dataset?
In figure 14, the misfit between predictions and simulations and +-2 standard deviations and simulations (columns 3,4 and 5) seem to be very close in terms of values, possibly because the standard deviations are small? Can the authors provide an example with values and how including the standard deviation reduces the misfit between prediction and simulation?
Citation: https://doi.org/10.5194/nhess-2024-72-RC2
- AC2: 'Reply on RC2', Naveen Ragu Ramalingam, 29 Aug 2024
Viewed
HTML | PDF | XML | Total | Supplement | BibTeX | EndNote
---|---|---|---|---|---|---
447 | 145 | 34 | 626 | 56 | 23 | 55