From regional to local SPTHA: efficient computation of probabilistic inundation maps addressing near-field sources

Nat. Hazards Earth Syst. Sci. Discuss., https://doi.org/10.5194/nhess-2018-202. Manuscript under review for journal Nat. Hazards Earth Syst. Sci. Discussion started: 18 July 2018. © Author(s) 2018. CC BY 4.0 License.



Introduction
In the last fifteen years, a number of large earthquakes, often accompanied by destructive tsunamis, occurred worldwide.
In several cases, the overall size of the earthquake and/or of the tsunami was unanticipated, and some surprising features were observed in terms of event scaling (e.g. source aspect ratio, tsunami height versus earthquake magnitude) or associated damage (Lay, 2015; Lorito et al., 2016); a striking example is the 2011 Tohoku earthquake and tsunami and the consequent nuclear disaster at the Fukushima Dai-ichi power plant (Synolakis and Kânoglu, 2015). These events prompted a systematic re-evaluation of current tsunami hazard estimates.
In the past, tsunami hazard was mostly studied through simulations of one or several "worst credible" earthquake scenarios and the associated tsunamis (e.g. Lorito et al., 2008; Tonini et al., 2011; Løvholt et al., 2012a). Such an approach can be useful as a first screening to inform emergency managers on the potential of tsunamis and their features, or to realize very detailed assessments of specific scenarios. For near-field sources, however, offshore tsunami metrics may leave the coastal inundation behind: indeed, coastal uplift or subsidence due to the coseismic displacement induced by local earthquakes can modify the actual tsunami intensity, which can be reduced or enhanced nearby the target area (Mueller et al., 2014). This is particularly important to preserve the tail of the hazard curves (i.e. the largest intensities), as demonstrated by the disaggregation analysis in Selva et al. (2016). For this reason, a special treatment is needed for local sources.
For illustrative purposes, we considered as a use-case a target site in the Central Mediterranean, that is, the Milazzo oil refinery (Sicily, Italy), in the Southern Tyrrhenian Sea. This site was previously selected within the framework of the EU project STREST (http://www.strest-eu.org/) as a test case for multi-hazard stress test development for non-nuclear critical infrastructures.
It is worth noting that this paper is strictly methodological and aims to propose a computationally efficient procedure for local scale SPTHA, rather than to provide a realistic site-specific hazard assessment. In fact, for the sake of simplicity, and in order not to deflect attention from the core of the method, no effort has been dedicated to constraining and testing the (regional) seismic rates, the local seismic sources and their geometry and dynamics, including slip distributions, or the accuracy of the topo-bathymetric data used in tsunami simulations.
The paper is organized as follows: section 2 summarizes the general outline of the method for SPTHA evaluation, as proposed by Lorito et al. (2015) and Selva et al. (2016), while the innovative developments are described in section 3; section 4 focuses on the illustrative application; conclusive remarks are drawn in section 5.

Method outline
Using regional scale SPTHA as input for local scale (site-specific) SPTHA, through the approach proposed by Lorito et al. (2015), is a task already foreseen by Selva et al. (2016) (see Fig. 1 therein). However, this possibility was neither applied nor tested in practice, since their main focus was the application to regional scale analyses. The details of the general method have already been thoroughly described and validated in the previous studies. Here we summarize the basic concepts.
The whole general procedure for site-specific SPTHA can be outlined in four STEPs: (1) the definition of earthquake scenarios and their probability, allowing a full exploration of source aleatory uncertainty; (2) the computation, for each source, of tsunami propagation up to a given offshore isobath; (3) the selection of the relevant scenarios for a given site through a filtering procedure, and the related high-resolution tsunami inundation simulations; (4) the assessment of local SPTHA with joint aleatory and epistemic uncertainty quantification by means of ensemble modeling, including modeling alternatives possibly implemented at STEPs (1)-(3).
In STEP (1), all the modeled earthquakes must be defined for different seismic regions, which are assumed to be independent from each other. The earthquake parameters and their logically ordered conditional probabilities are treated by means of an event tree technique. We emphasize that the common assumption that tsunami hazard is dominated at all time scales by subduction zone earthquakes is relaxed: non-subduction faults, unknown offshore faults and diffuse seismicity around major known and well mapped structures are all taken into account. This strategy attempts to prevent biases in the hazard due to incompleteness of the source model (Basili et al., 2013; Selva et al., 2016). The seismicity related to the main and better known fault interfaces is treated separately from the rest of the crustal and diffuse seismicity. A similar approach has been used in the recent TSUMAPS-NEAM project (http://www.tsumaps-neam.eu), which provided the first SPTHA model for the North-Eastern Atlantic, Mediterranean and connected seas (NEAM) region.

In STEP (2), for each scenario retrieved from STEP (1), the corresponding tsunami generation and propagation is numerically modeled, and the pattern of offshore tsunami wave height (Hmax) is evaluated on a set of points along the 50 m isobath, in front of the target area. As in Lorito et al. (2015), these points may be limited to a profile in front of the site.
The length of this control profile must be tuned depending on the morphology and the extension of the target coast: a trade-off has to be reached, as too few points could make the profile insufficiently representative, while too many points could degrade the performance of the subsequent filtering procedure (Lorito et al., 2015). The optimal length is actually the shortest one that makes the offshore hazard curves stable with respect to the source selection: any further increase in length would increase the computational effort without significantly altering the results.
In STEP (3), using the offshore Hmax profiles calculated at STEP (2), a filtering procedure is implemented to select a subset of relevant sources, based on the similarity of the associated tsunami intensity, not on the similarity or spatial proximity of the sources themselves. The selected sources, each of them representative of a cluster of sources producing comparable tsunamis offshore the target area, are then used for explicit inundation modeling on high-resolution topo-bathymetric grids.
This approach allows for a considerable reduction of the computational cost, while preserving accuracy. However, Lorito et al. (2015) considered a limited set of sources. The extension to a much larger set of potential sources requires some modifications to the method which, along with several other improvements, are proposed in this study, as reported in section 3.
In STEP (4), local SPTHA is quantified. The inundation maps for each representative scenario from STEP (3) are aggregated according to the probabilities provided at STEP (1), assigning the total probability of a cluster to the representative scenario.
Aleatory and epistemic uncertainty are simultaneously quantified by means of an ensemble modeling approach (Marzocchi et al., 2015; Selva et al., 2016) over alternative implementations of the previous steps. In practice, STEPs (1) to (3) can be iterated for each alternative model, and these alternatives can be weighted according to their credibility and the possible correlations among the models. The results are finally integrated through ensemble modeling into a single model which expresses both aleatory and epistemic uncertainty.

Implementation of an improved filtering methodology
The described method has been tested by both Selva et al. (2016) and Lorito et al. (2015). However, Lorito et al. (2015) focused on the filtering procedure of STEP (3), adopting a simplified configuration for the source variability, in which sources were allowed only within the Hellenic Arc, that is, an area relatively smaller than the full aleatory variability. On the other hand, Selva et al. (2016) applied the approach to a regional study extended to the Ionian Sea, in the central Mediterranean. The quantification of the local hazard was instead discussed only in theory, without proposing any application.
The original method by Lorito et al. (2015) adopted a two-stage procedure. In the first stage, scenarios giving a negligible contribution to Hmax offshore the target area were removed, assuming they would lead to negligible inundation. Hereinafter, we call this "Filter H".
As a second filtering stage, a Hierarchical Cluster Analysis (HCA) was carried out, separately for each earthquake magnitude class included in the seismicity model, under the assumption that sources which produce similar offshore Hmax along the control profile will also produce similar inundation patterns. The distance between two Hmax patterns, that is, between two different scenarios, was measured by a cost function previously used to compare tsunami waveforms in source inversion studies (e.g., Lorito et al., 2010; Romano et al., 2010). For each cluster, the scenario closest to the centroid was selected as the reference scenario, with an associated probability corresponding to the probability of occurrence of the entire cluster. The optimal number of clusters (i.e., the "stopping criterion") was assessed by analyzing the intra-cluster variance as a function of the number of clusters and selecting the largest value still producing significant changes (Lorito et al., 2015, and references therein).
We implemented a different strategy to further reduce the number of explicit tsunami simulations and introduced a separate treatment for local and remote sources. In particular, the source scenario filtering procedure was revised to improve both the computational efficiency and the accuracy, allowing for full scalability to the source variability of typical SPTHA (millions of scenarios located all over an entire basin). A schematic diagram of the new procedure is sketched in Fig. 1, with (right, STEP (3b)) or without (left, STEP (3a)) the separation between near- and far-field sources.
We still kept Filter H, but also adopted an additional filter on the occurrence probability (hereinafter "Filter P", see Fig. 1), discarding scenarios whose cumulative mean annual rate (mean of the model epistemic uncertainty) is below a fixed threshold.
In practice, scenarios were sorted by increasing mean annual rate and removed from the bottom until their cumulated rate reached the selected threshold. This further reduces the number of required numerical simulations. On the other hand, this operation introduces a controlled downward bias on the estimated hazard, whose upper limit corresponds (on average) to the probability threshold of the filter. This threshold can be set at a negligible level in the framework of the overall analysis and/or with respect to other uncertainties. In addition, it can be empirically checked to which extent this affects the results by analyzing the offshore hazard curves at the control points. This check was quantitatively done by computing the maximum deviation between the mean hazard curves at each control point before and after Filter P was applied. We also notice that, as reported in Fig. 1, Filter P was always applied after Filter H for optimization reasons: the cumulative rate curve is lowered by the removal of small events (i.e., those producing small Hmax), which are typically characterized by high occurrence probabilities. As a consequence, a greater number of scenarios can be removed before reaching the imposed threshold, making Filter P more efficient.
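The Filter P bookkeeping described above can be sketched in a few lines. This is an illustrative function, not the authors' code; the array layout is assumed. The cumulated rate of the discarded set stays strictly below the threshold, which bounds the downward bias:

```python
import numpy as np

def filter_p(rates, threshold):
    """Discard the lowest-rate scenarios until their cumulated mean annual
    rate would reach `threshold`; return a boolean keep-mask (True = kept).

    rates: mean annual rates of the scenarios (any order);
    threshold: maximum total rate that may be removed (the bias bound).
    """
    rates = np.asarray(rates, dtype=float)
    order = np.argsort(rates)                 # sort by increasing mean annual rate
    cum = np.cumsum(rates[order])             # cumulated rate of the dropped prefix
    n_drop = np.searchsorted(cum, threshold)  # longest prefix with cum < threshold
    keep = np.ones(rates.size, dtype=bool)
    keep[order[:n_drop]] = False
    return keep
```

Because `searchsorted` counts only entries strictly below the threshold, the total removed rate never reaches it, consistent with the "controlled bias" argument above.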
The cluster analysis stage was also modified. First, we used a different algorithm, since the large number of source scenarios can in some cases make the HCA a computationally unaffordable task. We implemented the more efficient k-medoids clustering procedure (Kaufman and Rousseeuw, 2009; Park and Jun, 2009). Moreover, the cluster analysis was performed separately for groups of scenarios with similar mean <Hmax> along the profile, instead of grouping scenarios per earthquake magnitude class. This makes the partitioning more efficient, as the earthquake magnitude cannot be considered the only parameter controlling the tsunami intensity, as it was for the limited set of sources adopted by Lorito et al. (2015). The cluster distance is still measured as in Lorito et al. (2015), but we updated the stopping criterion, which is now related to the maximum allowed intra-cluster variance, rather than being a blind optimization of the number of clusters.
More specifically, to control the dispersion within each cluster, we set a threshold for the maximum allowed squared Euclidean distance.This threshold was empirically fixed by comparing the offshore hazard curves before and after the analysis and assuming an acceptable range of variability, in analogy with the approach used for Filter P.
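The combination of k-medoids partitioning and the dispersion-based stopping criterion can be sketched as follows. This is a simplified illustration under stated assumptions: a naive alternating k-medoids (production implementations such as PAM are more sophisticated) and the squared Euclidean distance as the dissimilarity, which is the metric of the stopping criterion above; the actual distance between offshore Hmax profiles uses the cost function of Lorito et al. (2015):

```python
import numpy as np

def kmedoids(X, k, n_iter=50, seed=0):
    """Naive alternating k-medoids on the rows of X (squared Euclidean distance)."""
    rng = np.random.default_rng(seed)
    D = ((X[:, None, :] - X[None, :, :]) ** 2).sum(axis=2)  # pairwise sq. distances
    medoids = rng.choice(len(X), size=k, replace=False)
    for _ in range(n_iter):
        labels = np.argmin(D[:, medoids], axis=1)           # assign to nearest medoid
        new_medoids = medoids.copy()
        for j in range(k):
            members = np.flatnonzero(labels == j)
            if members.size:                                # most central member
                new_medoids[j] = members[D[np.ix_(members, members)].sum(axis=0).argmin()]
        if np.array_equal(new_medoids, medoids):
            break
        medoids = new_medoids
    labels = np.argmin(D[:, medoids], axis=1)
    return medoids, labels

def cluster_until(X, max_sqdist):
    """Stopping criterion: increase k until every scenario lies within the
    allowed squared distance of its representative (medoid) scenario."""
    for k in range(1, len(X) + 1):
        medoids, labels = kmedoids(X, k)
        worst = ((X - X[medoids[labels]]) ** 2).sum(axis=1).max()
        if worst <= max_sqdist:
            return medoids, labels
```

Each returned medoid plays the role of the representative scenario, inheriting the total probability of its cluster.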
Finally, and probably most importantly, in order to deal more properly with the contribution from local sources, we implemented two independent filtering schemes for distant and local sources. Indeed, as mentioned in the Introduction, a special treatment for near-field sources is needed, as the coseismic deformation can modify the actual local tsunami intensity at the nearby coast, due to coastal uplift or subsidence. As a consequence, the offshore tsunami amplitude profiles generated by such events may fail to be representative of the coastal inundation, as assumed by Lorito et al. (2015), and a separate modeling is required, using the coseismic displacement as the metric in the cluster analysis. This issue was somehow hidden in Lorito et al. (2015), again due to the relatively small aleatory variability they considered: the sources were either in the far-field or in the near-field, depending on the target site, but never mixed together. In addition, this separation may favor some refinement of the near-field source discretization and modeling, such as a denser sampling of geometrical parameters and/or the introduction of heterogeneous slip distributions.
To test the proposed method, we replaced STEP (3) either with STEP (3a) or with STEP (3b), as displayed in Fig. 1. The workflow of STEP (3a) is substantially equivalent to the original procedure by Lorito et al. (2015), improved by all the mentioned changes, except the separate treatment of near- and far-field sources, which is included in STEP (3b). STEP (3a) is thus used in this study as a term of comparison for the new scheme.
In STEP (3a), three sequential tasks were performed, namely Filter H, Filter P and the cluster analysis based on the offshore tsunami amplitudes.
In STEP (3b), local and distant sources were first identified, based on the coseismic deformation produced by the earthquake near and on the target coast. The procedure was then split into two parallel paths, which need to be merged at the end when evaluating SPTHA (Fig. 1). As far as the far-field scenarios are concerned, the same workflow as in STEP (3a) was followed.
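The initial near-/far-field split amounts to a simple mask over precomputed coseismic vertical displacements. A minimal sketch, assuming the displacements at the near-field control points have already been computed for each scenario (the function name and the 0.5 m default, taken from the use-case in section 4, are illustrative):

```python
import numpy as np

def split_near_far(uz, threshold=0.5):
    """Classify scenarios as near-field when the coseismic vertical
    displacement they produce at any control point reaches the threshold.

    uz: array (n_scenarios, n_points) of vertical coseismic displacements (m)
        at the near-field points (offshore isobath + inland points);
    threshold: displacement (m) above which the offshore tsunami profile is
        no longer trusted to represent the coastal inundation.
    Returns a boolean array: True = near-field, False = far-field.
    """
    return np.abs(np.asarray(uz)).max(axis=1) >= threshold
```

The two resulting subsets then follow the separate filtering paths of Fig. 1.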
Near-field scenarios, which in principle should be individually modeled, were also filtered in order to reduce the number of explicit inundation simulations. Filter H was applied as well, but choosing a smaller threshold value: a more conservative approach is indeed recommended at this stage, as offshore values could be strongly misleading when coseismic deformation of the coast occurs. Then, Filter P was employed and, finally, a cluster analysis was performed by comparing the coseismic vertical deformations, instead of the (unrepresentative) offshore tsunami amplitudes. For each local source, the coseismic displacement was calculated on a 2D grid centered around the fault and having a size equal to three times the fault length. Then, the cluster analysis was carried out, separately for each magnitude, by comparing the coseismic fields at each point of the 2D grid. In this case, the cluster analysis is based on the squared Euclidean distance instead of the cost function, and the stopping criterion is likewise defined on the Euclidean distance, since the coseismic field can take both positive and negative values.

The selected earthquake scenarios from STEP (3a) or from the two branches (near- and far-field) of STEP (3b) were then used for high-resolution inundation simulations and combined together in STEP (4) when evaluating SPTHA. A practical example of the whole procedure is illustrated in the next section.
The Milazzo oil refinery (Sicily, Italy) use-case

The described procedure was applied to a test site, Milazzo, located on the north-eastern coast of Sicily (Italy), within the Mediterranean Sea. The site houses an oil refinery, one of the non-nuclear critical infrastructures selected as case studies in the framework of the EU project STREST (http://www.strest-eu.org/). Due to the illustrative purposes of the present work, strong assumptions were imposed during the filtering procedure for the sake of simplicity. More sanity and sensitivity tests for a finer tuning of thresholds and modeling would be mandatory for a real application. For example, the modeling of near-field scenarios is expected to depend on the source parameters, especially the heterogeneous slip distribution on the fault plane (e.g. Geist and Oglesby, 2014), which was not included here. Therefore, the computational effort of a real assessment, including a wider source variability and more conservative thresholds, is expected to be more demanding than in this case study.
Regarding STEP (1), the adopted seismicity model was previously developed in the framework of the EU project ASTARTE (http://www.astarte-project.eu/). This model extends the method applied to the Ionian Sea in Selva et al. (2016) to the entire Mediterranean Sea, including the subduction interfaces of the Calabrian and Hellenic Arcs as well as crustal seismicity in the whole basin (see Fig. 2a). On subduction zones, events of different magnitudes and positions on the whole interface are allowed, disregarding the geometry uncertainty of the slab; conversely, crustal seismicity is allowed to occur with any meaningful geometry and mechanism in the whole seismogenic volume, at different magnitudes and depths. The complete set of sources retrieved from STEP (1) contains about 40 million elements, among which 1,701,341 scenarios actually affect the target site (Hmax > 0.05 m offshore Milazzo).
Tsunami amplitudes (STEP (2)) were computed on a control profile made of 11 points offshore the Milazzo target area (on the 50 m isobath), as reported in Fig. 2a. To save computational time, scenarios from STEP (1) were not individually simulated, but were obtained by linear combination of pre-calculated tsunami waveforms produced by Gaussian-shaped unitary sources (Molinari et al., 2016). The propagation of the Gaussian sources was modeled by the Tsunami-HySEA code, a non-linear hydrostatic shallow-water multi-GPU code based on a mixed finite difference/finite volume method (de la Asunción et al., 2013; Macías et al., 2016, 2017). STEP (3) was addressed by independently performing the two branches (3a) and (3b), as discussed in the previous section, and then comparing the results to assess the importance of the separate treatment of the near-field sources.
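The linear-combination shortcut exploits the linearity of offshore propagation: each scenario's waveform at the control points is a weighted sum of the precomputed unit-source waveforms. A minimal sketch, with hypothetical inputs (`G`, the database of unit-source waveforms, and the scenario weights `w` stand in for the actual Gaussian-source database of Molinari et al. (2016)):

```python
import numpy as np

def combine_unit_sources(G, w):
    """Hmax of one scenario from precomputed unit-source waveforms.

    G: array (n_unit, n_points, n_times) of waveforms of the unitary sources
       at the control points;
    w: weights (n_unit,) describing how the scenario's slip projects onto
       the unit sources.
    Returns Hmax (n_points,): maximum of the combined waveform at each point.
    """
    eta = np.tensordot(w, G, axes=1)  # combined waveform, shape (n_points, n_times)
    return eta.max(axis=1)
```

This is why millions of scenarios become affordable at STEP (2): only the unit sources require explicit simulation.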
In STEP (3a), thresholds were fixed at 1 m for Filter H and 10⁻⁵ yr⁻¹ for Filter P. This resulted in discarding scenarios with individual mean annual rates below ∼10⁻⁹ yr⁻¹, causing a maximum bias on the offshore mean hazard curves of about 10% in the considered range of tsunami intensities, with respect to the curves obtained without Filter P, as explained in Section 3. At the end of the filtering procedure, imposing a threshold equal to 0.2 on the intra-cluster variance, we obtained 776 clusters, each associated with a representative scenario, that is, a reduction of more than 99%. It is worth stressing that the efficiency of the filters is here artificially enhanced by the imposed high thresholds.
In STEP (3b), we considered as local scenarios, requiring a separate processing, the sources generating a coseismic vertical displacement greater than or equal to 0.5 m on a set of near-field points, that is, the 11 control points on the 50 m isobath plus 95 inland points, strategically located at the edges of the refinery storage tanks, as shown in Fig. 2b. We found 4721 scenarios in the near-field (see Fig. 2a). Afterward, for both branches we applied Filters H and P as well, using the following thresholds: for far-field scenarios, Filter H = 1 m and Filter P = 5 × 10⁻⁶ yr⁻¹; for near-field scenarios, Filter H = 0.1 m, according to the more conservative approach described in the previous section, and Filter P = 5 × 10⁻⁶ yr⁻¹. Note that the Filter P threshold was set to half the value used in STEP (3a), in order to keep the total maximum theoretical bias on the hazard curves at 10⁻⁵ yr⁻¹ (as in STEP (3a)), considering that Filter P is separately applied to both far- and near-field scenarios. Then, the cluster analysis was carried out on the tsunami amplitudes for far-field scenarios (using a threshold equal to 0.2 on the intra-cluster variance) and on the coseismic deformation for near-field scenarios (using a 10% threshold for the intra-cluster variance). We obtained 634 and 520 clusters for remote and local sources, respectively, that is, a total of 1154 scenarios to be explicitly modeled, again corresponding to a reduction of more than 99% of the initial set of sources.
Inundation simulations at STEP (3) were carried out again with the Tsunami-HySEA code, exploiting its nested grid algorithm. We used 4-level nested bathymetric grids with a refinement ratio equal to 4 and increasing resolution from 0.4 arc-min (∼740 m) to 0.1 arc-min (∼185 m), 0.025 arc-min (∼46 m) and 0.00625 arc-min (∼11 m). The largest grid was obtained by resampling the SRTM15+ bathymetric model (http://topex.ucsd.edu/WWW_html/srtm30_plus.html). The finest three grids were produced by interpolation from TINITALY (inland; Tarquini et al., 2007, 2012) and EMODNET (offshore; http://www.emodnet-bathymetry.eu/), working on grids of 0.00625 arc-min that were resampled at 0.1 and 0.025 arc-min. A picture of the telescopic nested grids is provided in Fig. S1 of the Supplementary Material. The initial conditions were provided differently for subduction and crustal seismicity. The subduction scenarios were simulated by modeling the slab as a 3D triangular mesh honoring the interface profile and using unitary Okada sources associated with each element of the mesh (i.e., with each triangle) as Green's functions (Okada, 1985; Meade, 2007). For crustal events, the initial sea level elevation was obtained by modeling the dislocation on rectangular faults according to the Okada model. A Kajiura-like filter for the sea-bottom/water-surface transfer of the dislocation was also applied (Kajiura, 1963). An overall simulated time of 8 hours was fixed for each simulation. The results were stored as maximum wave height (Hmax, m) and maximum momentum flux (Mmax, m³s⁻²), at each point of the inner grid.
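The Kajiura-like filter admits a compact spectral sketch under the simplifying assumption of a uniform water depth: each wavenumber component k of the sea-bottom displacement is attenuated by 1/cosh(kh) before being transferred to the sea surface, which low-pass filters features much shorter than the depth. This is only an illustration of the principle; the implementation actually used in the study may differ:

```python
import numpy as np

def kajiura_filter(dz, dx, depth):
    """Transfer a sea-bottom vertical displacement field to the sea surface
    by attenuating each wavenumber by 1/cosh(k*h) (flat sea floor, uniform
    depth h; grid spacing dx in the same units as depth)."""
    ny, nx = dz.shape
    kx = 2.0 * np.pi * np.fft.fftfreq(nx, d=dx)
    ky = 2.0 * np.pi * np.fft.fftfreq(ny, d=dx)
    k = np.sqrt(kx[None, :] ** 2 + ky[:, None] ** 2)
    return np.real(np.fft.ifft2(np.fft.fft2(dz) / np.cosh(k * depth)))
```

The DC (k = 0) component is untouched, so the total displaced volume is preserved while sharp peaks are smoothed.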
At STEP (4), SPTHA was evaluated in parallel using results from both STEPs (3a) and (3b), in order to compare the outcomes of the two different workflows and estimate the impact of the special treatment of near-field sources on the site-specific hazard assessment. Note that alternative models for the epistemic uncertainty were considered only at STEP (1), that is, only as far as the probabilistic earthquake model is concerned, since the Selva et al. (2016) model was used.
Figures 3 to 5 compare the results from STEPs (3a) and (3b), in terms of mean hazard curves and inundation (both probability and hazard) maps for Hmax. In the Supplementary Material, analogous figures for Mmax are provided (Fig. S2 to S4). At first glance, differences are evident in both the curves and the maps. It is worth noting that results at Hmax < 1 m cannot be considered meaningful, as that is the chosen threshold for Filter H in STEP (3a) and in the far-field branch of STEP (3b).
Curves and maps will be described in more detail in the following.
The hazard curves in Fig. 3 (panels a and b) show the mean (mean of the model epistemic uncertainty) exceedance probability in 50 yr for Hmax (evaluated assuming a Poisson process, as in Selva et al. (2016)), plotted for each point of the finest resolution grid. For small values of Hmax, the envelope of the curves obtained from STEP (3b) is systematically higher than that from STEP (3a), with a stronger negative gradient up to 3 m. This means that the largest probabilities would be underestimated without the correction for the near-field sources. For values greater than 3 m, the differences between the envelopes of the families of curves are less pronounced and the maximum hazard is slightly, although systematically, lower for STEP (3b).
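The Poisson conversion behind these curves is standard: for each intensity threshold, the mean annual rates of all scenarios exceeding it are summed, and the total rate λ is converted to an exceedance probability in an exposure time T as P = 1 − exp(−λT). A sketch with hypothetical inputs (scenario mean annual rates and their simulated Hmax at one grid point):

```python
import numpy as np

def exceedance_probability(rates, hmax, thresholds, T=50.0):
    """Mean hazard curve at one point under a Poisson assumption.

    rates: mean annual rates of the (representative) scenarios;
    hmax: simulated Hmax of each scenario at this point;
    thresholds: intensity values at which the curve is evaluated;
    T: exposure time in years.
    """
    rates, hmax = np.asarray(rates), np.asarray(hmax)
    lam = np.array([rates[hmax > h].sum() for h in thresholds])  # exceedance rates
    return 1.0 - np.exp(-lam * T)
```

In the clustered setting, each representative scenario carries the total rate of its cluster, so the same formula applies after filtering.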
A more complex pattern emerges when analyzing the one-by-one relative differences in terms of exceedance probability (in 50 yr), as a function of Hmax, between the STEP (3a) and (3b) curves at each grid point (panel c of the same figure). Note that a positive difference means that STEP (3a) overestimates the probability for a given Hmax value. For values below 3 m, the median confirms the underestimation without the correction, although individual grid points are dispersed and assume both positive and negative values. For values greater than 3 m, there is a definite overestimation without the correction (STEP (3a)), both as far as the median and the individual points are concerned. In Fig. 3d, the relative differences are also shown in terms of Hmax as a function of exceedance probability (in 50 yr). In the low probability region, supposedly corresponding to high Hmax, the overestimation by STEP (3a) is confirmed; conversely, for exceedance probabilities greater than ∼10⁻⁴, which are likely to correspond to small Hmax, the differences are almost all negative. In other words, in this range, for a given average return period (ARP), the predicted Hmax turns out to be greater when STEP (3b) is used.
Probability and hazard inundation maps can be achieved by vertically and horizontally cutting the hazard curves at chosen fixed values, in order to give a geographical representation of results.As each hazard curve corresponds to a grid point, the probability maps are obtained by plotting on a map all the probability values for a fixed value of the intensity metric.
Instead, in the hazard maps the intensity values are plotted for a fixed exceedance probability, corresponding to a given ARP.
For the selected values, the maps confirm what we already discussed about the curves: the probability maps show mostly positive relative differences inland, even larger than 50%; these differences are positive in a larger number of points, even offshore, for the higher intensity. On the other hand, in the hazard maps the differences are negative, namely Hmax retrieved from STEP (3b) is smaller than from STEP (3a), as the analyzed ARPs lie in the low intensity range. We also notice that, as expected, the inundated area decreases with increasing Hmax.
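The vertical and horizontal cuts of the hazard curves can be sketched with simple interpolation; array names are illustrative. Since the exceedance probability decreases with intensity, and `np.interp` requires increasing abscissae, the horizontal cut works on reversed arrays:

```python
import numpy as np

def probability_map(curves, intensities, h_fixed):
    """'Vertical cut': exceedance probability of a fixed intensity at each
    grid point, interpolating each point's hazard curve at h_fixed.

    curves: array (n_points, n_intensities), one hazard curve per grid point;
    intensities: increasing intensity values shared by all curves.
    """
    return np.array([np.interp(h_fixed, intensities, c) for c in curves])

def hazard_map(curves, intensities, p_fixed):
    """'Horizontal cut': intensity with a fixed exceedance probability at
    each grid point (p_fixed corresponds to a given ARP)."""
    return np.array([np.interp(p_fixed, c[::-1], intensities[::-1]) for c in curves])
```

Plotting the returned vector over the grid coordinates yields the probability or hazard inundation map, respectively.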
Further details about the comparison can be found by analyzing the curves and the maps for the maximum momentum flux reported in the Supplementary Material. We just note that the envelope of the hazard curves obtained from STEP (3b) is definitely above the curves from STEP (3a) in the entire range of Mmax (see Fig. S2); moreover, when the correction for near-field is taken into account, the inundation maps (Fig. S3 and S4) highlight an enhanced current vorticity near the docks, which is a known effect due to the flow separation at the tip of a breakwater (Borrero et al., 2015). As the probability and hazard maps aggregate several different sources, the hazard integral may tend to average and cancel out different source effects, while enhancing local propagation features. The presence of such persistent, physically meaningful effects only in the maps retrieved using STEP (3b) confirms the importance of the special treatment. In other words, the blind cluster analysis (STEP (3a)), exclusively based on the offshore tsunami amplitudes, likely produced a non-representative selection of the important scenarios.

Conclusions
We proposed a computationally efficient approach to achieve robust assessment of site-specific SPTHA, developing an improved version of the method by Lorito et al. (2015) and Selva et al. (2016).
The procedure is based on four STEPs, which can be summarized as follows: (1) the definition of the whole set of earthquake scenarios affecting the target site, fully exploring the source aleatory uncertainty, and of their mean annual rates; (2) the computation of tsunami propagation up to an offshore isobath; (3) the implementation of a filtering procedure to select relevant scenarios for the target site, which are then explicitly modeled; (4) the assessment of local SPTHA through an ensemble modeling approach, to jointly quantify aleatory and epistemic uncertainty.
In the present work we mainly focused on STEP (3), modifying the filtering procedure to enhance the computational efficiency and introducing a separate treatment for sources located in the near-field, to take into account the effect of the coseismic deformation on the tsunami intensity. This is crucial, as the original method is based on the assumption that the offshore tsunami profile is representative of the inundation at the nearby coast, which is actually true only if no coseismic deformation of the coast is involved; otherwise, seafloor uplift or subsidence invalidates the assumption, as the tsunami intensity is no longer predictable from the offshore wave amplitudes alone. Consequently, local sources must be treated separately and, to ensure a feasible computational effort by reducing the number of explicit inundation simulations, different filtering procedures must be employed in the far- and near-field. This may also allow for a specific and more detailed parameterization of the near-field sources, to which the local hazard is known to be more sensitive.
We tested the procedure on a case study, Milazzo (Sicily), a test site selected within the STREST project (http://www.strest-eu.org/). The work has only illustrative purposes and is not to be intended as a real hazard assessment at that site, due to some simplifications in the implemented model. The results highlight that near-field sources play a fundamental role, as expected, and confirm that they must be specifically dealt with when evaluating site-specific SPTHA. Moreover, the newly implemented filtering procedure allows for a considerable reduction of the number of tsunami inundation simulations and therefore of the computational cost of the analysis. It is worth stressing that in this specific application the computational efficiency was artificially enhanced by limiting the source variability as well as by imposing high filter thresholds. In fact, a real assessment is expected to deal with a greater number of scenarios, provided that a finer tuning of the threshold values is carried out. This may in particular affect the computational cost related to the analysis of the near-field sources, for example when using stochastic slip distributions.

Figure 3. a) Mean hazard curves for Hmax at all points within the highest resolution grid, as obtained from STEP (3a) of the SPTHA procedure (see text and Fig. 1). Grey and blue colors refer to inland and offshore points, respectively. The bold black line represents the envelope of the curves from STEP (3b). Red dashed lines represent the values used to obtain the probability (Fig. 4) and hazard (Fig. 5) inundation maps. b) Same as a) but using STEP (3b). The bold black line is the envelope of the curves from STEP (3a). c) Relative differences in terms of exceedance probability (in 50 yr) as a function of Hmax, computed as [(3a) − (3b)]/(3b). The black line is the median of the point distribution; the green dashed lines correspond to the 16th and 84th percentiles. d) Same as c) but in terms of Hmax as a function of the exceedance probability (in 50 yr).