Invited perspectives: Current challenges to face knowns and unknowns in natural hazard risk management – an insurer perspective

All along the value chain of Property and Casualty (P&C) insurance, various types of data feed a wide range of models to estimate the amount of loss for different probabilities and magnitudes of events, for all risks undertaken (natural hazards, financial, cyber, etc.). These data and models support the current understanding and knowledge of risks as well as the assessment of not-yet-experienced situations. The current state of what is known today about natural hazard loss modelling in the (re)insurance market is the outcome of more than 30 years of research, of the development of a wide community around natural hazards, as well as of the occurrence of natural hazard events. This paper highlights the need for an in-depth review of the current loss modelling framework, created in the early 1990s, to capture the increased complexity of each driver of the risk (exposure, hazard and vulnerability) as well as their interconnections.

of natural hazards (Ward et al., 2020). The learning curve has been steep, closely linked to the increase in computing power (e.g. enabling the development and implementation of millions of possible climatic or seismic scenarios) and to the collection of increasingly granular observation data (e.g. hazard, claims, geocoded exposure).
In terms of data collection, we can mention the massive improvement over the last decade in retrieving and completing the information characterizing the exposure, notably the location at (longitude, latitude) granularity and the physical properties of buildings. Geocoding tools and satellite data are used to complete information that is difficult to obtain at the time of underwriting, especially for individual insurance. Based on the address, it is possible to get the precise geolocation, the structure of the building, the number of floors or even the roof type, all of which are critical drivers of damage, and therefore of loss, for different perils (Ehrlich and Tenerelli, 2013; Castagno and Atkins, 2018; Kang et al., 2018; Schorlemmer et al., 2020).
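As an illustration of this kind of exposure enrichment, the short Python sketch below geocodes a policy address to (latitude, longitude) coordinates and attaches placeholder building attributes. It assumes the openly available geopy package and the Nominatim geocoding service; the exposure fields (number of floors, roof type) and the example address are illustrative assumptions, not a reference to any particular insurer's schema.

```python
# Minimal sketch: enriching a policy record with geocoded location data.
# Assumes the optional `geopy` package; the attribute names of the exposure
# record (n_floors, roof_type, ...) are illustrative, not an industry schema.
from dataclasses import dataclass
from typing import Optional

from geopy.geocoders import Nominatim


@dataclass
class ExposureRecord:
    address: str
    latitude: Optional[float] = None
    longitude: Optional[float] = None
    n_floors: Optional[int] = None      # often unknown at underwriting time
    roof_type: Optional[str] = None     # could later be filled from satellite data


def geocode_exposure(record: ExposureRecord) -> ExposureRecord:
    """Resolve the policy address to (latitude, longitude) coordinates."""
    geolocator = Nominatim(user_agent="exposure-enrichment-demo")
    location = geolocator.geocode(record.address)
    if location is not None:
        record.latitude = location.latitude
        record.longitude = location.longitude
    return record


if __name__ == "__main__":
    policy = ExposureRecord(address="Place de la Concorde, Paris, France")
    print(geocode_exposure(policy))
```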
While there has been a substantial increase in observation data made available over the last two decades (Yu et al., 2018), further investment should be made in the systematic collection of building damage and hazard magnitude information in the aftermath of natural events. This is already the case for earthquakes, as it is of primary importance for public authorities to identify buildings that are about to collapse and those that are safe to stay in. It is less the case for other perils (e.g. flood, windstorm), as there is usually less structural damage to buildings and the population is evacuated. In the case of floods, it may also be complicated to access flooded areas for a long time after the event, as it takes time for the water to recede (Molinari et al.). Could this type of work be extended to the scale of Europe or even more globally?

In terms of modelling, the occurrence of natural disasters feeds research that is then integrated into the loss modelling framework; the clustering of European windstorms is one example. Before it was accounted for, the occurrence process of European windstorms was assumed to follow a Poisson distribution, which did not allow for successive events to occur. As exhibited in Priestley et al. (2018), the clustering effect has a significant impact on the estimation of yearly aggregated losses and therefore on the dimensioning of reinsurance covers.
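To make the impact of clustering concrete, the hedged Python sketch below compares yearly aggregated losses when annual event counts follow a Poisson distribution versus an over-dispersed (negative binomial) distribution with the same mean. All parameter values and the lognormal per-event loss assumption are invented for illustration and are not taken from any actual windstorm model.

```python
# Minimal sketch of why the occurrence process matters for yearly aggregated
# losses: annual event counts drawn from a Poisson model vs. an over-dispersed
# (negative binomial) model with the same mean, combined with the same
# per-event loss distribution. All parameter values are illustrative.
import numpy as np

rng = np.random.default_rng(42)

N_YEARS = 50_000
MEAN_EVENTS_PER_YEAR = 2.0

# Poisson occurrence: variance equals the mean, no clustering.
counts_poisson = rng.poisson(MEAN_EVENTS_PER_YEAR, size=N_YEARS)

# Negative binomial occurrence: same mean, larger variance ("clustered" years).
dispersion = 1.5                              # variance = mean * dispersion
r = MEAN_EVENTS_PER_YEAR / (dispersion - 1.0)
p = 1.0 / dispersion
counts_clustered = rng.negative_binomial(r, p, size=N_YEARS)


def annual_losses(counts: np.ndarray) -> np.ndarray:
    """Aggregate lognormal per-event losses into yearly totals."""
    totals = np.zeros(len(counts))
    for i, n in enumerate(counts):
        if n > 0:
            totals[i] = rng.lognormal(mean=0.0, sigma=1.0, size=n).sum()
    return totals


for label, counts in [("Poisson", counts_poisson), ("clustered", counts_clustered)]:
    agg = annual_losses(counts)
    print(f"{label:9s} mean={agg.mean():.2f}  "
          f"1-in-100-year loss={np.quantile(agg, 0.99):.2f}")
```

With identical mean frequency and per-event severity, the over-dispersed occurrence model produces a heavier upper tail of the annual aggregate loss, which is precisely what drives the dimensioning of reinsurance covers.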
Hereafter are two topics where further investigation could be performed: (i) the assessment of uncertainty all along the modelling chain and (ii) the scalability of the loss modelling framework. Regarding the first topic, uncertainty is inherent to modelling and is today captured to some extent in the loss modelling framework through the primary uncertainty (related to the occurrence of the events in the catalogue) and the secondary uncertainty (related to the loss given that an event occurs). Going further would require, first, running the hazard event catalogue several times to test different sets of parameters and, second, running the loss simulation engine multiple times, which is costly both in terms of computing power and of inclusion in the loss modelling framework. While Beven et al. (2018) suggest a framework to deal with epistemic uncertainty in natural hazard modelling, recent work such as Noacco et al. (2019) and KC et al. (2020) has addressed uncertainty quantification with appropriate methods and tools, which could be further implemented directly as new features or components in the loss modelling framework.
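A brute-force way to explore such parameter (epistemic) uncertainty is simply to re-run the loss engine for many sampled parameter sets and inspect the spread of the resulting metric, as in the toy Python sketch below. The event catalogue, vulnerability curve and parameter ranges are assumptions made purely for illustration and do not represent any operational model.

```python
# Minimal sketch of propagating parameter (epistemic) uncertainty through a
# loss model by brute force: re-running a toy loss engine for several sampled
# vulnerability parameter sets. The toy catalogue, vulnerability curve and
# parameter ranges are purely illustrative.
import numpy as np

rng = np.random.default_rng(0)

# Toy "event catalogue": one hazard intensity per simulated event.
hazard_intensity = rng.gamma(shape=2.0, scale=1.0, size=20_000)
exposed_value = 1_000.0


def loss_engine(intensity: np.ndarray, threshold: float, slope: float) -> float:
    """Toy loss engine: a piecewise-linear damage-ratio curve applied to every
    event, returning the expected loss per event across the catalogue."""
    damage_ratio = np.clip(slope * (intensity - threshold), 0.0, 1.0)
    return float((damage_ratio * exposed_value).mean())


# Sample uncertain vulnerability parameters and re-run the engine each time.
n_runs = 500
thresholds = rng.uniform(0.5, 1.5, size=n_runs)
slopes = rng.uniform(0.1, 0.4, size=n_runs)

expected_losses = np.array(
    [loss_engine(hazard_intensity, t, s) for t, s in zip(thresholds, slopes)]
)
print(f"Expected loss per event: median={np.median(expected_losses):.1f}, "
      f"5th-95th percentile=[{np.percentile(expected_losses, 5):.1f}, "
      f"{np.percentile(expected_losses, 95):.1f}]")
```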

As for modelling scalability, the issue is to reconcile the precise evaluation performed at the moment of underwriting a client's policy (i.e. for a corporate client, a few buildings) and the evaluation at portfolio level (i.e. for a global insurer, millions of buildings).
For example, to estimate the premium of one policy for an industrial facility, it is crucial to capture: (i) the specific features of the buildings at stake (3D shape of the building, location, structure, etc.), (ii) any prevention measures that may have been put in place by the policyholder and (iii) information on hazard magnitude and frequency. Such detailed and localized information cannot be captured today in the loss modelling framework, one reason being the spatial resolution of the hazard event catalogue, which varies from one peril to another, the finest resolutions achieving 30 meters and even 5 meters in some urban areas. This generates a misalignment between the approaches used to evaluate the risk of the same policy at the different levels of assessment (from localized building level to global portfolio level). This raises the following questions: Would it be appropriate and feasible to have a unique and scalable model, able to resolve the various scales needed for each purpose? Or could there be a downscaling methodology used to refine the modelling performed at the large scale of a portfolio to the building level?
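One very simple form of such downscaling is sketched below: interpolating a coarse, portfolio-scale hazard grid to the precise coordinates of individual buildings. The grid values, coordinates and the use of plain bilinear interpolation (via scipy) are illustrative assumptions; an operational approach would also need local information such as terrain, flood defences or building attributes.

```python
# Minimal sketch of one possible downscaling step: refining a coarse,
# portfolio-scale hazard grid to individual building locations by
# interpolation. Grid values and coordinates are illustrative.
import numpy as np
from scipy.interpolate import RegularGridInterpolator

# Coarse hazard grid (e.g. flood depth in metres) on a regular lat/lon grid.
lats = np.linspace(48.80, 48.90, 11)
lons = np.linspace(2.25, 2.40, 16)
depth_grid = np.random.default_rng(1).uniform(0.0, 2.0, size=(len(lats), len(lons)))

interpolate_depth = RegularGridInterpolator((lats, lons), depth_grid)

# Building-level exposure: precise coordinates obtained from geocoding.
buildings = np.array([
    [48.853, 2.349],   # building 1 (lat, lon)
    [48.861, 2.336],   # building 2
])
depths_at_buildings = interpolate_depth(buildings)
print(depths_at_buildings)
```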
The challenges suggested in this section to improve what we know we do not know highlight the potential limitations of the current loss modelling framework and its simulation engine. For example, considering the shape of buildings in addition to their coordinates (longitude, latitude) would require changing not only the format used to capture the exposure information but also how the exposure is intersected with the hazard. There is therefore a need for an in-depth review of the current loss modelling framework to support these already identified evolutions and to increase insurers' understanding of natural hazard risk, all the more so in an ever more connected environment, which is described in the next section.
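As a toy illustration of what intersecting building shapes with hazard could look like, the sketch below (using the shapely geometry library) compares a point-based representation of a building with a footprint-based one against the same hazard footprint; the geometries, coordinates and implied damage rule are invented for the example.

```python
# Minimal sketch of intersecting exposure with hazard when the building is
# described by its footprint rather than by a single point. Geometries and
# the implied damage rule are illustrative only.
from shapely.geometry import Polygon, Point

# Hazard footprint, e.g. a flooded area (coordinates in arbitrary units).
flood_footprint = Polygon([(0.0, 0.0), (0.0, 50.0), (80.0, 50.0), (80.0, 0.0)])

# The same building captured two ways: as a point vs. as a footprint.
building_point = Point(85.0, 25.0)                       # centroid just outside the flood
building_shape = Polygon([(75.0, 20.0), (95.0, 20.0),    # footprint straddling
                          (95.0, 30.0), (75.0, 30.0)])   # the flood boundary

point_is_hit = flood_footprint.contains(building_point)
flooded_fraction = (building_shape.intersection(flood_footprint).area
                    / building_shape.area)

print(f"Point representation flagged as flooded: {point_is_hit}")
print(f"Footprint representation: {flooded_fraction:.0%} of the building flooded")
```

In this toy case the point representation misses the loss entirely, while the footprint representation reports a partially flooded building, which is the kind of difference a revised exposure format and intersection logic would have to handle.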

Challenges to face what we do not know
As stated in Baum (2015), "threats are rarely completely unknown or unquantifiable". Sometimes what we do not know is already present in the data or the model but has not yet been understood or analyzed. Since the design of the loss modelling framework in the 1990s, clients have become more interconnected (Gereffi et al., 2001), and the correlations between natural hazards and across regions are also better understood and quantified (Steptoe, 2016; Steptoe et al., 2018; Zscheischler et al., 2020).
With globalization, clients around the world have become more and more connected and dependent on each other within so