Landslides are one of the major weather-related geohazards. To assess their potential impact and design mitigation solutions, a detailed understanding of the slope processes is required. Landslide modelling is typically based on data-rich geomechanical models. Recently, machine learning has shown promising results in modelling a variety of processes. Furthermore, slope conditions are now also monitored from space, in wide-area repeat surveys from satellites. In the present study we tested whether machine learning, combined with readily available remote sensing data, allows us to build a deformation nowcasting model. A successful landslide deformation nowcast, based on remote sensing data and machine learning, would demonstrate effective understanding of the slope processes, even in the absence of physical modelling. We tested our methodology on the Vögelsberg, a deep-seated landslide near Innsbruck, Austria. Our results show that the formulation of such a machine learning system is not as straightforward as often hoped for. The primary issues are the freedom of the model relative to the number of acceleration events in the time series available for training, as well as the inherent limitations of standard quality metrics such as the mean squared error. Satellite remote sensing has the potential to provide longer time series over wide areas. However, although longer time series of deformation and slope conditions are clearly beneficial for machine-learning-based analyses, the present study shows the importance of training data quality and indicates that this technique is mostly applicable to well-monitored, more dynamically deforming landslides.

Landslides make up 6 % of the weather-related disasters globally

Where the installation of effective remediation concepts is not possible, early warning systems may help to reduce the landslide risk. Such systems should quickly adapt to changing conditions, both on the slope and globally (e.g. climate change). Moreover, such a system should be fast to implement, so that as many slopes as possible can be assessed.

Existing local systems typically provide early warning based on in situ slope monitoring

We focus on slow-moving, reactivating, deep-seated landslides on natural slopes, for which the deformation pattern is controlled by hydro-meteorological forcing. These deep-seated landslides are estimated to comprise 50 % of the landslides globally

Monitoring systems only supported by the detection of currently emerging acceleration events (e.g.

Past landslide deformation events are indicative of the future behaviour, as landslides are likely to display similar behaviour in similar situations

Deformation nowcasting could be considered an intermediate option between monitoring and modelling, integrating sensor data to estimate the current situation (the system state) and extrapolate on a short timescale. New data and data integration methods, “machine learning”, offer new possibilities for such data-driven landslide forecasting

In the past decades satellite observations have increased in quantity, shortening the time between subsequent acquisitions, as well as increasing the variables observed

Here, we present a data-driven nowcasting model with a 4 d lead time of the deformation of the Vögelsberg landslide, near Innsbruck, Austria. We use readily available, remotely sensed data and products and test various similar remote sensing products to assess their relative performance in the nowcasting model. We discuss the complications encountered during modelling: over-parametrization, the impact of optimization metrics, and the challenges due to the deep-seated landslide inertia compared to the highly dynamic forcing of the slope.

First, we introduce the modelling options and study area. Second, we present the resources available to us, and our modelling approach, followed by the results and an extensive discussion on the insights gained during the modelling exercise. Last, we provide recommendations for future data-driven landslide nowcasting exercises.

In the present study we interpret data-driven modelling as a form of naive modelling. That is, the model is unaware of the physics behind the landslide process. For data-driven models, the deformation of the slope is merely a signal to be reproduced from a collection of observations by empirical relations, in contrast to traditional, landslide geomechanical modelling that is rooted in physics. Table

The indirect transfer from precipitation and snowmelt to storage may be captured by, for example, including recent observations in a bucket model

Two distinct modelling approaches can be distinguished. A model either classifies the environmental conditions and the associated deformation response or calculates the expected deformation response from the conditions on the slope. In either case, the model parameters are tuned on historic observations such that they best reproduce the deformation signal from the conditions previously observed at the slope. Our model of the Vögelsberg landslide is a continuous model. For completeness, classification models will be introduced briefly.

Based on the assumption that similar conditions trigger a comparable deformation response

The simplest model, a linear one, is the weighted sum of the quantified conditions at the slope. However, the slope response may not be linear and is typically not immediate. Neural networks may be used to approximate any signal by forming a network of interlinked nodes that ingest and combine the conditions on the slope in subsequent layers of nodes

As more hidden layers of neurons are introduced to the system, the direct link to the (time series) input is lost as combinations are made. Furthermore, an activation function may be applied to scale the output of each node, especially to normalize the response and filter outliers, at the cost of introducing non-linearity to the system. The number of parameters, i.e. the degrees of freedom of the model, scales with the number of input variables. When historic observations are supplied as additional observations, each of them requires its own model parameters and increases the degrees of freedom of the model.
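As a concrete illustration of how lagged inputs inflate the degrees of freedom, a dense layer needs one weight per (variable, lag) pair plus one bias per neuron. The numbers below are illustrative bookkeeping, not the configuration used in this study:

```python
def dense_layer_params(n_variables, n_lags, n_neurons):
    """One weight per (variable, lag) pair plus one bias, per neuron."""
    return n_neurons * (n_variables * n_lags + 1)

# 11 variables with a 32 d history feeding a single neuron
# already requires 11 * 32 + 1 = 353 parameters.
print(dense_layer_params(11, 32, 1))  # 353
```

Even this minimal configuration approaches the order of magnitude of the available daily samples, which is the core of the over-parametrization concern.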

State-aware models, such as recurrent neural networks

Models based on recurrent neural networks suffer from computational difficulties during optimization, where gradients may vanish

The challenge specific to forecasting and nowcasting is the absence of information on the future slope conditions. The latest information available to the system is the current conditions and the last estimation of the system state. Auto-regressive models predict these conditions as well, so that subsequent forecasts may use these environmental conditions in their models. However, precipitation especially is governed by external influences and may not be predictable from the other forcing parameters in the system. As an alternative, forecasts may be included in the model. However, this would require forecasts for all input variables. Therefore, such a system was deemed unsuitable for this application.

Special attention should be paid to the robustness of the model. Even 10 years of daily observations will result in a time series of fewer than 4000 reference observations, far fewer than is desirable for use in more complex machine learning models such as neural networks

The data-driven modelling possibilities are effectively infinite, and the generic character of many data-driven models suits the diversity of available remote sensing variables. However, because the time series are short in comparison with those of typical machine learning studies, one should stay close to the physics and processes to limit the freedom of the model towards a solution. Therefore, one has to ensure a balance between the number of parameters to be estimated and the training and validation data available.

The Vögelsberg is a deep-seated landslide, located in the Wattens basin near Innsbruck, Austria (Fig.

In 2016 a Leica TC1800 automated total station (ATS) was installed in Wattenberg, opposite Vögelsberg, by the Division of Geoinformation of the Federal State of Tyrol. The system surveyed each of the 53 benchmarks every hour. Extensive corrections to the measurements were necessary, primarily due to the instability of the monument on which the total station is located. In this study a series of pre-processed range measurements was used, fixed to stable benchmarks around the active area that showed no signs of landslide deformation damage. The accuracy of this time series was estimated to be in the order of

Daily deformation rate of the Vögelsberg landslide at benchmarks “D5_1” and “D_WS_1” (Fig.

The deformation of the Vögelsberg landslide is a complex response to the hydro-meteorological conditions in the catchment, in particular precipitation and (delayed) infiltration from snowmelt. A binary prediction of stability/instability or acceleration/deceleration is insufficient for the Vögelsberg landslide, as the slope is undergoing continuous deformation.

The deformation rate, derived from the total station range measurements, was smoothed by a moving average filter until few, noise-induced, negative (up-slope) deformations remained, while maintaining the highest possible temporal resolution (Fig.

Our model's aim is to predict the landslide deformation based solely on the current conditions at the slope. No recent deformation observations or predefined geomechanical model will be available to our model during prediction. The main model constraints are that we have a relatively limited number of data points (1482 samples) and will work with readily available remote sensing data and products. Furthermore, we set the objective to model with daily time steps and a forecast lead time of 4 d. A successful prediction of the deformation rate 4 d ahead will demonstrate the model's ability to predict a tipping point based on the environmental conditions (acceleration, peak, deceleration). Moreover, a 4 d prediction would give sufficient time for further investigation as part of an early warning system.

With these constraints in mind, a system was designed based on a parsimonious recurrent neural network. First, we will introduce the data available. Second, an overview is provided of the pre-processing applied to the input variables. Third, we provide the specifications of our model. Last, the training and validation of the model are discussed.

The model variable selection is based on the analysis of factors of influence

Our method is designed with the intent to be generally applicable. Therefore, except for the deformation, remote sensing products were used, as they are likely to be available elsewhere as well. Where available, redundant products that represent the same or similar quantities were included to assess their relative performance in the nowcasting model. The correlation between the products is limited (

The desired output of our model is a daily, 4 d ahead prediction of the landslide deformation rate at benchmarks “D_WS_1” and “D5_1”. Reference, training, and validation samples are provided by the automated total station located on the Wattenberg, opposite Vögelsberg (Fig.

Selection of time series considered for integration into the model. Deformation variables are marked “D”, while slope conditions, input variables to the model, are marked “V”. Observations are marked “S” for directly observed variables processed and available within the time frame of a nowcasting system, “R” for reanalysis variables, and “M” for variables modelled within this study (see Sect.

Daily precipitation information is provided by the Integrated Multi-satellitE Retrievals for GPM (IMERG) algorithm of the Global Precipitation Measurement mission (GPM)

Soil moisture, especially at depth, cannot be observed directly from space at a high enough resolution for this application. The low-latency, operational products from the Copernicus Land Service, Soil Water Index and Surface Soil Moisture, are frequently unavailable either due to unfavourable slope topography or due to snow cover. Alternatives are provided by SMAP L4

The model is fed with the 11 variables defined in Sect.

Overview of the variable space (Table

The antecedent precipitation index (API,
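The antecedent precipitation index is commonly computed as an exponentially decaying running sum of daily precipitation. A minimal sketch follows; the decay constant k = 0.9 is a typical textbook value, not necessarily the one used in this study:

```python
def antecedent_precipitation_index(daily_precip, k=0.9):
    """API_t = k * API_(t-1) + P_t: an exponentially decaying
    memory of past precipitation."""
    api = 0.0
    series = []
    for p in daily_precip:
        api = k * api + p
        series.append(api)
    return series
```

The decay mimics the gradual drainage of stored water, giving the model a crude proxy for subsurface storage without explicit groundwater physics.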

Autocorrelation of one of the generated signals compared to the autocorrelation of the temperature as taken from ERA5 (

A random variable with seasonal characteristics is added to the variable selection to analyse the effect of spurious correlation on the model. The random variable,
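One way to generate such a signal is to superimpose a random walk (Brownian motion) on an annual sine wave, so that it correlates with the seasons while carrying no information about the slope. The amplitude and step size below are arbitrary illustrative choices:

```python
import math
import random

def seasonal_random_variable(n_days, amplitude=1.0, step_sigma=0.1, seed=0):
    """Annual sinusoid plus a Gaussian random walk: seasonally
    correlated noise with no physical link to the landslide."""
    rng = random.Random(seed)
    walk = 0.0
    series = []
    for t in range(n_days):
        walk += rng.gauss(0.0, step_sigma)  # Brownian motion component
        season = amplitude * math.sin(2.0 * math.pi * t / 365.25)
        series.append(season + walk)
    return series
```

If the model assigns predictive weight to such a variable, that is a symptom of spurious correlation rather than physical insight.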

All variables are offset to become zero-mean and scaled by the standard deviation. Therefore, all input variables are on approximately equal scale and represented as deviations from their average condition. The normalization parameters, mean and standard deviation, should be kept fixed while new data are added to remain consistent with the scaling of the time series used during training. The data set is fed to the model as a time-stamped collection of daily observations, illustrated in Fig.
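The key point is that the normalization parameters are fitted once, on the training period, and then frozen for all later data. A sketch:

```python
import statistics

def fit_scaler(training_series):
    """Estimate mean and standard deviation on the training period only."""
    return statistics.fmean(training_series), statistics.stdev(training_series)

def apply_scaler(series, mean, std):
    """Scale any (new) data with the frozen training-period parameters."""
    return [(x - mean) / std for x in series]
```

Refitting the scaler on incoming data would silently shift the meaning of "average conditions" and break consistency with the trained model.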

Our model is a shallow neural network with only a single hidden layer

In total, for a network configuration with a single memory cell (

Four parameters are added per extra prediction day (two benchmarks, one bias and weight each). An extra memory cell requires
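These counts follow the textbook LSTM formulation (four gates, each with input weights, recurrent weights, and a bias) and standard dense outputs; the exact tally may differ slightly between implementations. A sketch of the bookkeeping:

```python
def lstm_params(n_inputs, n_cells):
    """4 gates * (input weights + recurrent weights + bias) per cell."""
    return 4 * (n_inputs * n_cells + n_cells * n_cells + n_cells)

def output_params(n_cells, n_benchmarks, n_lead_days):
    """One weight per memory cell plus one bias,
    per benchmark per lead day."""
    return n_benchmarks * n_lead_days * (n_cells + 1)
```

With a single memory cell, each extra prediction day adds 2 * (1 + 1) = 4 output parameters, consistent with the four parameters per extra prediction day stated above.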

An interpretation of the network is that the development of the slope state in the last 32 d is described by the LSTM node. The state is scaled, and otherwise matched to the individual benchmarks, by the output neurons. The 4 d are an extrapolation of the current state of the system; no prediction of the conditions on the slope is made.

The “mean squared error” was chosen as the loss function. This function, which quantifies the difference between the predicted and observed deformation, is minimized during training. The quality of the prediction is measured on the period not used for training. This metric ensures that the cumulative deformation over time is realistic, as errors are balanced between overestimation and underestimation. Therefore, the predictions will not show a bias towards acceleration or deceleration. The TensorFlow machine learning framework was chosen to implement the model

Simplified schematic of the model. From left to right: the hydro-meteorological conditions (

During training the model parameters are tuned such that the final model state best describes the deformation prediction. The model is optimized with the Adam optimizer

Training periods as supplied to the model. The data outside the training period are used for validation. Note that with the longest training period (4) there are very limited validation data left. The deformation pattern (Fig.

Due to temporal correlation, the training and validation data cannot be divided into random chunks or batches according to the “traditional” 30 %–70 % split
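Instead of random batching, the split has to respect time. The simplest variant is a single contiguous cut; the multiple predefined training periods used in this study generalize this idea:

```python
def chronological_split(samples, train_fraction=0.7):
    """Contiguous train/validation split that preserves temporal order,
    avoiding leakage through temporally correlated neighbouring days."""
    cut = int(len(samples) * train_fraction)
    return samples[:cut], samples[cut:]
```

A randomly shuffled split would place near-duplicate neighbouring days on both sides of the divide, making validation scores look deceptively good.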

The robustness of the model to the selection of the training data is assessed from the stability of the results when training over the subsequent periods (Fig.

To assess the impact of irrelevant data on the system, as well as the effect of overfitting, the additional, correlated random variable (

All possible combinations of the 11 input variables were tested on the model. With 11 variables this results in
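With 11 candidate variables there are 2^11 - 1 = 2047 non-empty subsets; multiplied by the other model variations (training periods, network configurations), this yields the total number of runs reported. A sketch of the enumeration:

```python
from itertools import combinations

variables = [f"V{i}" for i in range(1, 12)]  # the 11 candidate inputs
subsets = [c for r in range(1, len(variables) + 1)
           for c in combinations(variables, r)]
print(len(subsets))  # 2047 non-empty variable combinations
```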

The best solution out of all model runs, judged on the minimal mean squared error on validation, is based on a single LSTM node and only 4 of the 11 input variables available: precipitation from GPM (V2), soil moisture from SMAP (V5) and ERA5 (V7), and evaporation from GLEAM (V8), where the numbers refer to Table

The full nowcast is shown in Fig.

Result of the deformation nowcast, run of the full time frame of the available deformation time series. The shaded time span was used for training. Shown as thin lines are the subsequent, daily, nowcasts for benchmarks “D5_1” and “D_WS_1”. Per day, four deformation nowcasts are shown, with the start of each line being the day after the day the nowcast was issued. Note the warm-up time at the start, shown hatched and without predictions, that is required to initialize the moving average filter on the deformation data and fill the memory of the LSTM node. The final nowcast ends 4 d after the end of the reference measurements.

The cumulative deformation, as predicted by the consecutive, individual model runs, closely matches the observed deformation over the full 4 years of deformation measurements. The difference is calculated as “modelled

The modelling results are overall unsatisfactory: the acceleration and deceleration are typically not predicted in time, or at all. This is surprising in light of the success reported by others (Table

Due to the complexity of the operations applied to the input signal in the LSTM layer, it is not straightforward to analyse the contribution of the individual components to the final model outcome. As all model variations were tested (Sect.

Figure

Violin plots of the mean squared error for model variations with one to four variables, including the variable listed. For more than four variables the relative importance of the individual variables to the model quality becomes insignificant.

Models based only on

We believe the unsatisfactory performance of the model has three root causes: (i) the inability of the model to capture the complex dynamics of the system; (ii) the limited quantity of training data available to this type of problem; and (iii) the limited, noisy representation of the slope dynamics in the available remote sensing data. Most natural deep-seated landslides are characterized by a complex interplay of causal (antecedent) and triggering conditions: this is also true for the Vögelsberg landslide. However, we believe that it is exactly these challenges that we should aim to tackle with a machine learning model approach.

The possibilities for data-driven modelling are infinite: our model is only a single realization of the possible combinations of variables and operations. This raises three questions regarding the model selection: (i) how to match model and process, (ii) how to validate and quantify the quality of the nowcast, and (iii) how to tune the model implementation.

The major challenge for the model of a deep-seated landslide is the discrepancy between the sub-daily variations of the input (especially precipitation and snowmelt) and a delayed, daily output (accelerated deformation). Therefore, non-time-aware models show erratic behaviour as a consequence of sudden changes in conditions, such as snow cover and (extreme) precipitation, that in reality do not translate into immediate acceleration. Traditionally, the addition of groundwater physics, smoothing the hydro-meteorological signal, circumvents these peaks. However, the addition of groundwater physics requires knowledge of the geohydrology of the specific slope.

An LSTM node resembles a bucket model and was chosen to capture the delay between precipitation and deformation by modelling the buildup of water in the model. Our results showed that our model was unable to fully capture these hydro-meteorological dynamics. For reference, five alternative models were implemented (Table

List of reference models tested for comparison to

The

The

The

Relationship between the number of model parameters and the quality (mean squared error) of training and validation as extracted from the 147 984 model runs. The number of parameters is related to the number of input variables. For LSTM-based networks, for example, there are four parameters per input variable per LSTM memory cell required. Note the logarithmic scale on the

The performance of each model is shown for comparison in Fig.

For early warning systems, prediction of the onset of acceleration (Fig.

This leads to five desired properties for the nowcasting system: the system should (i) predict onset of acceleration, (ii) predict the maximum deformation velocity, (iii) predict 4 or more days ahead that deformation will begin, (iv) predict when the slope is “stable” again, and (v) quantify the certainty in the prediction. Unlike most estimation problems, not only is the quantity of the predicted deformation important to the user but so is its timing. An acceleration phase predicted too early or slightly late may still trigger the desired alertness and still serves a purpose, even though the predicted amplitude on that day is wrong.

A “standard” error metric, e.g. the mean squared error, is sensitive to the mean as a local optimum but is unbiased and therefore stable in the long term. As an alternative, such an error metric could be evaluated only at the “peaks and valleys”, i.e. the extremes of the deformation rate, emphasizing extremes and disregarding their onset. With this method there are fewer samples, only the extremes, but they are less correlated and include the amplitude of the event. Although this captures the timeliness of the extremes, it disregards the timing of the onset and the pattern of the acceleration phase. Moreover, this approach requires information on the peaks and valleys and requires that those are correctly identified beforehand.
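A sketch of such an extremes-only metric, using a naive three-point extremum test (a real implementation would need smoothing and prominence thresholds to pick out genuine events):

```python
def local_extrema(series):
    """Indices of local maxima and minima (naive three-point test)."""
    return [i for i in range(1, len(series) - 1)
            if (series[i] - series[i - 1]) * (series[i + 1] - series[i]) < 0]

def mse_at_extrema(observed, predicted):
    """Mean squared error evaluated only at the peaks and valleys
    of the observed deformation rate."""
    idx = local_extrema(observed)
    return sum((observed[i] - predicted[i]) ** 2 for i in idx) / len(idx)
```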

Due to the lack of information on the extremes of the deformation, we chose to use the mean squared error as the error metric. This metric ensured a long-term stability and connected stability of the deformation nowcast, as demonstrated by the cumulative deformation (Fig.

Accelerations of the Vögelsberg landslide are known to be triggered by precipitation in summer/autumn and by snowmelt in winter/spring

Additional variables may be derived from the direct observations. In our model, the antecedent precipitation index (API) is such a derived observation and was chosen to enhance the information content of the hydro-meteorological observations to the model (i.e. provide higher predictive power to the model). This “feature generation” is an important component of more traditional machine learning techniques, where the system is not expected to derive those relations autonomously. Derived, additional features were extensively used by

Given the limited availability of deformation measurements, most of the data are required to train the model. Moreover, the variation in conditions is limited to the variation in those 5 years. It is therefore likely that the model will encounter conditions in operation that it had not encountered before. Due to the continuous nature of the model proposed, and the alternatives discussed in Sect.

For simple combinations of variables, i.e. of a single or a few variables, the response may be tested empirically. Note that the full 32 d history has to be included in this simulation. However, the response may not be so straightforward: a warm summer day combined with hail from a thunderstorm may trigger an unrealistic “path” in the model. Therefore, for more variables, the number of potential combinations increases drastically and may no longer be feasible to simulate.

Predictions of extraordinary responses are not necessarily undesirable; an unbounded acceleration, i.e. landslide collapse, should remain predictable. However, the model would preferably warn of a potentially unstable state of the nowcasting system. This could be achieved by an ensemble of models, based either on the same model or on model variations. Models with different time series lengths, especially, may help pinpoint the source of the discrepancy.
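A crude indicator of such an unstable state is the disagreement within an ensemble: members that were trained identically except for some variation should agree under familiar conditions. A sketch (the spread threshold is an arbitrary assumption):

```python
import statistics

def ensemble_warning(member_predictions, spread_limit):
    """Flag a nowcast when ensemble members disagree by more than
    spread_limit, hinting that the inputs lie outside the trained regime."""
    mean = statistics.fmean(member_predictions)
    spread = statistics.pstdev(member_predictions)
    return mean, spread > spread_limit
```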

Our model of Vögelsberg is based on two benchmarks that are on two distinct sections of the slope (Fig.

As an alternative, a location index could be specified, for example as a binary indicator of the landslide section or as continuous signals such as the distance to the centre. Instead of two or more predefined outputs from the same model, a single model may handle different benchmarks differentiated by additional input variables encoding their position within the system. However, given the shallow model design, care should be taken to design the model such that this index works as a scaled multiplier of the hydro-meteorological conditions.

Over the full time span of the measurements, four distinct acceleration periods can be identified (Fig.

Three acceleration events (#1, #2, & #3) at the Vögelsberg landslide, as identified by

Given that there is more than a single degree of freedom in the model, without prior knowledge of the process, there is no predictive power in a single acceleration event. Hence, multiple events are required to properly train complex models in the absence of constraints on the process and model. As a consequence, due to the limited variety of events in the training data, the predictive power of the nowcasting system may be reduced due to overfitting on the characteristics of these events only.

To test the effect of the training length on the models, the models were trained on 9 of the 10 training periods identified in Fig.

Length of the training region, aggregated to (approximate) years, compared to the quality of fit of the model, measured as the mean squared error. An increase in model fit is visible with the increase in training length; however, most models are outperformed by the models that use the mean deformation rate (pink) of 1, 2, or 3 years respectively as the predictor.

Essential to the success of the nowcast are the properties of the signal to be predicted. The effect of noise in the deformation signal on the modelling is twofold: first, random perturbation complicates the training by masking the best solution, and, second, this leads to an underestimation of the final quality of the model during validation. Hence, the noise in the deformation signal defines the upper limit for the quality of the deformation estimate. Up-slope deformation, present in the raw deformation time series, was considered to be unrealistic and therefore noise by definition. Under the assumption that the noise is unbiased, the noise will be reduced in averaged samples. Therefore, a moving average filter was applied to the deformation time series with increasing length until no negative deformation remained.

The model was developed with the requirements for an operational system in mind, restricting the system to only use historic observations at any point in the process. The inclusion of future samples would require the system to react to future conditions that have not (yet) been observed on the slope: any filtering, such as smoothing, should not drag future observations back in time. Therefore, the moving average filter cannot be centred, and averaging is applied to the preceding 31 d rather than
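A causal (trailing) moving average uses only the current and preceding days, so no future observation leaks into the smoothed value. A sketch, with the 32 d window assumed from the integration length used here:

```python
def trailing_moving_average(series, window=32):
    """Average over the current day and the preceding window - 1 days only,
    keeping the filter causal (no look-ahead)."""
    smoothed = []
    for i in range(len(series)):
        chunk = series[max(0, i - window + 1):i + 1]
        smoothed.append(sum(chunk) / len(chunk))
    return smoothed
```

The price of causality is lag: the smoothed signal trails the raw deformation, which is one reason accelerations appear spread over adjacent days.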

The variation in the deformation signal at Vögelsberg is relatively small, in deviation from a long-term trend. Due to the millimetre-scale measurement uncertainty in the deformation measurements, the deformation signal is dominated by noise on the short timescale of days to weeks, and the relevance of a deformation prediction on a daily basis is doubtful. Furthermore, due to the inertia of the landslide body, as well as smoothing of the deformation measurements, accelerations and decelerations are spread over adjacent days (Fig.

The variable selection in Table

The model was designed under the assumption that data from all sources are continuous and readily available to the system. Out of the variable selection (Table

For a successful integration of satellite observations in an operational nowcasting system, a high, sub-weekly, update frequency is required. However, most remote sensing products were available at a delay of days to weeks, still too late for integration in a nowcasting system. As a consequence, the variable selection in Table

Satellite radar interferometry (InSAR) is a proven method for landslide deformation monitoring

Temporal continuity of input data is required to provide the model with consistent samples of the slope conditions. Short periods of missing data, e.g. days, may be forward filled but will reduce the data quality for the full integration length (i.e. 32 d). Observations received late may still be updated in later iterations to mitigate this effect. However, how should one handle missing data: a single day, a whole season, or the termination of a data source, for example due to satellite failure? As a fallback, one could model and train systems with different variable combinations in advance and nowcast based on the best model available for the variable combination available in the 32 d prior.
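Forward filling of short gaps can be sketched as follows, with None marking a missing day:

```python
def forward_fill(series):
    """Carry the last valid observation forward over missing (None) days."""
    filled, last = [], None
    for value in series:
        if value is not None:
            last = value
        filled.append(last)
    return filled
```

Note that values before the first valid observation remain missing, and long filled stretches degrade every 32 d window that overlaps them.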

The LSTM nodes may be implemented in a stateful fashion, where the state of the hidden nodes is retained after each prediction. Such implementation is more computationally efficient, as each subsequent nowcast will require only a single pass over the most recent data. In such an implementation, however, discontinuous or erroneous variables may have a lasting effect on the model memory. Therefore, the system was based on continuous re-initialization with a 32 d observation history instead. The computational drawback is limited, given the small scale of the model, and is acceptable in the light of the greater operational flexibility.

Indirect observations of the hydro-meteorological cycle may still prove valuable to the nowcasting system. The temperature, for example, may serve as a proxy indicator for evaporation. Temperature is related to the seasons in most climates, and therefore there will be a correlation with the season (day of year) as well. However, extra care should be taken when including variables that describe the typical/average condition, such as the season. Such variables do not capture the current dynamics of the system, as they only describe average conditions, and may wrongly constrain the system in extraordinary circumstances. The Vögelsberg landslide is known to be sensitive to changes in the groundwater level, irrespective of the season.

The success of a data-driven model lies in the (expert) selection of the input data. Unrelated variables make the system prone to spurious correlations, especially with limited training data compared to the degrees of freedom in the model or if the method is unable to discard or otherwise ignore sources with low information content. Furthermore, unrelated input variables, or even just noise, should not yield sensible results: “garbage in, garbage out”.

The effect of noise in the conditions was tested by the inclusion of a Brownian motion signal (see Sect.

Parameters on geology and topography were left out of the selection and assumed static. However, land cover changes were not included either. In the case of Vögelsberg, it was known that few changes were to be expected over the time frame of the measurements available. An alternative to the inclusion of such variables is to frequently re-train the model on a recent section of the time series only, to adapt to changes. However, although the system will adapt to changing dynamics, re-learning will mask the drivers behind long-term effects and/or adapt too swiftly, for example to seasonal differences, reducing the overall model quality. Land cover changes will not be uniform across slopes, will act on different timescales (e.g. neglected pasture fields versus forest fires), and may not be trivial to capture by remote sensing. Moreover, especially in regional studies, the land cover and land cover change may not be comparable between slopes.

To limit the number of variables, only the observation or modelling result closest to the Vögelsberg landslide was used from regional products. However, as

Our results show that deformation nowcasting is an open challenge. Although well monitored, the Vögelsberg landslide is a complex system and therefore not a straightforward test case. Our results are inconclusive regarding whether our method could work on other deep-seated landslides. More direct dynamics and/or stronger and more frequent acceleration periods would help constrain the system. The inclusion of field data, such as groundwater level

For short time series, machine learning methods are known to be outperformed by basic statistical methods

Notable is the recent publication of the first version of the European Ground Motion Service data set

Although Vögelsberg is a well-monitored landslide, the number of recorded acceleration events within the available 4 years of daily deformation measurements is limited compared to other machine learning problems. A simple, time-series-capable model with limited parameters was required; therefore, we designed an LSTM-based machine learning algorithm to nowcast the deformation of the Vögelsberg deep-seated landslide from the conditions on the slope. The algorithm was trained on a maximum of 3 years of deformation observations and satellite observations of relevant hydro-meteorological conditions at the slope. The best model configuration and variable combination were determined by cross-validation over 147 984 model variations.
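The size of such a cross-validation grid is simply the product of the number of options per dimension. The sketch below illustrates the principle with hypothetical dimensions and values (the actual 147 984 variations in this study span different, and more, dimensions):

```python
import itertools

# Hypothetical hyperparameter grid; names and values are
# illustrative only, not the study's actual search space.
grid = {
    "lstm_units":    [8, 16, 32],
    "window_days":   [30, 60, 90],
    "learning_rate": [1e-2, 1e-3],
    "input_set":     ["precip", "precip+snow", "precip+snow+temp"],
}

# Every combination of one option per dimension.
configs = list(itertools.product(*grid.values()))
# Grid size = 3 * 3 * 2 * 3 = 54 model variations here.
```

Each configuration is then trained and scored on held-out folds, and the best-scoring combination is retained.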

Although rooted in the landslide dynamics, even our best model was incapable of capturing the varied responses of the Vögelsberg landslide and of convincingly predicting the deformation rate 4 d ahead. In particular, the four acceleration events were not predicted in time, although the mean squared error successfully constrained the average predicted deformation rate to that of the training time series. The Vögelsberg landslide showed versatile dynamics, and the full range of slope responses to the hydro-meteorological conditions was not present in the available data. Given the limited number of observed acceleration events, the slope processes were therefore too complex to model the landslide deformation from satellite surface observations, and the machine learning model was incapable of “understanding” the relation between conditions and deformation.

Deformation nowcasting will be a necessity for regional or even continental landslide monitoring and early warning systems. Satellite remote sensing has the potential to provide longer time series over wide areas. This leads us to the general recommendation for the application of machine learning to reactivating, deep-seated landslides: improve data quality and lengthen the deformation time series. The ideal landslide for further development of deformation nowcasting is highly dynamic (many events to train on), has a limited delay between forcing conditions and deformation, is well monitored, and does not undergo catastrophic failure.

See Fig.

Correlation between variables.

See Fig.

Smoothed deformation signal, shown for increasing lengths (in days) of the moving average filter. The filter only includes historic observations and is not “centred”, to match the properties of an operational system. The increasing time lag is visible for the successive filter lengths as a rightward shift of the velocity peaks. For the initial observations, a filter length of at least half the final filter length was accepted.
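Such a causal (trailing) moving average can be sketched as follows; the warm-up rule here (accepting windows of at least half the nominal length) follows the caption, while the function name and NaN handling are our own assumptions:

```python
import numpy as np

def causal_moving_average(x, window):
    """Trailing moving average using only historic observations.

    For the initial observations, where fewer than `window` samples
    exist, a shortened window of at least `window // 2` samples is
    accepted; earlier outputs are NaN.
    """
    x = np.asarray(x, dtype=float)
    out = np.full(x.shape, np.nan)
    min_len = max(1, window // 2)
    for t in range(len(x)):
        seg = x[max(0, t - window + 1):t + 1]  # historic samples only
        if len(seg) >= min_len:
            out[t] = seg.mean()
    return out
```

Because the window extends only backwards in time, longer windows shift smoothed peaks later, which is the lag visible in the figure.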

See Table

Examples of different integration methods, linking hydro-meteorological conditions to deformation time series, and associated case studies. Most studies are at deep-seated landslides that did not undergo catastrophic collapse. Studies with and without reservoir level as observation are grouped together. Updated after

No specific algorithms or code were developed for this study. The scripts used for the different runs consisted of a series of standard building blocks available within TensorFlow, with their settings as mentioned in the text (

The variables

ALvN: methodology, software, formal analysis, writing – original draft. TAB: conceptualization, writing – review & editing, supervision. TZ: resources, writing – review & editing. JP: resources, writing – review & editing. RCL: writing – review & editing, supervision.

The contact author has declared that none of the authors has any competing interests.

Publisher’s note: Copernicus Publications remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

This article is part of the special issue “Advances in machine learning for natural hazards risk assessment”. It is not associated with a conference.

This research has been supported by the OPERANDUM (OPEn-air laboRAtories for Nature baseD solUtions to Manage hydro-meteo risks) project, which is funded by the European Union's Horizon 2020 Framework Programme for research and innovation under grant no. 776848.

This paper was edited by Sabine Loos and reviewed by Katy Burrows and one anonymous referee.