Rockslides are a major hazard in mountainous regions. In formerly glaciated areas, the disposition mainly arises from oversteepened topography and
decreases through time. However, little is known about this decrease and thus about the present-day hazard of huge, potentially catastrophic
rockslides. This paper presents a new theoretical concept that combines the decrease in disposition with the power-law distribution of rockslide
volumes found in several studies. The concept starts from a given initial set of potential events, which are randomly triggered through time at a
probability that depends on event size. The developed theoretical framework is applied to paraglacial rockslides in the European Alps, where available data allow for constraining the parameters reasonably well. The results suggest that the probability of triggering increases roughly with the cube root of the volume. For small rockslides up to 1000

Rockslides are a ubiquitous hazard in mountainous regions. The largest rockslide in the European Alps since 1900 took place in 1963 at the Vaiont
reservoir. It involved a volume of about 0.27

In turn, two huge rockslides with volumes of several cubic kilometers have been identified and dated. These are the Flims rockslide with a deposited
volume of about 10

Although an immediate effect of deglaciation can be excluded for the Flims and Köfels rockslides, the former glaciation of the valleys plays a
central part in rockslide disposition. In the context of paraglacial rock-slope failure,

As a main limitation, however, the estimate

Analyzing the statistical distribution of landslide sizes became popular a few years after the concept of exhaustion was proposed, presumably
pushed forward by the comprehensive analysis of several thousand landslides in Taiwan by

Several models addressing the power-law distribution of landslides have been developed so far

Now the question arises of how the idea of paraglacial exhaustion can be reconciled with the power-law distribution of rockslide sizes, perhaps in
combination with SOC. While size distributions of rockfalls and rockslides were addressed in several studies during the previous decade

In this paper, a theoretical framework for event-size-dependent exhaustion is developed, which means that the decay constant

Let us start with the Drossel–Schwabl forest-fire model (DS-FFM in the following) as an example. While several models in a similar spirit were
developed soon after the idea of SOC became popular, the version proposed by

The DS-FFM is a stochastic cellular automaton model that is usually considered on a two-dimensional square lattice with periodic boundary
conditions. Each site can be either empty or occupied by a tree. In each time step, a given number
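As a minimal illustration, the model can be sketched in a few lines of Python. This is a simplified sketch, not the code from the repository; details such as the lattice size, the number of planted trees per step, and exactly one lightning strike per step are assumptions of this sketch:

```python
import random
from collections import deque

def burn_cluster(trees, start, size):
    # Flood fill: remove the connected cluster of trees (4-neighborhood,
    # periodic boundaries) containing `start` and return its size.
    trees.remove(start)
    queue = deque([start])
    burned = 1
    while queue:
        i, j = queue.popleft()
        for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nb = ((i + di) % size, (j + dj) % size)
            if nb in trees:
                trees.remove(nb)
                queue.append(nb)
                burned += 1
    return burned

def step(trees, size, n_grow):
    # One time step: plant trees on n_grow random sites, then strike one
    # random site; if it carries a tree, the whole cluster burns down.
    for _ in range(n_grow):
        trees.add((random.randrange(size), random.randrange(size)))
    target = (random.randrange(size), random.randrange(size))
    return burn_cluster(trees, target, size) if target in trees else 0

random.seed(0)
size, trees = 64, set()
fires = [step(trees, size, 50) for _ in range(2000)]
```

Since every burned tree must have been planted before, the number of burned plus standing trees can never exceed the number of planting attempts.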

Regardless of the initial condition, the DS-FFM self-organizes towards a quasi-steady state in which, on average, as many trees are burned as are
planted. If the growth rate

Let us now assume that the growth of new trees ceases suddenly at some time in the quasi-steady state so that the available clusters of trees are
burned successively. Figure

Burned clusters of trees without regrowth on a 256

Owing to the preference for large fires, the DS-FFM in a phase without regrowth is an example of event-size-dependent exhaustion. Figure

Frequency of the fires in the DS-FFM during phases without growing trees. All distributions were obtained from simulations on a 65 536

The distribution of the fires that take place during the first 10 000 steps after growth has ceased is almost identical to the distribution in the
quasi-steady state. A small deficit is visible only at the tail. So the overall consumption of clusters during the first 10 000 steps is negligible,
except for the largest clusters. The tendency for large clusters to be consumed more rapidly than small clusters becomes more pronounced over longer
time spans. In the time interval from

As a central result, the power-law distribution of the fires is consumed through time from the tail. In particular, the exponent (slope in the double-logarithmic plot) stays the same in principle, while only the range of the power law becomes shorter. Finally, however, the decay also affects the frequency of the smallest fires.
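The same tail-first decay can be reproduced without any lattice: take a power-law-distributed set of object sizes and let each object be triggered at a rate proportional to its size. The following toy sketch (with arbitrary parameters, not tied to the model above) makes this explicit:

```python
import random

random.seed(1)
n = 20000
# sizes following approximately P(S >= s) ~ 1/s (Pareto via inverse transform)
sizes = [int(1.0 / (1.0 - random.random())) for _ in range(n)]

# size-proportional hazard: an object of size s is triggered at a random
# time drawn from an exponential distribution with rate s
times = [random.expovariate(s) for s in sizes]

# look at the moment when 10 % of the objects have been consumed
horizon = sorted(times)[n // 10]
removed = [s for s, t in zip(sizes, times) if t <= horizon]
survivors = [s for s, t in zip(sizes, times) if t > horizon]
```

Since the hazard grows with size, the consumed objects are strongly size-biased: their mean size exceeds that of the survivors, i.e., the distribution is eaten up from the tail while the small-size end is barely touched.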

Applied to the topography of the Alps, the Himalayas, and the southern Rocky Mountains, the model reproduced the observed power-law distribution of
rockslide volumes reasonably well. Differences between the considered mountain ranges were found concerning the transition from a power law to an
exponential distribution at large volumes. However, the model has not been applied widely since then, except for the study on landslide dams by

Figure

Simulated rockslide sites for a part of Switzerland. In addition to the outlines of the unstable areas, the respective triggering points are also shown for events with

As the main point to be illustrated, different triggering points result in very similar events at some locations. At some other locations, events arising from different triggers are overlapping but differ in size. Both effects become stronger with increasing event size. This means that larger potential rockslides are more likely to be triggered than smaller potential rockslides in the model.

Qualitatively, this behavior is similar to that of the DS-FFM but more complex. Since randomness is not limited to triggering but is also part of the propagation of instability, even rockslides of different sizes may be triggered from the same point. In contrast to the DS-FFM, finding a quantitative relation between event size and probability of being triggered is not trivial for the rockslide model.

It is, however, apparent that the triggering points are not distributed uniformly over the area but are concentrated around the lower part of the outline. In this model, the initial instability preferentially occurs at very steep sites and then predominantly propagates uphill since the uphill sites become steeper due to the removal of material. Accordingly, the increase in triggering probability with area (and thus also with volume) should be weaker than linear.

Let us assume that the process of exhaustion starts at

This leads to

Let us further assume that the objects initially follow a power-law (Pareto) distribution, which is most conveniently written in the cumulative form

Since

Computing the cumulative frequency

Substituting

The negative rate of change in

The respective cumulative frequency of the events per unit time,
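The behavior of the framework can be illustrated numerically. The sketch below uses assumed notation and arbitrary parameter values (an initial Pareto density n0(V) proportional to V^-(b+1) and a triggering rate lam(V) proportional to V^alpha, with alpha = 1/3 as suggested by the calibration); it only reproduces the qualitative picture, not the paper's calibrated curves:

```python
import numpy as np

b, alpha, lam0 = 1.0, 1.0 / 3.0, 1e-3   # assumed values, for illustration only
t = 10.0

V = np.logspace(0, 8, 400)
n0 = b * V ** -(b + 1)                  # initial Pareto density (V_min = 1)
lam = lam0 * V**alpha                   # size-dependent triggering rate
f = lam * n0 * np.exp(-lam * t)         # frequency density of events at time t

# for lam(V) * t << 1 the density follows a power law V**(alpha - b - 1),
# while the exponential factor consumes the distribution from the tail
small = V < 1e2
slope = np.polyfit(np.log(V[small]), np.log(f[small]), 1)[0]
```

With alpha > 0, large events decay faster; the power-law part keeps its exponent while its range shrinks, just as in the forest-fire experiment.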

As an example, Fig.

Cumulative frequency and frequency density of the events per unit time (

Owing to this property,

For the cumulative frequencies, the deviations from the respective power law extend further towards smaller sizes than for the frequency densities. The
stronger deviation arises from the dependence of the cumulative frequency at size

Applying the framework developed in Sect.

If the shape of the detached body were independent of its volume, areas would be proportional to

Keeping the exponent

Since data on the frequency of rockslides are sparse and the completeness of inventories is often an issue, validating the exhaustion model and
constraining its parameters is challenging. For the European Alps as a whole, a combination of historical and prehistorical data is used in the
following.

A total of 18 rockslides with volumes between 0.001 and 0.01

Seven rockslides with volumes between 0.01 and 0.1

Two rockslides with volumes greater than 0.1

At

At

At

At

Anthropogenically triggered rockslides were not taken into account in these data.

Constraints 4–7 differ from constraints 1–3 since they are not inventories over a given time span but refer to the largest or second-largest available
volumes at a given time. The respective statistical distributions are described by rank-ordering statistics
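How such rank-order information can be evaluated is sketched below. If the number of surviving potential events above a volume V is treated as Poissonian with expected value N(>=V, t) — an assumption of this sketch — the largest one exceeds V with probability 1 - exp(-N(>=V, t)). All parameter values are arbitrary and only illustrate the mechanics:

```python
import numpy as np

b, alpha, lam0, Ntot = 1.0, 1.0 / 3.0, 1e-3, 1000.0   # assumed values
t = 100.0

V = np.logspace(0, 10, 2000)
# density of surviving potential events at time t
surv = b * Ntot * V ** -(b + 1) * np.exp(-lam0 * V**alpha * t)

# expected number of surviving potential events above V (tail integral)
dN = 0.5 * (surv[1:] + surv[:-1]) * np.diff(V)
N_tail = np.concatenate([np.cumsum(dN[::-1])[::-1], [0.0]])

# probability that the largest surviving event exceeds V
p_exceed = 1.0 - np.exp(-N_tail)

# median of the largest surviving volume: N_tail = ln 2
V_med = V[np.searchsorted(-N_tail, -np.log(2.0))]
```

The same construction yields the full cumulative probability of the largest event, which is the kind of statement made by rank-ordering statistics.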

However, the seven constraints defined above provide a very limited basis for constraining the five parameters

While

Technically, all computations were performed in terms of

Likelihood as a function of

Figure

The highest likelihood is even achieved for

Qualitatively, however, the observed increase in likelihood towards smaller exponents

The data set used for calibration is not only quite small but also potentially incomplete. For the inventories used for constraints 1 and 2,
incompleteness should not be a serious problem. The inventory used for the third constraint is small and thus does not contribute much information, so
an additional event would not change much. In turn, the assumptions on the largest or second-largest potential rockslide at a given time are more
critical. As an example, the Kandersteg rockslide was assumed to be much older

In general, constraints 4–7 based on rank ordering may be affected by the discovery of unknown huge rockslides, as well as by new estimates of ages or volumes of rockslides that are already known. Perhaps even more important, rockslides larger than those in constraints 4–7 may take place in the future.

To illustrate the effect of potential incompleteness, it is assumed that the Kandersteg rockslide is not the largest potential event at

As shown in Fig.

The third scenario (Fig.

In the following, the rockslide size distributions corresponding to the five dots in all three scenarios (Fig.

Let us now come back to the question about the size of the largest rockslide to be expected in the future in the Alps, i.e., for the largest potential
rockslide volume at present (2020 CE). Let

Figure

Cumulative probability of the largest rockslide at present (2020 CE). Different line types refer to the three considered scenarios.

As already expected from Fig.

If we go back to the time of the Kandersteg rockslide (

Cumulative probability of the largest rockslide at the time of the Kandersteg rockslide (3210 BP). Different line types refer to the three considered scenarios.

As a third source of uncertainty, the statistical nature of the prediction must be taken into account. Depending on

In all scenarios, the probability that a rockslide with

In turn, the probability that there will be no rockslide with

Figure

Cumulative rockslide frequency at present (2020 CE). Different line types refer to the three considered scenarios.

The 100-year event (
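A return-period statement of this kind can be sketched numerically: the T-year event is the volume at which the cumulative event frequency drops to 1/T. The sketch below reuses the assumed notation and arbitrary parameter values from above, not the calibrated values of this study:

```python
import numpy as np

b, alpha, lam0, Ntot = 1.0, 1.0 / 3.0, 1e-3, 1e5   # assumed values
t = 100.0

V = np.logspace(0, 10, 2000)
# rate density of events: triggering rate times surviving potential events
rate = lam0 * V**alpha * b * Ntot * V ** -(b + 1) * np.exp(-lam0 * V**alpha * t)

# cumulative event frequency above V (events per unit time)
dF = 0.5 * (rate[1:] + rate[:-1]) * np.diff(V)
F = np.concatenate([np.cumsum(dF[::-1])[::-1], [0.0]])

# the T-year event: volume at which the frequency falls to 1/T
T = 100.0
V_T = V[np.searchsorted(-F, -1.0 / T)]
```

Because the whole frequency curve decreases through time, the volume of the T-year event obtained this way also decreases as exhaustion proceeds.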

Let us now return to the question of what we can learn about the process of exhaustion. Concerning this process, the exponent

This knowledge may also be useful for validating or refuting models. So far, reproducing an exponent in the range

However, the simulation shown in Sect.

As a fundamental property of the process of exhaustion, Fig.

For

The

From a geological point of view, the time

Cumulative frequency of potential rockslide sites with

However,

In view of this result, the deglaciation of the major valleys can be ruled out not only as an immediate trigger of the huge paraglacial landslides in the
Alps but also as the starting point of the process of exhaustion. Instead, the starting point may be the massive degradation of permafrost caused by rapid
warming in the early Holocene. For the Köfels rockslide, the potential relation to the degradation of permafrost was discussed by

However, the question of the actual trigger of the respective rockslides remains open. In principle, it is even open whether a unique trigger is
needed at all. Large instabilities may also develop slowly

In this study, a theoretical concept for event-size-dependent exhaustion was developed. The process starts from a given set of potential events, which
are randomly triggered through time. In contrast to a previous approach

The concept was applied to paraglacial rockslides in the European Alps. Since available inventories cover only a quite short time span and older data are limited to a few huge rockslides, constraining the parameters involves a large uncertainty. Nevertheless, some fundamental results could be obtained.

Assuming that the probability of triggering is related to the volume

The concept of event-size-dependent exhaustion predicts an exponential decrease in rockslide frequency through time with a decay constant depending
on

For the largest rockslide possible at the present time, different considered scenarios predict a median volume of 0.5 to 1

In this section, a maximum likelihood approach that combines data of the two types discussed in Sect.

The first type of data (constraints 1–3 in Sect.

Then the respective factor in the likelihood is the probability that the actual number
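For instance, if the model predicts on average mu events in an inventory's time span and k events were observed, the corresponding factor is the Poisson probability. The sketch below is generic; the value of mu is arbitrary, and its actual dependence on the model parameters is omitted:

```python
from math import exp, factorial

def poisson_factor(k, mu):
    """Probability of observing exactly k events when mu are expected."""
    return exp(-mu) * mu**k / factorial(k)

# e.g., 18 observed events (as in constraint 1) under a model
# predicting mu = 15 events on average (arbitrary value)
factor = poisson_factor(18, 15.0)
```

In practice, such factors are multiplied in log space to avoid numerical underflow when forming the total likelihood.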

The second type of data (constraints 4–7 in Sect.

In the limit

If

Finally, the total likelihood is the product of the seven factors according to Eqs. (

All codes are available in a Zenodo repository at

The author has declared that there are no competing interests.

Publisher's note: Copernicus Publications remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

The author would like to thank the two anonymous reviewers for their constructive comments and Oded Katz for the editorial handling.

This open-access publication was funded by the University of Freiburg.

This paper was edited by Oded Katz and reviewed by two anonymous referees.