We develop a network-based model of a catchment basin that incorporates the possibility of small-scale, in-channel, leaky barriers as flood attenuation features on each of the edges of the network. The model can be used to understand effective risk reduction strategies considering the whole-system performance; here we focus on identifying network dam placements that promote effective dynamic utilisation of storage and that also reduce the risk of breach or cascade failure of dams during high flows. We first demonstrate the model using idealised networks and explore the risk of cascade failure using probabilistic barrier-fragility assumptions. The investigation highlights the need for robust design of nature-based measures, to avoid inadvertent exposure of communities to flood risk, and we conclude that building the leaky barriers on the upstream tributaries is generally less risky than building on the main trunk, although this may depend on the network structure specific to the catchment under study. The efficient scheme permits rapid assessment of the whole-system performance of dams placed in different locations in real networks, demonstrated in application to a real system of leaky barriers built in Penny Gill, a stream in the West Cumbria region of Britain.
The concept of “green infrastructure” is embedded within environmental policy in Europe (European Commission, 2007, 2013a, b; EEA, 2015) and the UK (Defra, 2019) as a strategic approach involving the design and management of networks of natural and semi-natural environmental features to deliver a wide range of ecosystem services. Echoing this approach, projects around the world have been blending natural and engineering approaches to deliver multiple social and environmental benefits (WWF, 2016; Bridges et al., 2018). In Flood and Coastal Risk Management (FCRM) there has been a growing interest in so-called “nature-based” measures, including small-scale, distributed storage features, tree planting and soil structure improvement to prevent fast overland flow. These measures have collectively become known as natural flood management (NFM) in the UK (see Dadson et al., 2017, and Lane, 2017), or Working With Natural Processes (WWNP) after the Pitt Review of the UK 2007 summer floods (Pitt, 2008), a term adopted in the recent UK Evidence Directory (Burgess-Gamble et al., 2017). Internationally they have also been termed “nature-based approaches” or “engineering with nature” (Bridges et al., 2018).
One such nature-based measure is to encourage in-channel flood attenuation (e.g. see Metcalfe et al., 2017), using small dams or barriers, usually made from wood (Fig. 1). These barriers, which are often deliberately built to be permeable (and sometimes called “leaky barriers”), allow low flows to pass under or through but hold back high flows, providing temporary water storage analogous to beaver dams. It is hoped that a large collection of such features deployed in a catchment may hold back enough floodwater (in-channel or on the floodplain) to mitigate flood risk downstream (Fig. 2a). In the UK, use of leaky barriers has been incentivised under the current environmental stewardship grants across England and Wales (UK Government, 2017). However, whilst the effectiveness of systems of runoff attenuation features and leaky barriers in terms of peak flow reduction has been investigated recently (e.g. Metcalfe et al., 2017; Addy and Wilkinson, 2019), these studies do not consider performance failure, and there remains much trial-and-error installation of different designs which could be improved upon for more efficient risk-reduction strategies at the large scale.
Leaky barriers in sequence in Penny Gill, Cumbria (Barry Hankin).
Schematic
There have been many attempts at representing the effects of leaky barriers
on flow, with methods ranging from increasing roughness in 1D models to full
3D representation, but relatively few have been able to test the accuracy of
the physical representation (see Addy and Wilkinson, 2019). The NERC
project, Q-NFM (Lancaster Environment Centre, 2017), has developed a set of
small, accurately monitored “micro-catchments” in Cumbria to attempt to
quantify the effect of different nature-based interventions. The Penny Gill
micro-catchment drains to the small at-risk community of Flimby on the west coast of Cumbria and is designated at risk because of the interaction of the stream with infrastructure downstream of the test site. In this case the capability to attenuate the peak flows for this small sub-catchment is of particular interest.
Significant research questions remain about whether “many small interventions (each creating local benefits) [will] combine to create large benefits at large scale” (Dadson et al., 2017) and whether the lack of demonstrable effect at large scale is because noticeable flood mitigation could not be achieved in a large catchment, or because a sufficiently large-scale set of interventions has not yet been implemented. Meanwhile, in the UK at least, the government's approach is to see working with natural processes as complementary to conventional, engineered flood risk management measures. In England, this is reflected in the latest long-term investment planning scenarios published by the Environment Agency (2019), whilst noting uncertainty about the effectiveness of NFM to manage large floods and large catchments.
If this complementarity is to be realised in practice, we believe there is a pressing need to integrate NFM more tightly within the cost, benefit and risk assessment frameworks that apply to “conventional” flood management. This means that we want to understand NFM features as systems of assets and to assess those systems within a risk-based analysis that considers the whole-system performance in terms of risk reduction. A risk-based analysis of NFM asset systems should take account of both the reliability of the assets and their performance as a whole system under different plausible hazard or loading scenarios. One vital lesson from conventional flood management is that even when flood mitigation measures are in place, the residual risk cannot be ignored.
Some initial work to test the effectiveness of catchment-wide NFM under a
range of spatially distributed extreme rainfalls has been reported by Hankin
et al. (2017a), but without consideration of the reliability of the
underlying NFM assets. Here, we focus instead on the resilience of a network
of NFM features as an asset system. To do this, we develop a simple
network-based model of a river catchment that incorporates the possibility
of leaky barriers being installed on each edge of the network, similar to
the approach taken by Metcalfe et al. (2017). We wish to understand the
impact of different spatial configurations of the leaky barriers, taking
into consideration three possible performance issues:

1. underutilisation of dynamic storage (see Metcalfe et al., 2018), i.e. redundancy in the network of leaky barriers that could be regarded as an inefficient use of resources;
2. undesired synchronisation of flood peaks (see Pattison et al., 2014), where measures intended to slow the flow could result in flood peaks being increased under some scenarios;
3. structural failure and cascade failure of barriers.
In Sect. 2 we develop a mathematical drainage network model and show how leaky barriers can be incorporated in a form that is simple enough to enable solution of the resulting system of equations, but sufficiently realistic to describe key hydraulic modes of behaviour. We then apply the equations in Sect. 3 to study the performance of idealised one- and two-dimensional stream networks subjected to single-peaked and multi-peaked flood events, including the potential for failure of individual or multiple assets (quantified in terms of the frequency of barrier failure and percentage change to peak flow). Multi-peaked flood events are a more effective test of the resilience of a system intended to provide dynamic storage that can be reused on consecutive events, and it is this kind of event that has often resulted in more severe impacts. We discuss the findings in terms of the risk reduction (quantified as percentage peak flow reduction) and the residual risk achieved by the systems of NFM features under different configurations, and how the idealised cases may help inform analysis of real NFM systems. In Sect. 4, the model is applied to the real system of leaky barriers in Penny Gill, West Cumbria, and conclusions are drawn about more effective designs and placement.
There are a number of studies documenting the benefits of beaver dams in terms of habitat improvement, peak flow attenuation and water quality improvements (Puttock et al., 2017, 2018), so it is natural to try and emulate these types of benefits artificially. However, we should also study what happens in nature when things go wrong. Structural failure of natural beaver dams has been reported as occurring frequently by Butler and Malanson (2005), citing numerous cases of dam failure that resulted in outburst floods. These floods have reportedly been “responsible for 13 deaths and numerous injuries, including significant impacts on railway lines”. Engineered NFM measures are likely to be more robust than beaver dams (contingent on maintenance in the longer term), but the relative risks of different configurations, positioning in relation to geometry, slope and proximity to each other, and build design need a mechanism for appraisal. The intention is to help design safer and lower-risk configurations of NFM, which is seen as a potentially low-cost complement to conventional flood risk management strategies.
Failure of beaver dam structures in the US has been reasonably well documented (Hillman, 1998; Butler and Malanson, 2005), and to the authors' knowledge there have been two records (Tom Nisbet, personal communication, 2018) of leaky barrier performance failure at Pickering, UK, after two large flood events. The first flood event in November 2012 resulted in the washout of one of the larger dams on the main Pickering Beck and a shift to the edge/bank of a second dam below this. These features were relatively tall structures and located within a straightened section of channel alongside a railway line, with limited floodplain storage. The logs from the failed dam were caught within the downstream reach between that and a third dam downstream. The failed and shifted dams, plus one other, were found to be deflecting flows into the river bank, causing some local scouring and placing a local railway at risk of undercutting, so they were removed in 2014 and replaced with five new dams on a better reach downstream. The second failure event occurred during the UK Boxing Day floods of 2015, when a total of 11 dams were damaged, all involving a shift/deflection in the dam by edge scour or loss/breakage of top logs, rather than a complete washout. These losses all involved the original, more natural design of cross logs used to construct dams in 2010/11, with no wiring used to secure logs in place. All of these have since been replaced using the now favoured semi-engineered design of horizontal stacked logs secured by wiring (a design also used in Penny Gill; see Fig. 1). Additionally, Addy and Wilkinson (2016) report the complete failure of one “engineered log jam” structure during a 10 % annual exceedance probability (AEP) event, albeit a structure designed to trap sediment.
Siting, construction and improvements in engineering design are therefore important, and recent research (Dixon and Sear, 2014) shows that logs 2.5 times the channel width provide “near functional immobility”, i.e. they are unlikely to be transported in an extreme event. Such design construction “rules of thumb” can be very useful, but cannot always describe the complexity of the whole-system response, which can be very place-specific, driving the need for a network model that can be rapidly set up to test different situations of the kind described here.
In this paper we explore network issues affecting the three performance issues categorised above, particularly with respect to the spatial configuration of leaky barriers in a network, where each barrier has a probability of failure defined by a fragility curve, an approach commonly used in the systems approach to quantifying flood risk (Hall et al., 2003). The probability of failure is very difficult to define for the range of constructions being implemented, and how it varies with age, decay and sedimentation is not known. Thus we attempt to understand what aspects of geometry, slope and proximity offer the best trade-offs for a given reasonable assumption about fragility. We later translate this back to the real-world example of Penny Gill, Cumbria, and consider the implications for spacing and siting.
We begin by setting up a network model for an arbitrary stream network, breaking the stream up into segments that may each potentially contain a leaky barrier designed to attenuate high flows (such features are often referred to generically as runoff attenuation features). Our aim here is to set out a mathematical formulation for the network of features that will enable us to describe and experiment numerically with different configurations of NFM features within a probabilistic analysis. The model is based on a consideration of essential hydraulic principles, with enough simplification to enable solutions to be obtained quickly for idealised cases. Rules for the storage and discharge (flux) in each segment are prescribed based on the slope, stream cross section and roughness. Modifications of these rules to account for the effect of a leaky dam are developed. The model amounts to a series of coupled ordinary differential equations (ODEs) that are solved numerically given prescribed runoff inflow. We then explore solutions for some simple networks forced by idealised flood hydrographs, focussing on the response of the discharge at the downstream end of the network. Finally, we examine the response to failure of the dams, including cascade failure.
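The structure of these coupled ODEs can be sketched numerically in a few lines. The following is a minimal illustrative sketch, not the authors' implementation: a chain of identical reaches with a simplified wide-channel Manning outflow, integrated with an explicit Euler scheme; all parameter values (segment count, length, width, slope, roughness, lateral inflow) are placeholder assumptions.

```python
import math

# All parameter values here are illustrative assumptions, not the paper's.
N = 5            # number of reaches in the chain (as in Fig. 2b)
ELL = 100.0      # segment length (m)
WIDTH = 2.0      # channel width (m)
SLOPE = 0.02     # bed slope
N_MAN = 0.1      # Manning roughness coefficient

def manning_q(area):
    """Simplified Manning discharge for a wide rectangular channel:
    Q = (1/n) * w * h**(5/3) * sqrt(S), with depth h = A / w."""
    if area <= 0.0:
        return 0.0
    h = area / WIDTH
    return (1.0 / N_MAN) * WIDTH * h ** (5.0 / 3.0) * math.sqrt(SLOPE)

def run(t_end=3600.0, dt=1.0, q_lateral=0.05):
    """Explicit-Euler integration of mass conservation on each reach:
    dA_k/dt = (Q_{k-1} - Q_k + q_lateral) / ell,
    where q_lateral is the runoff inflow to each segment (m^3/s).
    Returns the final areas and the peak downstream discharge."""
    A = [0.0] * N
    peak_out = 0.0
    for _ in range(int(t_end / dt)):
        Q = [manning_q(a) for a in A]
        for k in range(N):
            inflow = Q[k - 1] if k > 0 else 0.0
            A[k] += dt * (inflow - Q[k] + q_lateral) / ELL
        peak_out = max(peak_out, Q[-1])
    return A, peak_out
```

Under constant forcing the downstream discharge relaxes towards the sum of the lateral inflows, which provides a quick sanity check on the mass balance.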
We construct a network model in which segments of a channel (“reaches”) are
described in a lumped fashion (Fig. 2). The primary variables are the
average cross-sectional area
Taking
Given the known slope of each channel segment
The relationship depends on the assumed shape of the channel and on a
parameterisation of turbulent flow. If we assume for simplicity that the
channel has a rectangular cross section with fixed width
We take
Flow modes for a leaky barrier.
For the first mode we use the same Manning relationship as given above to
relate
Examples of the relationships between discharge
We expect that the flow through the dam will be small compared to that under and over it (the fact that it is allowed to be leaky makes it easier to construct, but the leakiness between logs is not fundamental to its operation, since there is in any case leakage from underneath the barriers). Thus, for the results presented here, we assume this can be ignored and set the dam permeability coefficient to zero.
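The flow modes can be sketched as a piecewise rating curve. The formulas and coefficients below (a sluice-gate underflow with discharge coefficient c_d and a broad-crested-weir overtopping term with coefficient c_w) are standard textbook forms chosen for illustration; the paper's exact relations and parameter values may differ.

```python
import math

G = 9.81  # gravitational acceleration (m/s^2)

def barrier_discharge(h, w=2.0, slope=0.02, n_man=0.1,
                      z_bot=0.3, z_top=1.0, c_d=0.6, c_w=0.35):
    """Piecewise rating curve for a reach containing a leaky barrier.
    All coefficient values here are illustrative assumptions.

    Mode 1 (h <= z_bot): free flow, Manning's law for a wide channel.
    Mode 2 (z_bot < h <= z_top): underflow through the gap beneath the
      dam, treated as inviscid sluice-gate flow Q = c_d*w*z_bot*sqrt(2gh).
    Mode 3 (h > z_top): mode 2 plus broad-crested weir overtopping.
    Through-dam leakage is neglected (permeability set to zero)."""
    if h <= 0.0:
        return 0.0
    if h <= z_bot:
        return (1.0 / n_man) * w * h ** (5.0 / 3.0) * math.sqrt(slope)
    q = c_d * w * z_bot * math.sqrt(2.0 * G * h)            # sluice underflow
    if h > z_top:
        q += c_w * w * math.sqrt(2.0 * G) * (h - z_top) ** 1.5  # weir term
    return q
```

Note the switch in controlling physics at h = z_bot, from Manning drag to the inviscid sluice formula, which is the source of the discontinuity in the discharge-depth relation discussed in the text.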
We choose scales, denoted by square brackets, such that
In non-dimensional form, and assuming negligible dam permeability, these
are
The parameter
A small value of
One potential concern with the above formulation is the discontinuity in the
discharge–depth relation when the water depth reaches the bottom of the dam
(Fig. 4). This occurs in the model because the physics used to relate the
depth to discharge is different in the two cases of free-stream flow (when
we use Manning's law to describe turbulent drag) and flow under the dam
(when we use an essentially inviscid formula for flow beneath a sluice
gate). Mathematically, provided the discontinuity in flux involves a
reduction as
The system of equations Eq. (1), coupled with the expressions for
In this section we consider a simple example of the model, using the one-dimensional network shown in Fig. 2b. We suppose that each of the channel segments is the same (i.e. equal widths, lengths and slopes), and the discharge in the final segment is of most interest for the community requiring protection. For these calculations (and all others shown in this report) we use the original flux and area formulas (Eqs. 8 and 9), with the enhancement factor described in Eq. (12).
The model is forced with a “storm” input in the form of a hydrograph based
on a simplified Gaussian functional form, as an approximation to a typical
design storm estimated using the unit hydrograph approach (used in the
application to the real case in Sect. 4). Here an extreme flow of 10 m
Solutions for a one-dimensional five-node network as in Fig. 2b,
forced by uniform inflow to each node
Figure 6 shows an example when the input has a double peak. In this case, as might be expected, the dams are less effective at reducing the height of the second peak, because they are already holding back a lot of water and have less capacity to store and delay water for the second storm. This indicates that testing of the performance of NFM, or any risk reduction measures, should consider resilience against real storm series or double peaks and not simply single-peaked storm events, as is commonly assumed in practice in flood storage design analysis.
Solutions for a one-dimensional network as in Fig. 2b, forced by a double-peaked input to each node. Parameter values are as in Fig. 5.
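The idealised forcing can be written as a sum of Gaussian pulses; the peak times, amplitudes and spread below are placeholders rather than the values used for Figs. 5 and 6.

```python
import math

def hydrograph(t, peaks=((3600.0, 10.0), (10800.0, 8.0)), spread=900.0):
    """Idealised inflow: a sum of Gaussian pulses, one per (time, amplitude)
    pair. A single pair gives the single-peaked design storm; two pairs
    give the double-peaked event. All values are illustrative."""
    return sum(a * math.exp(-((t - tc) / spread) ** 2) for tc, a in peaks)
```

The second pulse can be made smaller or closer to the first to probe how much storage remains available for the second peak.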
Here we consider a simple two-dimensional network as shown in Fig. 2c, which reflects a more likely pattern given the dendritic nature of channel formation in headwater catchments. There are more interesting questions to consider about the positioning of dams in this case. For example, if one has funding to build a certain number of dams, which of the channel segments are the best ones on which to put them? Putting them on the central trunk is likely to ensure that they are used (performance issue 1), but also means that they may more easily overspill and lose their effectiveness. They may also be more susceptible to cascade failure (performance issue 3 – discussed in the next section).
In Fig. 7 we show two examples of the response to a flood input of the form given by Eq. (15). In the first case, four dams are placed on the main trunk (nodes 1–4), whereas in the second case four dams are placed on the upper branches (nodes 5, 6, 9, 10). The discharge from the final segment (node 4) is plotted, along with its maximum value. Both dam placements have the effect of slightly delaying and reducing the peak discharge, with the second design being marginally more effective. This is because the dams near the bottom of the central trunk are overspilling and losing their effectiveness, whereas the dams on the side branches are all having a significant effect.
Solutions for the 2D network as in Fig. 2c, forced by uniform
inflow to each of the eight branch nodes
However, for different sized floods or realistic spatial patterns of extreme rainfall (see Hankin et al., 2017a), the optimal arrangement can vary. Unfortunately, there does not appear to be a clear rule for the most effective dam placement, even in this simple example. In Hankin et al. (2017a), the resilience of distributed NFM in terms of temporary storage and tree planting was tested against different storm extremes having spatially realistic patterns (Lamb et al., 2010). In that study, the on-average performance of one particular system of NFM was tested, allowing for utilisation and the risk-reducing or risk-increasing impacts of changes to tributary synchronisation (performance issue 2), using average annual losses as the integrated measure of risk reduction. However, the high-resolution model, with 180 million cells, took over 26 h to run, so only 30 extreme events were simulated with and without NFM measures, and
One of the potential risks of installing many dams in a catchment is the possibility that they all collapse in sequence, creating a flood surge that is larger than would have occurred if no dams had been installed at all. Provided each dam stores only a small reservoir of water, the collapse of one dam on its own should not be catastrophic. But if the collapse of one dam causes others further downstream to collapse too, there is the obvious danger of the surge escalating. This risk may be an important factor in deciding the best placement of dams (perhaps outweighing the efficiency of peak-flow reduction under “normal” operating conditions).
The main method suggested for analysing this risk is to run an ensemble of simulations of flood events, assigning a failure depth to each dam using the probability distribution suggested by a fragility curve. Ideally, this ensemble should include a range of storm conditions too. Such analysis could in principle use dynamical weather models or rainfall records to construct statistical models and then sample from the modelled joint (spatial) distributions of rainfall forcing. Both approaches have been considered in the context of reviewing flood resilience in the UK (HM Government, 2016) and for flood risk analysis over large and complex infrastructure networks (Lamb et al., 2019). This type of spatially structured risk analysis can be expensive, and so it may be desirable to establish some rules of thumb about which dam placements are more, or less, at risk of cascade failure.
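Assigning a failure depth to each dam from a fragility curve amounts to inverse-transform sampling. The sketch below assumes a lognormal fragility purely for illustration (median failure depth 0.8 m, logarithmic standard deviation 0.25); the form of the curve and its parameters are assumptions, not values from the paper.

```python
import math
import random

def sample_failure_depth(rng, median=0.8, beta=0.25):
    """Draw a critical water depth from an assumed lognormal fragility
    curve, P(failure | depth d) = Phi(ln(d / median) / beta).
    Inverse-transform sampling gives d_crit = median * exp(beta * z),
    with z a standard normal deviate."""
    z = rng.gauss(0.0, 1.0)
    return median * math.exp(beta * z)

def build_ensemble(n_members, n_dams, seed=0):
    """Assign an independent critical depth to each dam in each ensemble
    member, as in the cascade-failure experiments described in the text."""
    rng = random.Random(seed)
    return [[sample_failure_depth(rng) for _ in range(n_dams)]
            for _ in range(n_members)]
```

Each ensemble member is then a complete system state: running the hydraulic model once per member and recording the downstream peak yields scatter plots of the kind shown in Fig. 8.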
Knowledge gained from this type of analysis might be used to plan for the size and strength of dams that should be built at different locations. For example, it could be that certain locations are particularly prone to collapse (downstream of merging tributaries for example), and building one stronger “buffering” dam could significantly reduce the risk of a cascade.
As an example of cascade failure in the network model, we return to the
one-dimensional example shown in Fig. 2b. We impose a regular storm inflow
to the upstream node of the form given in Eq. (15) and examine an ensemble of 50 possible system states (describing different combinations of survival or
failure of the individual dams). Each of the dams is assigned a critical
water depth at which it fails, sampled from an assumed fragility distribution.
Maximum discharge at the downstream node for an ensemble of runs (indexed along abscissa) on the one-dimensional network in Fig. 2b, forced by the same upstream inflow
Example from an ensemble of runs on the one-dimensional network in
Fig. 2b, forced by the same upstream inflow
Figure 8 helps illustrate whole-system resilience across a wide range of
events and is coloured by the number of dams that are predicted to collapse
within each ensemble member; small blue dots indicate that no dams failed,
and since the forcing is identical in each case, the peak discharge in this
case is always the same. It is lower than what the peak would have been in
the absence of any dams, so the dams are proving effective in these cases.
Larger dots correspond to more dams having failed. In most of the ensemble
members only one dam collapses, and the peak discharge recorded downstream
is strongly dependent on which one fails (the red dots in Fig. 8). A larger peak occurs if the collapsed dam is further downstream, since if an upstream dam fails (and importantly does not cause further failures), the surge it releases can still be attenuated by the surviving dams downstream.
If, on the other hand, a single dam failure leads to further collapse of two or more dams, the peak discharge can be much larger. In one example, all five dams collapse in quick succession, and the time series of this example (the black dot in Fig. 8) is shown in Fig. 9, where it is compared to an example with no failure. We have found that the pattern of failure in this one-dimensional model, including the likelihood for cascading failure, depends heavily on the assumed dam sizes, critical water depth distribution and magnitude of rainfall events.
As a second more instructive example of cascade failure, we revisit the herringbone network in Fig. 2c. We consider the two possible placements of four dams that were discussed earlier: either along the main trunk (nodes 1–4) or on the upstream side branches (nodes 5, 6, 9, 10). In Fig. 7 we found that there was relatively little difference in the peak discharge measured at the downstream node with these different placements. However, in Fig. 10 we see that the first case is much more at risk from cascade failure, and the whole system of dams is not resilient in this spatial configuration. This figure shows the peak downstream discharge in an ensemble of simulations, with the failure depths for each dam being different each time. The failure depths are sampled from the same distribution in each case.
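The contrast between trunk and branch placements can be illustrated with a deliberately crude cascade rule, far simpler than the hydraulic model used here: each failure adds a fixed surge increment to the depth at every dam downstream of it, and a dam fails when its depth exceeds its critical depth. The depths, critical depths and surge increment below are all illustrative assumptions.

```python
def cascade(downstream, base_depth, crit_depth, surge=0.3):
    """Toy cascade propagation: a dam fails when the depth at it exceeds
    its critical depth; each failure adds a fixed surge increment to the
    depth at every dam downstream of it. downstream[i] lists the dams
    downstream of dam i. Returns the set of failed dams."""
    depth = list(base_depth)
    failed = set()
    changed = True
    while changed:
        changed = False
        for i, d in enumerate(depth):
            if i not in failed and d > crit_depth[i]:
                failed.add(i)
                changed = True
                for j in downstream[i]:
                    depth[j] += surge
    return failed

# Four dams on the main trunk: each dam's failure loads every dam below it.
trunk = {0: [1, 2, 3], 1: [2, 3], 2: [3], 3: []}
# Four dams on separate side branches: no dam is downstream of another.
branches = {0: [], 1: [], 2: [], 3: []}

base = [0.9, 0.7, 0.7, 0.7]   # only dam 0 is initially overloaded
crit = [0.8, 0.9, 0.9, 0.9]   # illustrative critical depths
```

With these numbers a single overloaded dam brings down the whole trunk chain, whereas on independent branches the same trigger stops at one failure, which is the qualitative behaviour seen in Fig. 10.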
Maximum discharge at the downstream node for an ensemble of runs (indexed on abscissa) on the two-dimensional network in Fig. 2c, forced by uniform inflow to each of the eight branch nodes
In almost every run with dams on the trunk, we see cascade failure occurring
so that three or four of the dams collapse; this leads to extremely high (though
short-lived) peak discharge. In contrast, when the dams are placed on the
side branches, there is no possibility of cascade failure (the individual
branches do not communicate with each other), and it is unlikely that more
than one dam collapses. Thus, having used the model to consider the
resilience of the whole system of barriers for two cases, it can be seen
that although each of these dam placements is similarly effective at
reducing the peak discharge, there may be a strong reason for preferring the
second design that places them on the upstream tributaries because this
configuration is a more resilient system (even though the resilience of the
individual dams is the same in both designs). The large surges predicted in
the simple network model when multiple dams fail have some support in the
literature. For example, Hillman (1998) describes a June 1994 outburst flood
in central Alberta, Canada, releasing 7500 m
We consider the application of the model in a sensitivity-to-change investigation at a site on Penny Gill, West Cumbria (Fig. 1). The geometry of the network of leaky dams installed by the West Cumbria Rivers Trust is shown in Fig. 11a, with the inflow hydrograph in panel (b). The inflow hydrograph has this time been based on a 100-year-return-period design hydrograph (a synthetic hydrograph with estimated annual exceedance probability) using the Revitalised Flood Hydrograph model, ReFH version 1 (Kjeldsen, 2007), which is based on a unit hydrograph approach assuming empirical relationships with local catchment descriptors such as slope and annual average rainfall from the Flood Estimation Handbook (Institute of Hydrology, 1999). ReFH also includes a loss model that accounts for the hydrology of soil types and gives the hydrograph, in this instance, a slight tail due to the slower baseflow contribution. It should be noted that the peak flow for the 100-year design event is relatively small but is likely to be underestimated owing to contributions from old coal measures that are known to generate additional flows during times of prolonged rainfall.
Model run using the given measured dam locations and bottom height of 0.02 m. Note that each segment has a different width, which is not shown on the diagram, but which can lead to different rates of filling of the region behind different dams. We take a Manning roughness of 0.1 s m
The model is as described in Sect. 2, with mass conservation for each network given by Eq. (1) as before, although in this example water is all fed into the uppermost segment of the network according to the input hydrograph
A model calculation using the measured values for the dam parameters (bottom
height
We can use the network model to quickly explore alternative arrangements of
the dams and examine whether there are general rules about how to site the
dams that could lead to more efficiency. To do this we break the stream into
20 segments, and we allow for the possible siting of a dam on each one. All
such dams are assumed identical, with bottom height
Model run with more leaky dams. The peak discharge is reduced to 96 % of the inflow, and the maximum volume stored is 430 m
We also consider randomly siting eight dams on the 20 segments, to analyse which positions work well. Examples of some of these are shown in Fig. 14. Some are considerably better at reducing the peak flow than others (though none are particularly good – simply because there is not enough storage overall to reduce the peak discharge substantially).
Example model runs with random arrangements of eight dams. Panel
Whilst storage may be improved well above that for the real system
(355 m
Model run with eight dams, chosen to be sited on segments with lower slopes. The peak discharge is reduced to 97 % of the inflow, and the maximum volume stored is 457 m
What insights can we learn from this? A good general principle is to site
the barriers in low-lying areas or regions where the upstream area widens,
so as to provide more storage per barrier. The bottom height of the barrier
should be made sufficiently high that it does not start to dam water too
early; if the dam has filled before peak inflow conditions, it serves no
further useful purpose in reducing the flow downstream. Dams in locations
with a large backwater region (which is characterised by the local value of
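The "storage per barrier" principle can be made quantitative with a back-of-envelope wedge estimate, an illustrative simplification rather than the model's calculation: assuming a level water surface ponded behind the dam over a uniform bed slope, the backwater wedge extends a distance dam_height/slope upstream.

```python
def backwater_volume(width, dam_height, slope):
    """Back-of-envelope storage behind a barrier: assuming a level water
    surface behind the dam and a uniform bed slope S, the backwater wedge
    extends dam_height / S upstream, giving
    V ~ width * dam_height**2 / (2 * S)."""
    return width * dam_height ** 2 / (2.0 * slope)
```

The quadratic dependence on dam height and inverse dependence on slope are why wide, low-slope segments are the preferred sites: halving the slope doubles the estimated storage for the same barrier.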
Such advice could be used at a much larger scale to supplement the relatively scarce guidance on where to locate leaky barriers, which also tends not to include details of geometries, leakiness or slot heights. For example, in England and Wales, countryside stewardship grants can be applied for to support construction of leaky barriers, and the website (UK Government, 2017) advises that leaky barriers should be sited on channels between 3 and 5 m wide, yet says nothing about slope, height or slot dimensions, which could be more of a determining factor in the effectiveness of potential storage. The dimensions of the slot height are also not clear and vary in the grey literature between 0.1 and 0.3 m, but in practice slot heights are less than this (on average across a cross section) in locations like Penny Gill. A compounding factor is that there are many forms of leaky barrier or large woody debris dams (Addy et al., 2019), including large woody debris placed in the channel, the horse-jump-type barriers in use in Penny Gill, and engineered log jams, which can also reduce the passage of debris should a structure fail and enhance floodplain reconnection. These different designs all need to consider the trade-offs in barrier design (slot height and leakiness, both of which can be adjusted in the model reported here) against ecological impacts such as fish passage; for example, a narrow slot might improve flood attenuation but make passage more difficult.
However, general advice on design can oversimplify, and the final example has demonstrated that this type of network model can be used effectively to rapidly test different arrangements of dams and to assess which are likely to work best to reduce risk given the unpredictability of the whole-system response. Only one particular input hydrograph has been used here, and a more thorough analysis ought to consider different amplitudes and shapes and multiple peaks, since they are likely to influence the effectiveness of the whole scheme. It would also be useful to test further failure scenarios, although the simplified 1D representation does not include logjams that were also placed between some of the dams, which would help mitigate the risk of cascade failure explored in Sect. 3. In summary, the network analysis has demonstrated how the effectiveness of the system of leaky barriers was quantified overall using the integrating measure of percentage peak flow reduction at the bottom of the network. The approach accounts for additional storage volumes put in place, but also how it is utilised dynamically, something that will also vary across the system depending on the spatial and temporal pattern of runoff inputs, slot dimensions and leakiness.
We have formulated a network model for a catchment that allows simple exploration of the effectiveness of different dam placements and designs, and that is sufficiently cheap to solve that it may be useful in analysing risks requiring a large ensemble of simulations. We have applied the model to relatively small idealised and real systems of around 10–20 dams, but its computational simplicity means that it would readily scale to much larger systems at little additional cost. Based on the analyses presented, we draw four practical conclusions, focussing on the three performance issues highlighted: utilisation, synchronisation, and failure or cascade failure.
1. Based on scale analysis alone, a large number of dams are needed to have any significant effect on the peak discharge downstream, especially in reaches with a steep gradient.

2. When estimating storage requirements, it is not sufficient simply to compare the total storage capacity of a set of NFM measures distributed around the network with the volume under the hydrograph. Network analysis is required, which also permits assessment of the integrated impact on overall risk reduction of dynamic utilisation of storage, drain-down between events (tested by simulating multi-peaked events) and changes to flood-peak synchronicity. We have measured this as the reduction in the peak flow hydrograph at the bottom of the network between the pre- and post-NFM situations, providing an integrated measure of the effectiveness of the system of NFM features.

3. Dams should be located where they have the potential to store a reasonable volume of water (in wide reaches of the channel), while ensuring that the loading on each structure is not excessive. Using the network model for the real-world example at Penny Gill, we have shown that locating dams on wide, low-slope reaches is more effective, and that it is worth building the dams higher, and correspondingly stronger, to avoid the risk of failure. These conclusions on placement help to establish whether there is any benefit in placing dams strategically, seeking an optimal network configuration, or whether installing them opportunistically, or even randomly, may be justifiable. The analysis indicates that, for the relatively simple system at Penny Gill, when considering up to 20 potential dam sites, approximately 50 % of the effort in construction, costs and later maintenance could be saved if fewer dams are placed more selectively. It remains to be seen whether there are any broader advantages at larger scales.

4. Cascade failure is a risk when dams are placed along a main artery, and this risk may be lessened by spreading the dams around the tributaries. There remain very large uncertainties in the fragility assumptions leading to failure.
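The cascade mechanism behind the last conclusion can be illustrated with a deliberately simple Monte Carlo sketch, written in Python for portability (the study's model is in MATLAB). It uses a toy fragility rule — each dam fails with probability equal to its load, and a failed in-series dam surcharges every dam downstream — and is not the probabilistic fragility model of Sect. 3; all parameter values are hypothetical.

```python
import random

def cascade_trial(n_dams, in_series, base_load=0.6, surcharge=0.25, rng=random):
    """One Monte Carlo trial of dam failures under a high flow.
    Each dam fails with probability equal to its load (a toy fragility
    curve, clipped to [0, 1]).  If the dams sit in series on the main
    trunk, each failure passes its released storage downstream, adding
    `surcharge` to the loads of all dams below it; on separate
    tributaries the dams fail independently.  Returns failure count."""
    failures = 0
    extra = 0.0
    for _ in range(n_dams):
        load = base_load + (extra if in_series else 0.0)
        if rng.random() < min(max(load, 0.0), 1.0):
            failures += 1
            extra += surcharge  # released storage loads the next dam down
    return failures

def expected_failures(n_dams, in_series, trials=20000, seed=1):
    rng = random.Random(seed)
    return sum(cascade_trial(n_dams, in_series, rng=rng)
               for _ in range(trials)) / trials

print("mean failures, 8 dams on main trunk :", expected_failures(8, True))
print("mean failures, 8 dams on tributaries:", expected_failures(8, False))
```

Even with identical fragility, the in-series arrangement produces more expected failures, because each breach raises the load on every structure below it; distributing the same dams around tributaries breaks that coupling.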
We envisage future risk assessments using this network approach at larger
scales, taking into account additional factors including uncertainties in
geometry, roughness parameterisation, spacing, fragility assumptions, a wide
range of spatial configurations of NFM measures and a wider range of
feasible storm types, durations and probabilities. These are all required
not just for NFM, but also for improved integrated flood risk management, if
we are to answer the types of simple questions that communities ask, such
as "with a limited budget, what's the best approach for
integrated flood risk management?” or “does spatial configuration even
matter at a larger scale?” We hope our conclusions here start to address
such questions, but future analyses would also be better constrained with a
more detailed understanding of the fragility of different types of barriers.
More formal fragility curves can be directly generated (Lamb et al., 2019)
based on analysis of observations of survival and failure of dams, if such
data are recorded for the growing number of leaky barriers.
The MATLAB scripts developed are available on the JBA Trust GitLab repository:
BH wrote the paper with major contributions from IH and RL. All authors helped in developing the model at the Maths Foresees Study Group.
The authors declare that they have no conflict of interest.
This work has been supported by NERC grant NE/R004722/1, the EPSRC Maths Foresees network and the JBA Trust project W17-6962. The model was initially developed at the EPSRC-funded “Maths Foresees” Study Group in Cambridge, April 2017, where this particular challenge was sponsored by the JBA Trust. The problem was worked on by Barry Hankin, Ian Hewitt, Graham Sander, Sheen Cabaneros, Federico Danieli, Giuseppe Formetta, Raquel Gonzalez, Michael Grinfeld, Teague Johnstone, Alissa Kamilova, Attila Kovacs, Ann Kretzschmar, Kris Kiradjiev, Sam Pegler and Clint Wong.
We are grateful to Onno Bokhove for introducing the problem to the study group. We are also grateful to the West Cumbria Rivers Trust for access to Penny Gill, along with MSc student Luke Stockton for assistance surveying the leaky barriers.
This research has been supported by the NERC (grant no. NE/R004722/1) and the EPSRC (grant no. LWEC).
This paper was edited by Bruno Merz and reviewed by Paul Quinn and one anonymous referee.